Linux A Future

With TPP's Internet trap, you could face a fine just for clicking on the wrong link.

Industry lobbyists and unelected trade representatives are meeting in secret to draft the Trans-Pacific Partnership, a devastating new trade deal. Learn more about the Internet trap before it's too late.
Curated by Jan Bergmans

Rejoice, Penguinistas, Linux 4.4 is upon us

Emperor Penguin Linus Torvalds announced the release on Sunday evening, US time.

What's new this time around? Support for GPUs seems to be the headline item, with plenty of new drivers and hooks for AMD kit. Perhaps most notable is the adoption of the Virgil 3D project, which makes it possible to parcel up virtual GPUs. With virtual Linux desktops now on offer from Citrix and VMware, those who want to deliver virtual desktops with workstation-esque graphics capabilities have their on-ramp to Penguin heaven.

Raspberry Pi owners also have better graphics to look forward to, thanks to a new Pi KMS driver that will be updated with acceleration code in future releases.

There's also better 64-bit ARM support and fixes for memory leaks on Intel's Skylake CPUs.

Torvalds also says the new release caught a recent problem, by “unbreaking the x86-32 'sysenter' ABI, when somebody (*cough*android-x86*cough*) misused it by not using the vdso and instead using the instruction directly.”

It will, of course, be months before the new kernel pops up in a majority of production Linux rigs. But it's out there for those who want it. And Torvalds is of course letting world+dog know he's about to start work on version 4.5. ®

Linux Performance Analysis in 60,000 Milliseconds

First 60 Seconds: Summary

In 60 seconds you can get a high level idea of system resource usage and running processes by running the following ten commands. Look for errors and saturation metrics, as they are both easy to interpret, and then resource utilization. Saturation is where a resource has more load than it can handle, and can be exposed either as the length of a request queue, or time spent waiting.

uptime
dmesg | tail
vmstat 1
mpstat -P ALL 1
pidstat 1
iostat -xz 1
free -m
sar -n DEV 1
sar -n TCP,ETCP 1
top

Some of these commands require the sysstat package to be installed. The metrics these commands expose will help you complete some of the USE Method: a methodology for locating performance bottlenecks. This involves checking utilization, saturation, and error metrics for all resources (CPUs, memory, disks, etc.). Also pay attention to when you have checked and exonerated a resource, as by process of elimination this narrows the targets to study, and directs any follow-on investigation.

The following sections summarize these commands, with examples from a production system. For more information about these tools, see their man pages.
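The ten commands above can be wrapped in a quick sweep script. This is a minimal sketch, not part of the original article: the bounded sample counts ("1 2" = one-second interval, two samples) and the skip-if-missing behavior are my own additions, so that a single run finishes quickly and tolerates hosts where sysstat is absent.

```shell
# Run the 60-second checklist in order; skip any tool that isn't installed.
sweep() {
    printf '%s\n' \
        'uptime' \
        'dmesg | tail' \
        'vmstat 1 2' \
        'mpstat -P ALL 1 2' \
        'pidstat 1 2' \
        'iostat -xz 1 2' \
        'free -m' \
        'sar -n DEV 1 2' \
        'sar -n TCP,ETCP 1 2' \
        'top -b -n 1' |
    while IFS= read -r cmd; do
        tool=${cmd%% *}                    # first word, e.g. "vmstat"
        if command -v "$tool" >/dev/null 2>&1; then
            echo "== $cmd =="
            sh -c "$cmd" 2>&1 | tail -n 20 # keep the transcript short
        else
            echo "== $cmd == (skipped: $tool not installed)"
        fi
    done
}
sweep
```

Each section header is printed even when a tool is skipped, so the transcript doubles as a record of what was and wasn't checked.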
1. uptime

$ uptime
23:51:26 up 21:31, 1 user, load average: 30.02, 26.43, 19.02

This is a quick way to view the load averages, which indicate the number of tasks (processes) wanting to run. On Linux systems, these numbers include processes wanting to run on CPU, as well as processes blocked in uninterruptible I/O (usually disk I/O). This gives a high level idea of resource load (or demand), but can’t be properly understood without other tools. Worth a quick look only.

The three numbers are exponentially damped moving sum averages with a 1 minute, 5 minute, and 15 minute constant. The three numbers give us some idea of how load is changing over time. For example, if you’ve been asked to check a problem server, and the 1 minute value is much lower than the 15 minute value, then you might have logged in too late and missed the issue.

In the example above, the load averages show a recent increase, hitting 30 for the 1 minute value, compared to 19 for the 15 minute value. Numbers this large mean a lot of something: probably CPU demand; vmstat and mpstat, commands 3 and 4 in this sequence, will confirm.
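The damping can be made concrete with a small worked example. This is a hypothetical sketch of the mechanism (the starting value 19 and demand 30 mirror the example above, but the fixed 5-second update step is a simplification of what the kernel actually does):

```shell
# Illustrative 1-minute load-average damping: roughly every 5 seconds the
# old value is folded together with the instantaneous task count n using
# an exponential weight e^(-5/60).
demo=$(awk 'BEGIN {
    e = exp(-5.0 / 60)               # per-5-second decay, 1-minute constant
    load = 19; n = 30                # old average 19, constant demand of 30
    for (t = 5; t <= 60; t += 5)     # twelve 5-second ticks = one minute
        load = load * e + n * (1 - e)
    printf "load after 60s of n=30: %.1f", load
}')
echo "$demo"
```

Even after a full minute of constant demand from 30 tasks, the damped average has only climbed to about 26, which is why a 1-minute value well above the 15-minute value signals a very recent surge.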
2. dmesg | tail

$ dmesg | tail
[1880957.563150] perl invoked oom-killer: gfp_mask=0x280da, order=0, oom_score_adj=0
[1880957.563400] Out of memory: Kill process 18694 (perl) score 246 or sacrifice child
[1880957.563408] Killed process 18694 (perl) total-vm:1972392kB, anon-rss:1953348kB, file-rss:0kB
[2320864.954447] TCP: Possible SYN flooding on port 7001. Dropping request. Check SNMP counters.

This views the last 10 system messages, if there are any. Look for errors that can cause performance issues. The example above includes the oom-killer, and TCP dropping a request.

Don’t miss this step! dmesg is always worth checking.
3. vmstat 1

$ vmstat 1
procs ---------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
34 0 0 200889792 73708 591828 0 0 0 5 6 10 96 1 3 0 0
32 0 0 200889920 73708 591860 0 0 0 592 13284 4282 98 1 1 0 0
32 0 0 200890112 73708 591860 0 0 0 0 9501 2154 99 1 0 0 0
32 0 0 200889568 73712 591856 0 0 0 48 11900 2459 99 0 0 0 0
32 0 0 200890208 73712 591860 0 0 0 0 15898 4840 98 1 1 0 0

Short for virtual memory stat, vmstat(8) is a commonly available tool (first created for BSD decades ago). It prints a summary of key server statistics on each line.

vmstat was run with an argument of 1, to print one second summaries. The first line of output (in this version of vmstat) has some columns that show the average since boot, instead of the previous second. For now, skip the first line, unless you want to learn and remember which column is which.

Columns to check:

r: Number of processes running on CPU and waiting for a turn. This provides a better signal than load averages for determining CPU saturation, as it does not include I/O. To interpret: an “r” value greater than the CPU count is saturation.
free: Free memory in kilobytes. If there are too many digits to count, you have enough free memory. The “free -m” command, included as command 7, better explains the state of free memory.
si, so: Swap-ins and swap-outs. If these are non-zero, you’re out of memory.
us, sy, id, wa, st: These are breakdowns of CPU time, on average across all CPUs. They are user time, system time (kernel), idle, wait I/O, and stolen time (by other guests, or with Xen, the guest's own isolated driver domain).

The CPU time breakdowns will confirm if the CPUs are busy, by adding user + system time. A constant degree of wait I/O points to a disk bottleneck; this is where the CPUs are idle, because tasks are blocked waiting for pending disk I/O. You can treat wait I/O as another form of CPU idle, one that gives a clue as to why they are idle.

System time is necessary for I/O processing. A high system time average, over 20%, can be interesting to explore further: perhaps the kernel is processing the I/O inefficiently.

In the above example, CPU time is almost entirely in user-level, pointing to application level usage instead. The CPUs are also well over 90% utilized on average. This isn’t necessarily a problem; check for the degree of saturation using the “r” column.
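That check can be automated. A minimal sketch, assuming 32 CPUs (the count shown in the mpstat header below) and using two data lines from the sample above:

```shell
# Flag CPU saturation: vmstat column 1 ("r", runnable tasks) greater than
# the CPU count means tasks are queueing for CPU.
ncpu=32
sample='34 0 0 200889792 73708 591828 0 0 0 5 6 10 96 1 3 0 0
32 0 0 200889920 73708 591860 0 0 0 592 13284 4282 98 1 1 0 0'
verdict=$(printf '%s\n' "$sample" | awk -v n="$ncpu" \
    '{ if ($1 > n) sat++ } END { print (sat ? "saturated" : "ok") }')
echo "CPU: $verdict"
```

Here the first interval shows r=34 against 32 CPUs, so the box is mildly saturated as well as busy.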
4. mpstat -P ALL 1

$ mpstat -P ALL 1
Linux 3.13.0-49-generic (titanclusters-xxxxx) 07/14/2015 _x86_64_ (32 CPU)
07:38:49 PM CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle
07:38:50 PM all 98.47 0.00 0.75 0.00 0.00 0.00 0.00 0.00 0.00 0.78
07:38:50 PM 0 96.04 0.00 2.97 0.00 0.00 0.00 0.00 0.00 0.00 0.99
07:38:50 PM 1 97.00 0.00 1.00 0.00 0.00 0.00 0.00 0.00 0.00 2.00
07:38:50 PM 2 98.00 0.00 1.00 0.00 0.00 0.00 0.00 0.00 0.00 1.00
07:38:50 PM 3 96.97 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 3.03

This command prints CPU time breakdowns per CPU, which can be used to check for an imbalance. A single hot CPU can be evidence of a single-threaded application.
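One way to quantify imbalance is the spread between the hottest and coolest CPU. A sketch over a trimmed copy of the per-CPU sample above (%usr is field 4 after the two-part timestamp):

```shell
# Per-CPU imbalance: max(%usr) - min(%usr) across the sampled CPUs.
sample='07:38:50 PM 0 96.04 0.00 2.97
07:38:50 PM 1 97.00 0.00 1.00
07:38:50 PM 2 98.00 0.00 1.00
07:38:50 PM 3 96.97 0.00 0.00'
spread=$(printf '%s\n' "$sample" | awk '
    NR == 1 { max = min = $4 }
    { if ($4 > max) max = $4; if ($4 < min) min = $4 }
    END { printf "%.2f", max - min }')
echo "per-CPU %usr spread: $spread"
```

A spread of under 2% like this one is well balanced; a single CPU tens of percent hotter than the rest would suggest a single-threaded bottleneck.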
5. pidstat 1

$ pidstat 1
Linux 3.13.0-49-generic (titanclusters-xxxxx) 07/14/2015 _x86_64_ (32 CPU)
07:41:02 PM UID PID %usr %system %guest %CPU CPU Command
07:41:03 PM 0 9 0.00 0.94 0.00 0.94 1 rcuos/0
07:41:03 PM 0 4214 5.66 5.66 0.00 11.32 15 mesos-slave
07:41:03 PM 0 4354 0.94 0.94 0.00 1.89 8 java
07:41:03 PM 0 6521 1596.23 1.89 0.00 1598.11 27 java
07:41:03 PM 0 6564 1571.70 7.55 0.00 1579.25 28 java
07:41:03 PM 60004 60154 0.94 4.72 0.00 5.66 9 pidstat
07:41:03 PM UID PID %usr %system %guest %CPU CPU Command
07:41:04 PM 0 4214 6.00 2.00 0.00 8.00 15 mesos-slave
07:41:04 PM 0 6521 1590.00 1.00 0.00 1591.00 27 java
07:41:04 PM 0 6564 1573.00 10.00 0.00 1583.00 28 java
07:41:04 PM 108 6718 1.00 0.00 0.00 1.00 0 snmp-pass
07:41:04 PM 60004 60154 1.00 4.00 0.00 5.00 9 pidstat

Pidstat is a little like top’s per-process summary, but prints a rolling summary instead of clearing the screen. This can be useful for watching patterns over time, and also recording what you saw (copy-n-paste) into a record of your investigation.

The above example identifies two java processes as responsible for consuming CPU. The %CPU column is the total across all CPUs; 1591% shows that that java process is consuming almost 16 CPUs.
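The per-process lines can be rolled up by command name to see which program owns the CPU time. A sketch over a trimmed copy of the second sample interval above (fields reduced to UID, PID, %usr, %system, %guest, %CPU, CPU, Command):

```shell
# Sum pidstat %CPU (field 6 in this trimmed layout) per command (field 8).
sample='0 4214 6.00 2.00 0.00 8.00 15 mesos-slave
0 6521 1590.00 1.00 0.00 1591.00 27 java
0 6564 1573.00 10.00 0.00 1583.00 28 java'
totals=$(printf '%s\n' "$sample" | awk '
    { cpu[$8] += $6 }
    END { for (c in cpu) printf "%s %.0f\n", c, cpu[c] }' | sort)
echo "$totals"
```

Here the two java processes total about 3174%, i.e. nearly all 32 CPUs' worth between them.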
6. iostat -xz 1

$ iostat -xz 1
Linux 3.13.0-49-generic (titanclusters-xxxxx) 07/14/2015 _x86_64_ (32 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
73.96 0.00 3.73 0.03 0.06 22.21
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
xvda 0.00 0.23 0.21 0.18 4.52 2.08 34.37 0.00 9.98 13.80 5.42 2.44 0.09
xvdb 0.01 0.00 1.02 8.94 127.97 598.53 145.79 0.00 0.43 1.78 0.28 0.25 0.25
xvdc 0.01 0.00 1.02 8.86 127.79 595.94 146.50 0.00 0.45 1.82 0.30 0.27 0.26
dm-0 0.00 0.00 0.69 2.32 10.47 31.69 28.01 0.01 3.23 0.71 3.98 0.13 0.04
dm-1 0.00 0.00 0.00 0.94 0.01 3.78 8.00 0.33 345.84 0.04 346.81 0.01 0.00
dm-2 0.00 0.00 0.09 0.07 1.35 0.36 22.50 0.00 2.55 0.23 5.62 1.78 0.03

This is a great tool for understanding block devices (disks), both the workload applied and the resulting performance. Look for:

r/s, w/s, rkB/s, wkB/s: These are the delivered reads, writes, read Kbytes, and write Kbytes per second to the device. Use these for workload characterization. A performance problem may simply be due to an excessive load applied.
await: The average time for the I/O in milliseconds. This is the time that the application suffers, as it includes both time queued and time being serviced. Larger than expected average times can be an indicator of device saturation, or device problems.
avgqu-sz: The average number of requests issued to the device. Values greater than 1 can be evidence of saturation (although devices can typically operate on requests in parallel, especially virtual devices which front multiple back-end disks.)
%util: Device utilization. This is really a busy percent, showing the time each second that the device was doing work. Values greater than 60% typically lead to poor performance (which should be seen in await), although it depends on the device. Values close to 100% usually indicate saturation.

If the storage device is a logical disk device fronting many back-end disks, then 100% utilization may just mean that some I/O is being processed 100% of the time; however, the back-end disks may be far from saturated and may be able to handle much more work.

Bear in mind that poor performing disk I/O isn’t necessarily an application issue. Many techniques are typically used to perform I/O asynchronously, so that the application doesn’t block and suffer the latency directly (e.g., read-ahead for reads, and buffering for writes).
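Those two rules of thumb (await larger than expected, %util above ~60%) can be expressed as a filter. This is a sketch over a trimmed copy of the sample above, keeping only device, await, and %util; the 100 ms await cutoff is my own illustrative threshold, not from the article:

```shell
# Flag devices whose await exceeds 100 ms or whose utilization exceeds 60%.
# Columns in this trimmed sample: device, await(ms), %util.
sample='xvda 9.98 0.09
dm-1 345.84 0.00'
flagged=$(printf '%s\n' "$sample" | awk '
    $2 > 100 || $3 > 60 { print $1 " needs a look (await=" $2 "ms, util=" $3 "%)" }')
echo "$flagged"
```

Note how dm-1 is flagged on await alone: a device can be nearly idle on average yet still serve individual I/O painfully slowly.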
7. free -m

$ free -m
total used free shared buffers cached
Mem: 245998 24545 221453 83 59 541
-/+ buffers/cache: 23944 222053
Swap: 0 0 0

The right two columns show:

buffers: For the buffer cache, used for block device I/O.
cached: For the page cache, used by file systems.

We just want to check that these aren’t near-zero in size, which can lead to higher disk I/O (confirm using iostat), and worse performance. The above example looks fine, with many Mbytes in each.

The “-/+ buffers/cache” provides less confusing values for used and free memory. Linux uses free memory for the caches, but can reclaim it quickly if applications need it. So in a way the cached memory should be included in the free memory column, which this line does. There’s even a website, linuxatemyram, about this confusion.

It can be additionally confusing if ZFS on Linux is used, as we do for some services, as ZFS has its own file system cache that isn’t reflected properly by the free -m columns. It can appear that the system is low on free memory, when that memory is in fact available for use from the ZFS cache as needed.
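The arithmetic behind the "-/+ buffers/cache" line can be checked by hand. A sketch using the numbers from the sample above; note that free -m rounds each row to Mbytes independently, so the recomputed "used" can land a megabyte off the printed 23944:

```shell
# Reconstruct the "-/+ buffers/cache" row from the free -m sample (MB):
# real used = used - buffers - cached; real free = free + buffers + cached.
used=24545 free=221453 buffers=59 cached=541
real_used=$((used - buffers - cached))
real_free=$((free + buffers + cached))
echo "used: $real_used MB, available: $real_free MB"
```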
8. sar -n DEV 1

$ sar -n DEV 1
Linux 3.13.0-49-generic (titanclusters-xxxxx) 07/14/2015 _x86_64_ (32 CPU)
12:16:48 AM IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
12:16:49 AM eth0 18763.00 5032.00 20686.42 478.30 0.00 0.00 0.00 0.00
12:16:49 AM lo 14.00 14.00 1.36 1.36 0.00 0.00 0.00 0.00
12:16:49 AM docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
12:16:49 AM IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
12:16:50 AM eth0 19763.00 5101.00 21999.10 482.56 0.00 0.00 0.00 0.00
12:16:50 AM lo 20.00 20.00 3.25 3.25 0.00 0.00 0.00 0.00
12:16:50 AM docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00

Use this tool to check network interface throughput: rxkB/s and txkB/s, as a measure of workload, and also to check if any limit has been reached. In the above example, eth0 receive is reaching 22 Mbytes/s, which is 176 Mbits/sec (well under, say, a 1 Gbit/sec limit).
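The conversion used above can be sketched as a one-liner; following the article's round numbers, this treats a kilobyte as 1000 bytes when moving to Mbits (sar's kB is strictly 1024 bytes, which would give a slightly higher figure):

```shell
# Convert sar's rxkB/s into an approximate Mbit/s figure for comparison
# against link speeds, using the eth0 value from the sample above.
rxkbs=21999.10
mbits=$(awk -v k="$rxkbs" 'BEGIN { printf "%.0f", k * 8 / 1000 }')
echo "eth0 rx: ~${mbits} Mbit/s"
```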

This version also has %ifutil for device utilization (max of both directions for full duplex), which is something we also use Brendan’s nicstat tool to measure. And like with nicstat, this is hard to get right, and seems to not be working in this example (0.00).
9. sar -n TCP,ETCP 1

$ sar -n TCP,ETCP 1
Linux 3.13.0-49-generic (titanclusters-xxxxx) 07/14/2015 _x86_64_ (32 CPU)
12:17:19 AM active/s passive/s iseg/s oseg/s
12:17:20 AM 1.00 0.00 10233.00 18846.00
12:17:19 AM atmptf/s estres/s retrans/s isegerr/s orsts/s
12:17:20 AM 0.00 0.00 0.00 0.00 0.00
12:17:20 AM active/s passive/s iseg/s oseg/s
12:17:21 AM 1.00 0.00 8359.00 6039.00
12:17:20 AM atmptf/s estres/s retrans/s isegerr/s orsts/s
12:17:21 AM 0.00 0.00 0.00 0.00 0.00

This is a summarized view of some key TCP metrics. These include:

active/s: Number of locally-initiated TCP connections per second (e.g., via connect()).
passive/s: Number of remotely-initiated TCP connections per second (e.g., via accept()).
retrans/s: Number of TCP retransmits per second.

The active and passive counts are often useful as a rough measure of server load: number of new accepted connections (passive), and number of downstream connections (active). It might help to think of active as outbound, and passive as inbound, but this isn’t strictly true (e.g., consider a localhost to localhost connection).

Retransmits are a sign of a network or server issue; it may be an unreliable network (e.g., the public Internet), or it may be due to a server being overloaded and dropping packets. The example above shows just one new TCP connection per second.
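A quick derived metric is the retransmit rate relative to output segments; the idea of watching this ratio is a rule of thumb of mine, not from the article. Using the first interval's counters from the sample above:

```shell
# Retransmit percentage: retrans/s over oseg/s from the sar TCP sample.
oseg=18846.00 retrans=0.00
pct=$(awk -v o="$oseg" -v r="$retrans" 'BEGIN { printf "%.2f", o ? 100 * r / o : 0 }')
echo "retransmit rate: ${pct}%"
```

Anything persistently above a fraction of a percent deserves a closer look at the network path or the server's drop counters.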
10. top

$ top
top - 00:15:40 up 21:56, 1 user, load average: 31.09, 29.87, 29.92
Tasks: 871 total, 1 running, 868 sleeping, 0 stopped, 2 zombie
%Cpu(s): 96.8 us, 0.4 sy, 0.0 ni, 2.7 id, 0.1 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem: 25190241+total, 24921688 used, 22698073+free, 60448 buffers
KiB Swap: 0 total, 0 used, 0 free. 554208 cached Mem
20248 root 20 0 0.227t 0.012t 18748 S 3090 5.2 29812:58 java
4213 root 20 0 2722544 64640 44232 S 23.5 0.0 233:35.37 mesos-slave
66128 titancl+ 20 0 24344 2332 1172 R 1.0 0.0 0:00.07 top
5235 root 20 0 38.227g 547004 49996 S 0.7 0.2 2:02.74 java
4299 root 20 0 20.015g 2.682g 16836 S 0.3 1.1 33:14.42 java
1 root 20 0 33620 2920 1496 S 0.0 0.0 0:03.82 init
2 root 20 0 0 0 0 S 0.0 0.0 0:00.02 kthreadd
3 root 20 0 0 0 0 S 0.0 0.0 0:05.35 ksoftirqd/0
5 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/0:0H
6 root 20 0 0 0 0 S 0.0 0.0 0:06.94 kworker/u256:0
8 root 20 0 0 0 0 S 0.0 0.0 2:38.05 rcu_sched

The top command includes many of the metrics we checked earlier. It can be handy to run it to see if anything looks wildly different from the earlier commands, which would indicate that load is variable.

A downside to top is that it is harder to see patterns over time, which may be clearer in tools like vmstat and pidstat, which provide rolling output. Evidence of intermittent issues can also be lost if you don't pause the output quickly enough (Ctrl-S to pause, Ctrl-Q to continue), and the screen clears.
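One way around both problems is top's batch mode, which emits rolling text like vmstat and pidstat do, so a transient spike survives in a log file or scrollback. A minimal sketch (the snapshot count and interval are arbitrary choices of mine):

```shell
# -b: batch (plain text, no screen clearing); -n 3: three snapshots;
# -d 1: one second apart. Capture to a file for later inspection.
top -b -n 3 -d 1 > top.log 2>&1 || true
wc -l < top.log   # non-empty if top is installed
```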
Follow-on Analysis

There are many more commands and methodologies you can apply to drill deeper. See Brendan’s Linux Performance Tools tutorial from Velocity 2015, which works through over 40 commands, covering observability, benchmarking, tuning, static performance tuning, profiling, and tracing.

Tackling system reliability and performance problems at web scale is one of our passions. If you would like to join us in tackling these kinds of challenges we are hiring!
Written by
Brendan Gregg for Netflix Media Center

GNU social

Jan Bergmans's insight:

GNU social, true to the Unix-philosophy of small programs to do a small job, will be a federated social network that you can install on your own server

You can use GNU social today

In June 2013, we merged with the StatusNet project.

Visit the project's website for the latest news on GNU social.


“Our mission is to preserve, protect and promote the freedom to use, study, copy, modify, and redistribute computer software, and to defend the rights of Free Software users.”

The Free Software Foundation is the principal organizational sponsor of the GNU Operating System. Support GNU and the FSF by buying manuals and gear, joining the FSF as an associate member, or making a donation, either directly to the FSF or via Flattr.


The Linux Foundation

The Linux Foundation is a non-profit consortium dedicated to fostering the growth of Linux, and promoting standardization and technical collaboration of Open Source Linux

IBM SYSTEMS MAGAZINE "Disaster Recovery for AIX and Linux, from Storix, Inc."


Hackathon Winner Docracy Is A GitHub For Legal Documents

One of the winners at today’s TechCrunch Disrupt Hackathon is Docracy, an open source site where users can share and sign legal documents, similar to what GitHub provides for code. The site is the brainchild of mobile app developers Matt Hall and John Watkinson, who are the founders of app development startup Larva Labs.

Docracy is an online, open source hub for quality legal documents like contracts, NDAs, wills, trusts and more. Startups or individuals can compare their legal documents against these trusted documents on Docracy and see which terms differ.

Hall and Watkinson were recently signing an NDA with a client and weren't sure whether any of its terms should be flagged or were out of the ordinary, but they found that the issue didn't warrant spending money on a lawyer. Small businesses and individuals have to sign legal documents all the time but often don't have the resources to hire a lawyer to review them.

It’s a great idea and certainly one that many bootstrapped startups, freelancers or individuals can use in a pinch. While Docracy’s site isn’t up yet, we’re told it will be publicly released this week.

Enigmail to make encrypted email easy - Security.NL

Jan Bergmans's insight:

The makers of Enigmail, a popular extension for the Thunderbird email client that lets users send encrypted email, are working on a solution to make encrypted email via Thunderbird as easy as possible. The development is a collaboration with pretty Easy privacy.

Pretty Easy privacy (p=p) is a project that raised $51,000 via Indiegogo last year and aims to make email anonymous and encrypted. Enigmail and pretty Easy privacy will offer the p=p technology to Thunderbird users. Although Enigmail is the most popular solution for encrypting email, most people still send their email unencrypted, says Patrick Brunschwig, lead developer of Enigmail.

According to Brunschwig, encrypted email is essential for protecting privacy, and the collaboration with p=p is meant to change this. P=p can encrypt messages fully automatically, without users having to deal with matters such as key management. The plan is to integrate this technology into Enigmail as well. The first release of Enigmail with p=p is due in December of this year.


More for Less: How the Open Source Software Revolution Can Mitigate Unnecessary Expenditure

More for less

Whilst these capabilities have arrived, IT expenditure hasn’t grown. In fact, IT R&D departments are fast becoming a thing of the past as all this wonderful functionality - thanks to the global development community’s efforts - is now being leveraged by their commercial counterparts.

These commercial counterparts repackage and resell services, and offer support and development services to ensure that their customers feel comfortable adopting OSS. Managed Service Providers today provide the integration expertise businesses need to bring disparate software together as an enterprise system, and offer huge amounts of power at better rates than we have ever seen before.
The secret behind the closed source loop

With all of this new functionality, you would think Closed Source would be in trouble. But it isn’t and that is primarily down to a single important concept: brand familiarity.

Organisations pay huge amounts of money for nothing more than a name and the idea that a supplier is more intelligent and more reliable than the people who wrote the software in the first place.

Yes, that’s right, the brand that organisations are paying so much money for is effectively wrapping and selling you the same technology used by their OSS counterparts.

They’re clever, but deceptive.

The Apples, Microsofts and VMwares of the world are simply selling closed source software and platforms which either wrap around or offer equivalent functionality to open source counterparts and, at the same time, often reduce interoperability. This doesn't sound like a technical development to me.

Think about it: it is no coincidence that the internet revolution and open source revolutions really kicked off at roughly the same time. In other words, technical intelligence has improved thanks to global development communities, not as a result of “business computing”.

This is all very nice, but how can we sell OSS, and the idea of technical strategy mentioned at the start, to the board?

The chances are, the non-technical members of your board are attached to the brands they recognise and would resist the idea of breaking established patterns for deployment with their current closed source suppliers. This is one of the primary reasons OSS hasn’t completely taken over the closed source software industry in key sectors (OS, virtualisation and enterprise storage being examples).

However, when undertaking new strategic developments, the initial costs associated with developing proofs of concept and initial prototyping can be reduced to almost nothing by taking advantage of the develop-once, deploy-to-many paradigm that is now maturing within the DevOps space. By offering the ability to develop low-cost prototypes for strategic change and new developments, you can all but zero the risk to naysayers at board level who would sacrifice innovation for fiscal security.

By embracing the freedom of open source, strategies can be easily developed and assessed for viability. Once this strategic proof of concept is accepted these can be finalised and deployed easily - without extensive re-development - using closed source counterparts or by leveraging OSS on preferred corporate platforms.

The gut reaction of your board may be a reluctance to move away from big, branded, closed source software. But if your elevator pitch is how Open Source Software can often do the same for less, maybe this will get their attention.

OSS might not have taken hold of the whole market (yet), but it’s already offering us the ability to drive innovation without risk. In my experience, innovation without risk is generally an easy sell to any management team.

Strategic and technical changes require buy-in from all levels of management.
This may be a difficult sell where current business and financial agreements prohibit innovation.
The DevOps world is moving fast to standardise deployment across heterogeneous target platforms.
The net result of this is the ability to develop against different platforms to those that are used in the production environment(s) with mitigated risk.
All of this is the result of the open source development model, which has provided more innovation over the last 25 years than any other technological development.
Risk free, businesses can now trial innovative strategic development by leveraging OSS and joining the revolution that has progressed the technical landscape so much.

Linux Mint 17.2 "Rafaela" KDE released!

The team is proud to announce the release of Linux Mint 17.2 "Rafaela" KDE. Linux Mint 17.2 is a long-term support release which will be supported until 2019. It comes with updated software and brings refinements and many new features to make your desktop even more comfortable to use.

Ubuntu on phones | Ubuntu

With a radical new paradigm that makes it quicker and easier to find content and services than ever before, the beautifully designed Ubuntu phone is the opportunity the mobile industry has been waiting for.

FSF Blogs: Asking Obama to protect encryption, and why that's not enough

In addition to civil society organizations like the FSF, the letter was signed by some of the most important cryptologists in the world, including the inventors of many of the key technologies behind modern encryption.

Mycroft: Linux’s Own AI

Swapnil talks with Ryan Sipes, CTO of Mycroft AI, to learn more about the Mycroft project and why they chose to open source the Adapt parser.
Jan Bergmans's insight:
The future is artificially intelligent. We are already surrounded by devices that are continuously listening to every word that we speak. There is Siri, Google Now, Amazon Alexa, and Microsoft’s Cortana. The biggest problem with these AI “virtual assistants” is that users have no real control over them. They use closed source technologies to send every bit of information they collect from users back to their masters. Some industry leaders, such as Elon Musk (Tesla, SpaceX), are not huge fans of AI. To ensure that AI will not turn against humanity and start a war, they have created a non-profit organization called OpenAI. But, Linux users don’t have to worry about it. A very ambitious project called Mycroft is working on a friendly AI virtual assistant for Linux users. I spoke with Ryan Sipes, CTO of Mycroft AI to learn more about the product. The Humble Beginning When Ryan and Mycroft co-founder Joshua Montgomery, who owns a makerspace, were visiting a Kansas City makerspace called Hammerspace, they found someone working on an open source intelligent virtual assistant project called Iris. Although it was a really neat technology, it was very simple and basic. Ryan recalled that you had to say exactly the right phrase to trigger everything. The two were interested in the technology, but they didn’t like the way it had been built around a very rigid concept. ryan-sipes Ryan Sipes, CTO of Mycroft AI They figured that somewhere, someone was already doing something similar, so they hit the Internet and actually found many projects; some were dead and many others were approaching the problem in a way not suitable for the two entrepreneurs. They even tried Jasper, but despite being developers, they had hard time getting it to run. All they wanted to do was make an intelligent system for makerspace. Nothing fancy like Amazon Echo. Just a speaker hanging from the wall allowing users to do things through voice. 
People could ask, for example: “Where is the hammer?” and it would tell them; or you can tell it to turn the lights off in a particular room. That’s all they wanted. So, they resorted to building their own, and when they got their software ready, they realized that it was really slick. It could be used at home and office to do many things. Initially, they didn’t have any product in mind, but they decided to take it public and convert it into a product. Ryan and Josh are serial entrepreneurs, so funding the project themselves was not a problem; however, they chose to go the crowdsourcing way. “The main reason behind going to Kickstarter was market validation. We wanted to see whether there was any interest in such a product. We wanted to know if people were willing to invest money in it. And the response was overwhelming,” said Ryan. Additionally, they decided to make all of this work open source. They used open source software, including Ubuntu Snappy Core, and open hardware, such as Raspberry Pi 2 and Arduino. The public mandate was already there. There was a demand for the product. The Mycroft project raised more than $127,520 on Kickstarter and another $138,464 on Indiegogo. Once the project was fully funded, Mycroft set aside around half of the money to fulfil the Kickstarter hardware requirements and the rest of the money was used in finishing the development effort. Going Open Source Earlier this month, the developers released the Adapt intent parser as open source. When many people look at Mycroft, they think voice recognition is the important piece, but the brain of Mycroft is the Adapt intent. It takes natural language, analyzes the ultimate sentence, and then decides what action needs to be taken. 
That means when someone says "turn the lights off in the conference room," Adapt extracts the intent "turn off" and identifies the entity "conference room." It then reaches out to whatever device controls the lights in the conference room and tells it to turn them off. That's complex work, and the Mycroft developers just open sourced the biggest and most powerful piece of their software.

"The only way we can compete with companies like Amazon and Google is by being open source. I can't see how we could compete with them with only the resources we have to work on this. In house we have probably five people total, so there is no way we could compete with the full teams of those big companies. But the cool thing is, 20 minutes after the Adapt code was released, we had a pull request. We had our first contribution," said Ryan.

Going open source immediately started paying off, and something even more remarkable happened: just an hour after the release, core developers of the Jasper project had already cloned the code, forked the repo, and started working on it. So now more brilliant people are working on the same software to make it even better. Nowhere but in open source will you see "competitors" working together on shared technologies.

Ryan recalls an interesting conversation with business people who don't understand the open source model: "When we talk to business guys and they ask what's the point of going open source instead of proprietary, I explain it this way: I spent no money and my software improved within 20 minutes of release. Then those business guys get it."

Going open source goes beyond small patches from contributors; it makes a project richer. Ryan said that when he talked to his family and friends about it, they would say: make it do this, make it do that. And these were not things that Ryan or the other team members had thought of.
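The "turn the lights off in the conference room" example boils down to keyword-spotting over a known vocabulary. Here is a minimal, self-contained sketch of that idea in Python; the vocabularies, function names, and intent labels are illustrative assumptions, not Adapt's actual API.

```python
# A toy keyword-based intent parser in the spirit of Adapt (NOT its real API).
# An intent matches when all of its keywords appear in the utterance; an
# entity (here, a room) matches as a known phrase inside the utterance.

INTENT_KEYWORDS = {
    ("turn", "off"): "TurnLightsOff",
    ("turn", "on"): "TurnLightsOn",
}
ROOMS = ["conference room", "kitchen", "workshop"]

def parse(utterance):
    """Return (intent, room) extracted from a natural-language utterance."""
    words = utterance.lower().split()
    intent = next((name for kws, name in INTENT_KEYWORDS.items()
                   if all(k in words for k in kws)), None)
    text = " ".join(words)
    room = next((r for r in ROOMS if r in text), None)
    return intent, room

print(parse("Turn the lights off in the conference room"))
# → ('TurnLightsOff', 'conference room')
```

With the intent and entity in hand, a skill handler would map "TurnLightsOff" plus "conference room" onto a call to whatever controls the lights in that room. Real parsers add confidence scoring and word-order constraints on top of this basic matching.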
The open source development model allows other people with different ideas to do exciting things with the project. Ryan says they see the Mycroft software going beyond the hardware. It's also Linux's best chance at getting its own Siri, Cortana, or Alexa. Because Canonical and Mycroft are working together, Ubuntu phones, tablets, IoT devices, and even the desktop may one day use Mycroft as their AI virtual assistant. Beyond that, it could be used in games and robots. I see real potential in cars: you could use it for navigation, ask about weather and traffic, control your music, open and close the sunroof and windows, and much more. And, because it's an open source project, anyone can get involved. I wish I could tell my Linux desktop, "Mycroft, open the community page of Mycroft!"
No comment yet.
Scooped by Jan Bergmans!

Google says its quantum computer is 100 million times faster than PC

Controversial D-Wave system gets thumbs up
Jan Bergmans's insight:
Russell R. Roberts, Jr.'s curator insight, December 30, 2015 3:16 PM

Talk about speed! The controversial D-Wave system has overcome its initial shortcomings and is now considered the best way to get a practical quantum computer online. This marvelous computer may be marketable in a few years. Aloha, Russ.

Scooped by Jan Bergmans!

Mozilla Firefox Web Browser — Firefox Web browser — Private Browsing with Tracking Protection — Mozilla

Get more protection and the most privacy, only from Firefox. No other browser gives you so much control over your digital footprint.
No comment yet.
Scooped by Jan Bergmans!

Welcome - E14N

Jan Bergmans's insight:
Doing more with less.

E14N is a software company in Montreal creating activity stream servers for the social Web. Its products include an activity streams server, a spam filter for the social Web, Open Farm Game (a farm game), and GNU Social, formerly StatusNet.

People: Evan Prodromou

E14N Inc.
4690 rue Pontiac
Montreal, Quebec H2J 2T5

No comment yet.
Scooped by Jan Bergmans!

Learn SQL | Codecademy

Learn to manage data with SQL. You'll master complex commands to manipulate and query data stored in relational databases.
No comment yet.
Scooped by Jan Bergmans!

How to build a server room: Back to basics

Reg reader compiles handy checklist for SMEs
Jan Bergmans's insight:
Back to Basics

From past experience, the following still needs to be pointed out far too often:

1. A cloak room is not a server room, even if you put servers inside.

2. You cannot power 100A worth of equipment from a 16A wall socket. Not even if there are 2 of them.

3. You cannot cool the above by opening a window and putting two fans in front of the computers. A domestic AirCon unit won't do much good either. Also, don't put drippy things above sparky things.

4. Ground lines are not for decoration. They need to be used and tested regularly for safety.

5. DIY plugs cannot be wired any which way. Not even in countries that allow Line and Neutral to be swapped.

6. The circuit breakers at the end of your circuits are part of your installation.

7. You cannot protect a rack of equipment with a UPS from PC World. If you really need this, you're going to have to buy something which is very big, expensive and very very heavy. And the batteries are only good for 3 to 5 years.

8. Buildings have structural limits. You cannot put several tonnes of densely packed metal just anywhere. Know your point and rolling loads and then check the building.

9. Electrical fires are nasty. Chemical fires are worse. You need stuff to protect the installation.

10. If you want 24h service, you'll need 24h staffing. A guy that "does computers for us" won't do.

11. A 1 Gbps uplink cannot feed 48 non-blocking 1 Gbps ports.

12. Metered and managed PDUs will save your bacon one day. Buy them.

13. Label all the cables BEFORE installing them. (No, you cannot just "follow the cable" afterwards)

14. My favourite: Don't blow your entire equipment grant on computers. All the stuff above costs money.
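Point 11 on the checklist above is simple arithmetic. A quick sketch (with the checklist's own figures) shows the oversubscription ratio on a 48-port gigabit switch fed by a single 1 Gbps uplink:

```python
# Oversubscription on an access switch: 48 edge ports at 1 Gbps each,
# all funnelled through a single 1 Gbps uplink.
ports = 48
port_speed_gbps = 1
uplink_gbps = 1

edge_capacity = ports * port_speed_gbps  # 48 Gbps of potential edge demand
ratio = edge_capacity / uplink_gbps      # 48:1 oversubscription

print(f"{edge_capacity} Gbps of edge capacity over a {uplink_gbps} Gbps "
      f"uplink is {ratio:.0f}:1 oversubscription")
```

A 48:1 ratio means the uplink saturates as soon as a tiny fraction of ports transmit at line rate, which is why "non-blocking" claims need uplink capacity at least equal to the aggregate port capacity.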


No comment yet.
Scooped by Jan Bergmans!

How to Compile and Install Linux Kernel v4.2 Source On a Debian / Ubuntu Linux

No comment yet.
Scooped by Jan Bergmans!

The Linux Foundation

The Linux Foundation is a non-profit consortium dedicated to fostering the growth of Linux, and promoting standardization and technical collaboration of Open Source Linux
No comment yet.
Scooped by Jan Bergmans!

Time to Shine: Why Desktop Linux is Taking Over

Learn about Open Source software on the desktop with help from the team at LinuxIT, experts in Linux Support services.
No comment yet.
Scooped by Jan Bergmans!

OpenWRT 15.05 Preparing Improved Security & Better Networking

The first release candidate to OpenWRT 15.05, the "Chaos Calmer", is now available for testing...
No comment yet.