Linux A Future
Curated by Jan Bergmans
Scooped by Jan Bergmans!

Researchers use acoustic voxels to embed sound with data

Jan Bergmans's insight:
Columbia Engineering researchers, working with colleagues at Disney Research and MIT, have developed a new method to control sound waves, using a computational approach to inversely design acoustic filters that can fit within an arbitrary 3D shape while achieving target sound filtering properties. Led by Computer Science Professor Changxi Zheng, the team designed acoustic voxels, small, hollow, cube-shaped chambers through which sound enters and exits, as a modular system. Like Legos, the voxels can be connected to form an infinitely adjustable, complex structure. Because of their internal chambers, they can modify the acoustic filtering property of the structure -- changing their number and size or how they connect alters the acoustic result.
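As background only (a simplification, not the team's computational inverse-design method), the resonance of a single hollow chamber with a small neck can be estimated with the classical Helmholtz resonator formula; the function name and example dimensions below are illustrative:

```python
import math

def helmholtz_frequency(neck_area_m2, cavity_volume_m3, neck_length_m,
                        speed_of_sound=343.0):
    """Resonant frequency (Hz) of an idealized Helmholtz resonator:
    f = (c / 2*pi) * sqrt(A / (V * L))."""
    return (speed_of_sound / (2 * math.pi)) * math.sqrt(
        neck_area_m2 / (cavity_volume_m3 * neck_length_m)
    )

# A 1 cm^3 cavity with a 2 mm x 2 mm, 5 mm long neck resonates around 1.5 kHz:
f = helmholtz_frequency(neck_area_m2=4e-6, cavity_volume_m3=1e-6,
                        neck_length_m=5e-3)
```

This hints at why changing chamber size or connectivity alters the acoustic result: each geometric change shifts which frequencies the structure passes or damps.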

Bulgarian Government Embraces Open Source | Software | LinuxInsider

Bulgaria's Parliament recently passed legislation mandating open source software to bolster security, as well as to increase competition with commercially coded software. Amendments to the Electronic Governance Act require that all software written for the government be Free and Open Source Software-compliant. The new provisions reportedly took effect this week.

Kali Linux Downloads – Virtual Images

Kali Linux Downloads - VMware, VirtualBox, and ARM prebuilt Kali Linux images that we would like to share with the community.

re:publica TEN & EDFVR: Immersion Everywhere

Virtual Reality goggles at #rp15

The field of VR is creating a lot of excitement in terms of new hardware products, from cameras to VR goggles and full-on VR domes. The 360-degree viewing format is also gaining a foothold in content areas from numerous disciplines. Along with gaming, the potential of VR has been discovered by the music and fashion industries, science, healthcare, film and journalism, which are using the technologies in numerous ways (we think back to the great re:publica 2015 talk on the subject of "immersive journalism").

Andreas Gebhard, co-founder and CEO of re:publica, explains: "We feel that the current hype surrounding VR is justified, and that is why we are highlighting it at re:publica TEN and featuring the new possibilities this technology provides for so many sectors, including art and music, entertainment and journalism." To increase our impact and support synergies and dialogue, re:publica has established a new cooperation with Germany's First Professional Association for Virtual Reality (EDFVR). Arne Ludwig (executive board, EDFVR) welcomed the new cooperation: "The whole world has a stake in this new world of technology, and so much is yet unexplored. It's self-evident that EDFVR and re:publica TEN should cooperate as immersively as possible. Using the hashtag #VR HERE, we will be showing off VR, more VR and even more VR through demos, installations and experiences, and we invite all participants to engage in talks at the VR Lounge."

At re:publica TEN, VR will be represented throughout the conference. One attraction will be the DOMZELT, a dome with a 10-meter radius in the re:lax outdoor area, where you can experience various VR content in a group. Furthermore, our new experimentation space "labore:tory" will feature 3 days and 3 floors dedicated to the topic of VR. The labore:tory, in the Kühlhaus, will become a learning lab open to all participants, who can try out and experiment with various VR recording equipment and technologies on the 2nd floor. The 3rd floor will feature hands-on use of VR glasses and goggles, which will enable you to dive into new dimensions and experiences. Both floors will be in the capable and experienced hands of the EDFVR.

The first floor is dedicated to a different thematic focus each day: the Musicday (2 May) will look at various topics in numerous sessions, including binaural VR and 360 degree recording. We would like to thank Berlin's Senate Department for Economics, Technology and Research for supporting the topics of VR and music. Immersive Arts (3 May) highlights VR and digital art. Immersion means more than simply partaking in the content – we look to dive into the art directly and experience it in a whole new way. Sessions will look at new tools for storytelling through technology. Working practices, possibilities and new perspectives will be highlighted. Day 3 (4 May) will host #FASHIONTECH Berlin and will look at VR's integration into the fashion industry. Could it even become a fashion accessory?

Photo credit: re:publica/Gregor Fischer (CC BY 2.0)
Rescooped by Jan Bergmans from Semantic Gnosis Web!

Take on Endless Electronic Projects with the Tiniest Linux Computer Yet!

Jan Bergmans's insight:
DESCRIPTION: Mini computer boards are getting more popular by the day, but none have been quite as tiny or as affordable as VoCore. With this mini Linux machine, you can make a tiny router, invent a new device, build a motherboard, or even repurpose old speakers into smart wireless versions. Its small size gives you options: use it as a standalone device running OpenWrt or as an embedded component of a larger system. With some knowledge of electronics and the included Dock that extends the Ethernet and USB ports, the electronic world is your oyster.

Works on open-source hardware
Provides up to 20 GPIO lines
Runs OpenWrt Linux
Includes an on-board Wi-Fi adapter so you don’t need an external one
Easily connects to peripheral devices
Small size enables it to act as an embedded system
Extends Ethernet & USB interfaces with the Dock
Operates as a fully functional 2.4GHz Wi-Fi router
Acts as a general-purpose low-power COM for IoT applications
Includes full hardware design & full source code
Integrates an 802.11n MAC, baseband, radio, FEM & a 5-port 10/100Mbps Ethernet switch
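On OpenWrt boards such as the VoCore, GPIO lines have traditionally been driven through the kernel's legacy sysfs interface (newer kernels prefer the gpiod character device, and pin numbering is board-specific). A minimal sketch, with the pin number and base path as assumptions:

```python
import os

GPIO_BASE = "/sys/class/gpio"  # legacy sysfs GPIO interface (assumed present)

def set_gpio(pin, value, base=GPIO_BASE):
    """Export a GPIO pin if needed, configure it as an output, and drive it."""
    pin_dir = os.path.join(base, f"gpio{pin}")
    if not os.path.isdir(pin_dir):
        # Exporting makes the kernel create the gpioN directory.
        with open(os.path.join(base, "export"), "w") as f:
            f.write(str(pin))
    with open(os.path.join(pin_dir, "direction"), "w") as f:
        f.write("out")
    with open(os.path.join(pin_dir, "value"), "w") as f:
        f.write("1" if value else "0")

# e.g. set_gpio(0, True)  # drive GPIO 0 high (requires root on the device)
```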

What The Tech? LinkedIn Hackers

LinkedIn has a 'fakes' problem. Hackers are using fake profiles to make connections with people at work. It can lead to some bad things for their employers.

Whether you're looking for a job or just business contacts, LinkedIn is the social media network of choice. CEOs, hiring managers and human resources representatives use LinkedIn as a way to announce job openings and search for potential new employees.

In recent months, researchers at Dell's Counter Threat Unit found 25 fake LinkedIn profiles being used by hackers in the Middle East. The profiles look identical to other profiles on the network, including a headshot profile picture, resume, current job and responsibilities.

The owners of the fake accounts send out connection requests to other users. Often, when those requests are accepted, users will receive information about new job openings. When they click on a link, it can install malware on the user's computer. As hackers increasingly attack businesses and corporations, the risk is great. Hackers can install that malware on one computer, and in a matter of seconds it can infect the entire network.

"I think what most companies don't understand is the depth," said Jeremy Hopwood, a cyber security expert who works with companies to lock down networks and find harmful malware or viruses on company computers.

He said sometimes malware will sit inside the network for days, weeks and even months before being launched.

"Once they've been weaponized and detonated within the business, it spreads within seconds." Dell uncovered the fake LinkedIn profiles and identified what it terms 'leaders' and 'supporters.' By connecting with one another, the profiles give the impression of being legitimate, and other users are more likely to accept the connection request.

LinkedIn is now asking users to report any suspicious connections that might be fakes. The best practice is to only accept requests from people you know for a fact are real.

Mark Shuttleworth Details Ubuntu 15.10 Highlights [VIDEO]

Shuttleworth explains how the .deb packaging format remains in place, even in a world where Ubuntu is embracing Snappy.

Mycroft: Linux’s Own AI

Swapnil talks with Ryan Sipes, CTO of Mycroft AI, to learn more about the Mycroft project and why they chose to open source the Adapt parser.
Jan Bergmans's insight:
The future is artificially intelligent. We are already surrounded by devices that continuously listen to every word we speak: Siri, Google Now, Amazon Alexa, and Microsoft’s Cortana. The biggest problem with these AI “virtual assistants” is that users have no real control over them. They use closed source technologies to send every bit of information they collect from users back to their masters. Some industry leaders, such as Elon Musk (Tesla, SpaceX), are not huge fans of AI. To ensure that AI will not turn against humanity and start a war, they have created a non-profit organization called OpenAI. But Linux users don’t have to worry about that. A very ambitious project called Mycroft is working on a friendly AI virtual assistant for Linux users. I spoke with Ryan Sipes, CTO of Mycroft AI, to learn more about the product.

The Humble Beginning

When Ryan and Mycroft co-founder Joshua Montgomery, who owns a makerspace, were visiting a Kansas City makerspace called Hammerspace, they found someone working on an open source intelligent virtual assistant project called Iris. Although it was a really neat technology, it was very simple and basic. Ryan recalled that you had to say exactly the right phrase to trigger everything. The two were interested in the technology, but they didn’t like the way it had been built around a very rigid concept.

They figured that somewhere, someone was already doing something similar, so they hit the Internet and found many projects; some were dead, and many others were approaching the problem in a way not suitable for the two entrepreneurs. They even tried Jasper, but despite being developers, they had a hard time getting it to run. All they wanted to do was make an intelligent system for the makerspace. Nothing fancy like Amazon Echo. Just a speaker hanging from the wall allowing users to do things through voice.

People could ask, for example, “Where is the hammer?” and it would tell them; or you could tell it to turn the lights off in a particular room. That’s all they wanted. So they resorted to building their own, and when they got their software ready, they realized that it was really slick. It could be used at home and in the office to do many things. Initially, they didn’t have any product in mind, but they decided to take it public and turn it into a product. Ryan and Josh are serial entrepreneurs, so funding the project themselves was not a problem; however, they chose to go the crowdfunding route. “The main reason behind going to Kickstarter was market validation. We wanted to see whether there was any interest in such a product. We wanted to know if people were willing to invest money in it. And the response was overwhelming,” said Ryan. Additionally, they decided to make all of this work open source. They used open source software, including Ubuntu Snappy Core, and open hardware, such as Raspberry Pi 2 and Arduino. The public mandate was already there. There was a demand for the product. The Mycroft project raised more than $127,520 on Kickstarter and another $138,464 on Indiegogo. Once the project was fully funded, Mycroft set aside around half of the money to fulfil the Kickstarter hardware requirements, and the rest was used to finish the development effort.

Going Open Source

Earlier this month, the developers released the Adapt intent parser as open source. When many people look at Mycroft, they think voice recognition is the important piece, but the brain of Mycroft is the Adapt intent parser. It takes natural language, analyzes the sentence, and then decides what action needs to be taken.

That means when someone says “turn the lights off in the conference room,” Adapt grabs the intent “turn off” and identifies the entity as “conference room.” It then reaches out to whatever device is controlling the lights in the conference room and tells it to turn them off. That’s complex work. And the Mycroft developers just open sourced the biggest and most powerful piece of their software. “The only way we can compete with companies like Amazon and Google is by being open source. I can’t see how we could compete with them if we had only the resources we have to work on this. In house, we have probably five people total, so there is no way we could compete with the huge teams at those big companies. But the cool thing is, 20 minutes after the Adapt code was released, we had a pull request. We had our first contribution,” said Ryan. Going open source immediately started paying off. Something even more incredible happened. Just an hour after the release, core developers of the Jasper project had already downloaded the code, forked the repo, and started working on it. So now you have more brilliant people working on the same software to make it even better. Nowhere else but in open source will you see “competitors” working together on shared technologies. Ryan recalls an interesting conversation with business people who don’t understand the open source model: “When we talk to business guys and they ask what’s the point of going open source instead of proprietary, I explain it this way: I spent no money and my software improved within 20 minutes of release. Then those business guys get it.” Going open source goes beyond small patches from contributors. It makes a project richer. Ryan said that when he talked to his family and friends about it, they would say: make it do this, make it do that. And these were not things that either Ryan or other team members had thought of.

The open source development model allows other people with different ideas to do exciting things with the project. Ryan says that they see Mycroft software going beyond the hardware. It’s also Linux’s best chance at getting its own Siri, Cortana, or Alexa. Because Canonical and Mycroft are working together, there is a possibility that Ubuntu phones, tablets, IoT devices, and even the desktop may use Mycroft as their AI virtual assistant. It could also be used in games and robots. I see real potential in cars: you could use it for navigation, ask for weather or the traffic situation, control your music, open and close the sunroof and windows, and so much more. And, because it’s an open source project, anyone can get involved. I wish I were able to tell my Linux desktop, “Mycroft, open the community page of Mycroft!”
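The intent-and-entity extraction described above can be illustrated with a toy matcher. This is a simplification for illustration only and does not use the real Adapt API; the intent keywords and entities are invented:

```python
# Map intent keywords to actions, and list the entities we know about.
INTENTS = {"turn off": "lights.off", "turn on": "lights.on"}
ENTITIES = ["conference room", "kitchen", "hallway"]

def parse(utterance):
    """Return (action, entity) extracted from an utterance, or None.

    An intent matches when all of its keywords appear as whole words,
    so "turn the lights off ..." still triggers the "turn off" intent.
    """
    text = utterance.lower()
    words = set(text.split())
    action = next((a for kw, a in INTENTS.items()
                   if set(kw.split()) <= words), None)
    entity = next((e for e in ENTITIES if e in text), None)
    return (action, entity) if action and entity else None
```

With this sketch, `parse("Turn the lights off in the conference room")` yields the action `lights.off` and the entity `conference room`, mirroring the example in the text.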

Google says its quantum computer is 100 million times faster than PC

Controversial D-Wave system gets thumbs up
Russell R. Roberts, Jr.'s curator insight, December 30, 2015 3:16 PM

Talk about speed! The controversial D-Wave system has overcome its initial shortcomings and is now considered the best way to get a practical quantum computer online. This marvelous computer may be marketable in a few years. Aloha, Russ.


The Linux Foundation extends dedication to Linux security with new online skills training - SD Times

The Linux Foundation, the nonprofit advancing professional open source management for mass collaboration, has announced the availability of a new online learning course, Linux Security Fundamentals (LFS216). This self-paced course is an extension of The Linux Foundation’s dedication to helping secure the internet and other Linux and open source software and IT infrastructure.

“Open Source software underpins most of the Internet, facilitating trillions of dollars of business, but many projects lack rigorous security process,” said Nicko van Someren, chief technology officer at The Linux Foundation. “From day one, training and education play a key role in ensuring open source projects obtain a high state of security, quality and resiliency. Whether open or closed, software security must begin early on to minimize risk.”

Along with supporting the development of Linux and other mission-critical open source software, The Linux Foundation has taken steps to help ensure that the software it helps to produce is secure and users have all resources they need to be successful. Efforts include the Core Infrastructure Initiative’s Badges Program, in which open source projects like OpenStack are able to demonstrate security-conscious development. With Let’s Encrypt, The Linux Foundation and its partners have helped secure more than 5 million websites, and hope to eventually achieve a 100% secure web using HTTPS. Skills training that educates users on how to maximize system security is an essential complement to these initiatives.

The Linux Security Fundamentals class covers the basics that every IT professional working with Linux must know. It starts with an overview of computer security and touches on how security affects everyone in the chain of development, implementation, administration and end use.

Specific topics covered include:

Threats and Risk Assessment
Auditing and Detection
Application Security
Kernel Vulnerabilities
Local System Security
Network Security
Denial of Service (DoS)
Firewalling and Packet Filtering
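To give a flavor of the firewalling and packet-filtering topic in the list above, here is a toy first-match rule evaluator in the spirit of iptables-style chains. The rules, fields, and verdicts are invented for illustration:

```python
# Toy first-match packet filter: each rule matches on optional fields and
# carries a verdict; the first matching rule wins, else the default policy.
RULES = [
    {"proto": "tcp", "dport": 22, "verdict": "ACCEPT"},   # allow SSH
    {"proto": "tcp", "dport": 23, "verdict": "DROP"},     # block telnet
    {"proto": "icmp", "verdict": "ACCEPT"},               # allow ping
]
DEFAULT_POLICY = "DROP"

def filter_packet(packet, rules=RULES, default=DEFAULT_POLICY):
    """Return the verdict for a packet dict like {'proto': 'tcp', 'dport': 22}."""
    for rule in rules:
        # A rule matches when every non-verdict field equals the packet's field.
        if all(packet.get(k) == v for k, v in rule.items() if k != "verdict"):
            return rule["verdict"]
    return default
```

Real firewalls evaluate far richer criteria (interfaces, connection state, source ranges), but the first-match-wins logic is the core idea the course topic covers.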
LFS216 is intended for those involved with security-related tasks at all levels. The hands-on class uses virtual appliances to demonstrate “what happens when” rather than relying on typing exercises to configure complex servers. After completing this course, students will be able to assess current security needs, evaluate current security readiness and implement security options as required. This course is the second security offering from The Linux Foundation, the first being the instructor-led Linux Security (LFS416) course, which has been offered since 2013.

“We recognize that security is a concern for any IT organization, which is why The Linux Foundation hosts initiatives such as the Core Infrastructure Initiative and Let’s Encrypt, which help make it easier to protect sensitive data and systems,” said Linux Foundation Training General Manager Clyde Seepersad. “These high-level efforts can only do so much though, so making it easier to train staff at all levels in security best practices is essential for ensuring all systems remain stable and secure.”

LFS216 is now available for enrollment for $199. In celebration of the 25th anniversary of Linux, through August 28, individuals may purchase a bundle including the new Linux Security Fundamentals course along with LFS201 – Essentials of System Administration, LFS211 – Linux Networking and Administration, and LFS265 – Software Defined Networking Fundamentals for only $250, a savings of 75%. This bundle will provide aspiring Linux system administrators with all the knowledge they need to start in the field, and prepare them for a Linux Foundation Certified Sysadmin exam.

22 open source tools for creatives

Blender: 3D modeling, animation, video editing
Inkscape: Vector graphics
GIMP: Raster image editing
Krita: Illustration
Audacity: Audio editing
VLC: Video player
Scribus: Desktop publishing
calibre: Digital publishing
Sigil: Digital publishing
afterwriting: Screenwriting
Trelby: Screenwriting
MyPaint: Illustration
Kdenlive: Video editing
OpenShot: Video editing
Shotcut: Video editing
Natron: Compositing and post-processing
Ardour: Sound mixing and recording
Qtractor: Sound mixing and recording
Rosegarden: Music scoring
MuseScore: Music scoring
Hydrogen: Drum machine
MeshLab: Modeling clean-up for 3D printing

North American Cities Are Slow To Adopt Open Source Software - Contributed Content on Top Tech News

Cities that want to make the move to open source should take the following steps:

1. Look for upcoming end-of-life or expiry of existing proprietary licenses as an opportunity to migrate away from them to something less expensive.

2. Look at the subscription model of some critical open source software as a way to move necessary purchases to an operating expense budget as opposed to a capital expenditure budget and eliminate large budget outlays for new or renewed proprietary software.

3. Prepare a reasonable transition plan that will accommodate any training and adjustment of staff to new applications.

4. Ensure when budgeting that the total cost of ownership is considered over the lifespan of the project and not just the upfront initial costs.

5. Use software that will allow IT to run both Windows and open source software side by side during the transition period.

6. Find the political willpower to get it done. This will require action by elected officials, but it may need leadership from IT to show them what can be done.
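The total-cost-of-ownership comparison from step 4 can be sketched numerically. All figures below are hypothetical:

```python
def total_cost_of_ownership(upfront, annual_support, migration, years):
    """Sum one-time costs (licenses, migration) and recurring support
    costs over the project's lifespan."""
    return upfront + migration + annual_support * years

# Hypothetical 5-year comparison: proprietary licensing vs. open source.
proprietary = total_cost_of_ownership(upfront=200_000, annual_support=40_000,
                                      migration=0, years=5)
open_source = total_cost_of_ownership(upfront=0, annual_support=30_000,
                                      migration=80_000, years=5)
# proprietary -> 400000, open_source -> 230000: the open option can win
# over the lifespan even with a substantial one-time migration cost.
```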

The move to open source is inevitable as open source communities of developers continue to work on thousands of applications and as more software development companies invest in an open source model to allow for greater flexibility and lower end-user prices than existing proprietary competitors. Europe has more than a decade's head start on North American cities. The quality of available open source software has improved so much in that decade that the transition can be far easier for cities starting now than it was for Munich when they got the ball rolling in Europe.

Kevin Gallagher is CEO of Inuvika Inc., a Toronto-based open source company that delivers application virtualization software.


openSUSE Tumbleweed Getting Linux Kernel 4.6 Soon, GCC 6 Migration in Progress

First of all, users are being informed that the first Alpha release of the upcoming openSUSE Leap 42.2 operating system is now available for download and testing. However, the development cycle for openSUSE Leap 42.2 has just started, and the final release is expected to land in the first week of November 2016.

Secondly, openSUSE Tumbleweed users should be aware of the fact that the latest KDE Applications 16.04.1 software suite for the KDE Plasma 5.6 desktop environment has landed on May 29, along with many other software updates, such as GTK+ 3.20.6, libpng16 1.6.22, and Wine 1.9.11.

"GNOME’s GTK3 updated from 3.20.4 to 3.20.6 in the snapshot and libvirt has updated subpackages mostly for drivers as well as some for client, storage and daemon-config-network. Yast2-dns-server is available for update with version 3.1.21 in the Tumbleweed repositories," said Douglas DeMaio in today's announcement.

And now for the good news, as according to Douglas DeMaio, the Linux 4.6 kernel should land by the end of the week in the main software repositories for openSUSE Tumbleweed, along with the Perl 5.24 packages. In the meantime, the openSUSE developers are concentrating all of their efforts on the GCC 6 migration.


What Does It Mean To Be Bilingual?

World and language

Language creates our first connection to the world. The newborn child that takes a deep breath and cries out is both expressing itself and letting the world know it’s there. During infancy, grammar and vocabulary emerge (in all cultures, if you believe Chomsky’s idea of Universal Grammar) and influence how you engage with the world (if you subscribe to a more Whorfian view that language affects perception).

What about people who possess two possible linguistic systems to express an idea or a feeling? For a long time bilingualism was considered negative: The overwhelming opinion was that such an upbringing could cause confusion, especially in small children. Then in 1962, a study from Peal and Lambert that looked at the relationship between intelligence and language fundamentally altered the outlook. More recent studies have even claimed that bilingual people have a stronger “meta-linguistic awareness,” which applies to problem solving in areas outside of language, such as mathematics.

Although we can quantify some of the cognitive benefits of bilingualism, there are still many questions about how the bilingual brain works. Does it “choose” one of the language paths instead of the other? Is this influenced by ease, or context, or which synapses have been most strengthened over time? (This opens up a whole new can of worms – the idea of both languages being “equally strong” is also a red herring.) Researchers such as Lera Boroditsky have described differences between mono- and bilinguals in perceptions of color, and representations of time.

Brot, baguette and cognitive reference systems

The idea of having different linguistic systems can be illustrated by the difference between the German Brot and the French baguette – both essentially referring to bread. On the one hand, you’ve got that warm, golden brown, crunchy baguette, which you might dunk in a coffee or enjoy with a five-course cheese platter. On the other side is dark Brot, pure or with grains, moist and compact, healthy, delicious and filling. The words do not live in the same imaginary worlds; they conjure different memories, emotions and cultural references. They belong to different cognitive reference systems, and a bilingual person who wishes to speak of bread has a variety of means available to them.

A comparison with synesthesia illuminates the concept further. Those affected by synesthesia experience a blending of two senses, such as seeing and hearing. A synesthete might literally see music in the form of different colors, and therefore has access to two senses that help them describe the music. As a consequence, their description may appear richer, more metaphorical or figurative. Many poems, as well as expressions in everyday use, draw on synesthetic principles – that’s why we speak of warm or cold colors. The more connections in the brain, the more conceptual possibilities are awoken. This so-called cognitive flexibility is associated with creativity and seems particularly pronounced among bilingual people.

Happy accidents

Weird and wonderful cross-linguistic inventions can occur when you juggle more than one language on a daily basis. If a word slips your mind, or indeed there is no other way to express something, you can grab for a solution from another language… I remember having coffee with a German friend and making a particularly excellent (okay, awful) pun. She looked up at me with a grimace and asked me “if I’d had a clown for breakfast.” She speaks both English and German, and that creative phrase is how she needed to express herself in that moment.

Carpe diem

Only 13% of all UN countries are officially monolingual. If you grew up in one of them (United States, Australia, England, here’s looking at you), don’t despair. It’s not too late to take the plunge. Learning a new language is like exercise for your brain: it helps stimulate and increase brain connections.

Some claim that language learning has influenced their life and personality, that they are more open, creative, confident and tolerant in the new language. It’s certainly true that languages change people – it’s not uncommon to see a different side of them when they’re speaking another tongue. So seize the day – free the clown – and start learning that language.


Firefox Developer Edition

Built for those who build the Web. The only browser made for developers like you.

Open source geeks in a world of silos

You pretty famously kicked Google out of your life for a while, but let parts of it back in. Which side of the silo argument are you taking?

Oh, heavens. This is something I struggle with every damned day. In a nutshell: Having locked silos is a very, very bad thing. But it's hard as hell to avoid. For example: I'm a pretty die-hard Linux user, but I'm also an avid gamer. That means I tend to either have the Google Play Store (to install Android games) or Steam (for desktop Linux games) running on most of my systems. But I feel really, really dirty about it.
What arguments do you expect from your opponent, and why are they terrible?

Honestly, I have no clue what my opponent is going to say! This particular session has me and my co-presenter going mano a mano on the topic, which I expect to be rather challenging as he is an incredibly smart guy. Right now, I'm just hoping I can hold my own on why silos are so dangerous. Personal data access, personal data ownership, personal data security, longevity of software and so many other reasons are on my side, so here's hoping!
Can open source software ever hope to win against the convenience of shiny proprietary silos?

Yes. Maybe? Gosh. I hope so. Wait. What does "win" mean?
"Win" means gain mass adoption and the adulation of millennials and grandfathers alike.

Oh, lord. Millennials and Grandfathers, eh? Honestly I think mass adoption of (free and) open source alternatives to the closed, locked down application (and content) store silos will happen when the open options are, quite simply, better than their closed cousins in most ways that matter to people.

Approachability, ease of use, selection of software, promotion by the software publishers people trust... When a FOSS alternative to, say, the Google Play Store can manage to check all of those boxes, I have no doubt that mass adoption will follow.

The real question is, who will do it? Canonical tried with the Ubuntu Software Center—which, speaking as someone who sold software through it, was never quite ready for prime time. There have been a few other noteworthy attempts (such as Click'N'Run), but none ever worked well enough to capture significant market share.

In my opinion, the current best bet would be GNOME Software. It's not all the way there yet, but it shows promise.

I think an even bigger problem than "app store" and content silos is the prevalence of data silos—closed, online systems that store huge quantities of your data. Email. Documents. Pictures. Passwords. If all of these things are online and in closed silos, you really don't have any control over your own data.

And that scares the crap out of me.
You've been involved in open source communities for a long time, but you were recently elected to the openSUSE Board. What have you learned in the last month that surprised you?

The biggest surprise, to me, is what mean, terrible jerks my fellow openSUSE board members are. They all got together and conspired against me—they scheduled our regular board meetings for five-freaking-a.m. in the morning. Five in the morning! They gave me lame excuses like how they "live in Europe" and it was "the only time that worked for everyone." Pssht.

I am confident they are forcing me to wake up at this ungodly hour simply because they have hearts of pure ice. (Other than that, they're nice guys.)

No other big surprises yet. The openSUSE project runs itself in such an open way. I've been able to observe how it works from the outside for years. Now I'm just... less on the outside.
You gave a talk at SCALE 14x called Linux sucks, but you've published a book called Linux is Badass. Why are you flip-flopping?

Ha! Linux Sucks is, itself, the ultimate flip-flop. The first half is why it sucks. The second half is why it absolutely, without the slightest doubt, does not. I like to play devil's advocate with myself. Also, it makes for a fun event. My book Linux is Badass, on the other hand, is sort of a love poem to Linux in the form of essays. And actual poems. And a choose your own adventure story. With swear words. (It's a really weird book.)

I typically give a yearly Linux sucks at LinuxFest Northwest. (Except for last year, when I gave the Windows is awesome presentation to a packed audience at a Linux conference. That still boggles my mind.) But this year, I decided to do something a bit more... goofy. I'm calling it simply Linux is weird. It's basically a ridiculous journey through all the weirdest and most insane things about Linux. It's going to be nuts.
What LinuxFest Northwest talks are you most interested in?

It's typically hard for me to get a chance to see more than one or two presentations at an event like this. At LinuxFest Northwest I think I'm presenting three this year (Linux is weird, the one about silos that we talked about, and a third that is a Q&A with me and the openSUSE board director). When I'm not doing those, I'll probably be spending time at the openSUSE lounge (We don't have a traditional "booth" this year. We went for a full-on lounge.) giving out chameleon plushies and chatting with folks.

If I get a chance, I'd love to make it to John Sullivan's (director of the FSF) session comparing Free Software to veganism. That sounds like fun. And there's one on openQA (an automated testing platform) that is being co-presented by people from both SUSE and Red Hat. I love it when the big Linux companies come together in peace and harmony—plus, both of the presenters are friends. So if I miss that one, I'll probably never hear the end of it. And there's at least three sessions in the legal and licensing track that sound damned interesting. We'll see if I manage to make it to more than one of these.
Jan Bergmans's insight:
No comment yet.
Scooped by Jan Bergmans!

How to build a kernel module with DKMS on Linux - Xmodulo

How to build a kernel module with DKMS on Linux - Xmodulo | Linux A Future |
dkms add ixgbe/4.3.15

Build the specified module against the currently running kernel:

dkms build ixgbe/4.3.15
No comment yet.
Scooped by Jan Bergmans!

How to Detect Ransomware with FileAudit - Enterprise Network Security Blog from ISDecisions

How to Detect Ransomware with FileAudit - Enterprise Network Security Blog from ISDecisions | Linux A Future |
Detecting massive file encryption on a file server with FileAudit's mass access alerts is one of several measures to protect against ransomware attacks.
No comment yet.
Scooped by Jan Bergmans!


lfit/itpol | Linux A Future |
itpol - Useful IT policies
Jan Bergmans's insight:
Linux workstation security checklist
No comment yet.
Scooped by Jan Bergmans!

KDE Plasma 5.3 Released - Install In Ubuntu/Linux Mint, Fedora And OpenSUSE

KDE Plasma 5.3 Released - Install In Ubuntu/Linux Mint, Fedora And OpenSUSE | Linux A Future |
KDE is a well-known desktop environment for Unix-like systems, designed for users who want a polished desktop environment for their machines.
No comment yet.
Scooped by Jan Bergmans!

Rejoice, Penguinistas, Linux 4.4 is upon us

Rejoice, Penguinistas, Linux 4.4 is upon us | Linux A Future |
Emperor Penguin Linus Torvalds announced the release on Sunday evening, US time.

What's new this time around? Support for GPUs seems to be the headline item, with plenty of new drivers and hooks for AMD kit. Perhaps most notable is the adoption of the Virgil 3D project, which makes it possible to parcel up virtual GPUs. With virtual Linux desktops now on offer from Citrix and VMware, those who want to deliver virtual desktops with workstation-esque graphics capabilities have their on-ramp to Penguin heaven.

Raspberry Pi owners also have better graphics to look forward to, thanks to a new Pi KMS driver that will be updated with acceleration code in future releases.

There's also better 64-bit ARM support and fixes for memory leaks on Intel's Skylake CPUs.

Torvalds also says the new release caught a recent problem, by “unbreaking the x86-32 'sysenter' ABI, when somebody (*cough*android-x86*cough*) misused it by not using the vdso and instead using the instruction directly.”

It will, of course, be months before the new kernel pops up in a majority of production Linux rigs. But it's out there for those who want it. And Torvalds is of course letting world+dog know he's about to start work on version 4.5. ®
No comment yet.
Scooped by Jan Bergmans!

Linux Performance Analysis in 60,000 Milliseconds

Linux Performance Analysis in 60,000 Milliseconds | Linux A Future |
First 60 Seconds: Summary

In 60 seconds you can get a high level idea of system resource usage and running processes by running the following ten commands. Look for errors and saturation metrics, as they are both easy to interpret, and then resource utilization. Saturation is where a resource has more load than it can handle, and can be exposed either as the length of a request queue, or time spent waiting.

uptime
dmesg | tail
vmstat 1
mpstat -P ALL 1
pidstat 1
iostat -xz 1
free -m
sar -n DEV 1
sar -n TCP,ETCP 1
top

Some of these commands require the sysstat package to be installed. The metrics these commands expose will help you complete some of the USE Method: a methodology for locating performance bottlenecks. This involves checking utilization, saturation, and error metrics for all resources (CPUs, memory, disks, etc.). Also pay attention to when you have checked and exonerated a resource, as by process of elimination this narrows the targets to study, and directs any follow-on investigation.

The following sections summarize these commands, with examples from a production system. For more information about these tools, see their man pages.
1. uptime

$ uptime
23:51:26 up 21:31, 1 user, load average: 30.02, 26.43, 19.02

This is a quick way to view the load averages, which indicate the number of tasks (processes) wanting to run. On Linux systems, these numbers include processes wanting to run on CPU, as well as processes blocked in uninterruptible I/O (usually disk I/O). This gives a high level idea of resource load (or demand), but can’t be properly understood without other tools. Worth a quick look only.

The three numbers are exponentially damped moving sum averages with a 1 minute, 5 minute, and 15 minute constant. The three numbers give us some idea of how load is changing over time. For example, if you’ve been asked to check a problem server, and the 1 minute value is much lower than the 15 minute value, then you might have logged in too late and missed the issue.

In the example above, the load averages show a recent increase, hitting 30 for the 1 minute value, compared to 19 for the 15 minute value. That the numbers are this large means a lot of something: probably CPU demand; vmstat or mpstat will confirm, which are commands 3 and 4 in this sequence.
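That trend check can be written down. The following is a small sketch of my own (not from the article): parse the load averages out of an uptime line and compare the 1-minute value against the 15-minute value; the 1.5x ratio used to call a trend "rising" or "falling" is an illustrative assumption.

```python
import re

def parse_load_averages(uptime_line):
    """Return the (1min, 5min, 15min) load averages as floats."""
    match = re.search(r"load average:\s*([\d.]+),\s*([\d.]+),\s*([\d.]+)",
                      uptime_line)
    return tuple(float(x) for x in match.groups())

def load_trend(one_min, fifteen_min):
    """Crude trend check: is recent load well above or below the 15-min average?"""
    if one_min > fifteen_min * 1.5:
        return "rising"   # the problem may be happening right now
    if one_min * 1.5 < fifteen_min:
        return "falling"  # you may have logged in after the issue passed
    return "steady"

line = "23:51:26 up 21:31,  1 user,  load average: 30.02, 26.43, 19.02"
one, five, fifteen = parse_load_averages(line)
print(load_trend(one, fifteen))  # the example above: "rising"
```

Running this on the example line flags the recent increase (30.02 vs 19.02) that the text describes.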
2. dmesg | tail

$ dmesg | tail
[1880957.563150] perl invoked oom-killer: gfp_mask=0x280da, order=0, oom_score_adj=0
[1880957.563400] Out of memory: Kill process 18694 (perl) score 246 or sacrifice child
[1880957.563408] Killed process 18694 (perl) total-vm:1972392kB, anon-rss:1953348kB, file-rss:0kB
[2320864.954447] TCP: Possible SYN flooding on port 7001. Dropping request. Check SNMP counters.

This views the last 10 system messages, if there are any. Look for errors that can cause performance issues. The example above includes the oom-killer, and TCP dropping a request.

Don’t miss this step! dmesg is always worth checking.
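When checking many hosts, eyeballing dmesg gets tedious; here is a hedged helper of my own that flags lines matching known trouble signatures. The pattern list is an illustrative assumption, not an exhaustive set.

```python
# Patterns are a guessed, non-exhaustive sample of kernel trouble messages.
ERROR_PATTERNS = ("oom-killer", "Out of memory", "SYN flooding", "I/O error")

def flag_kernel_messages(dmesg_lines):
    """Return the dmesg lines matching any known trouble pattern."""
    return [line for line in dmesg_lines
            if any(pattern in line for pattern in ERROR_PATTERNS)]

sample = [
    "[1880957.563150] perl invoked oom-killer: gfp_mask=0x280da, order=0",
    "[1880958.000000] usb 1-1: new high-speed USB device",
    "[2320864.954447] TCP: Possible SYN flooding on port 7001. Dropping request.",
]
print(len(flag_kernel_messages(sample)))  # 2: the oom-killer and SYN flood lines
```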
3. vmstat 1

$ vmstat 1
procs ---------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
34 0 0 200889792 73708 591828 0 0 0 5 6 10 96 1 3 0 0
32 0 0 200889920 73708 591860 0 0 0 592 13284 4282 98 1 1 0 0
32 0 0 200890112 73708 591860 0 0 0 0 9501 2154 99 1 0 0 0
32 0 0 200889568 73712 591856 0 0 0 48 11900 2459 99 0 0 0 0
32 0 0 200890208 73712 591860 0 0 0 0 15898 4840 98 1 1 0 0

Short for virtual memory stat, vmstat(8) is a commonly available tool (first created for BSD decades ago). It prints a summary of key server statistics on each line.

vmstat was run with an argument of 1, to print one second summaries. The first line of output (in this version of vmstat) has some columns that show the average since boot, instead of the previous second. For now, skip the first line, unless you want to learn and remember which column is which.

Columns to check:

r: Number of processes running on CPU and waiting for a turn. This provides a better signal than load averages for determining CPU saturation, as it does not include I/O. To interpret: an “r” value greater than the CPU count is saturation.
free: Free memory in kilobytes. If there are too many digits to count, you have enough free memory. The “free -m” command, included as command 7, better explains the state of free memory.
si, so: Swap-ins and swap-outs. If these are non-zero, you’re out of memory.
us, sy, id, wa, st: These are breakdowns of CPU time, on average across all CPUs. They are user time, system time (kernel), idle, wait I/O, and stolen time (by other guests, or with Xen, the guest's own isolated driver domain).

The CPU time breakdowns will confirm if the CPUs are busy, by adding user + system time. A constant degree of wait I/O points to a disk bottleneck; this is where the CPUs are idle, because tasks are blocked waiting for pending disk I/O. You can treat wait I/O as another form of CPU idle, one that gives a clue as to why they are idle.

System time is necessary for I/O processing. A high system time average, over 20%, can be interesting to explore further: perhaps the kernel is processing the I/O inefficiently.

In the above example, CPU time is almost entirely in user-level, pointing to application level usage instead. The CPUs are also well over 90% utilized on average. This isn’t necessarily a problem; check for the degree of saturation using the “r” column.
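The article's rules of thumb for these columns can be made explicit. This is my own encoding of them, nothing more:

```python
def cpu_saturated(runnable, cpu_count):
    """Per the rule above: an "r" value greater than the CPU count is saturation."""
    return runnable > cpu_count

def cpu_busy_percent(user, system):
    """CPU busy time is user + system time."""
    return user + system

# First data line from the example: r=34 on a 32-CPU system, us=96, sy=1.
print(cpu_saturated(34, 32), cpu_busy_percent(96, 1))  # True 97
```

For the example output, this confirms both points made above: the CPUs are busy (97%) and mildly saturated (34 runnable tasks on 32 CPUs).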
4. mpstat -P ALL 1

$ mpstat -P ALL 1
Linux 3.13.0-49-generic (titanclusters-xxxxx) 07/14/2015 _x86_64_ (32 CPU)
07:38:49 PM CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle
07:38:50 PM all 98.47 0.00 0.75 0.00 0.00 0.00 0.00 0.00 0.00 0.78
07:38:50 PM 0 96.04 0.00 2.97 0.00 0.00 0.00 0.00 0.00 0.00 0.99
07:38:50 PM 1 97.00 0.00 1.00 0.00 0.00 0.00 0.00 0.00 0.00 2.00
07:38:50 PM 2 98.00 0.00 1.00 0.00 0.00 0.00 0.00 0.00 0.00 1.00
07:38:50 PM 3 96.97 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 3.03

This command prints CPU time breakdowns per CPU, which can be used to check for an imbalance. A single hot CPU can be evidence of a single-threaded application.
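The single-hot-CPU pattern can be spotted programmatically. The heuristic below is my own (not from the article), and the 90%/20% thresholds are illustrative assumptions:

```python
def looks_single_threaded(per_cpu_busy, hot=90.0, cold=20.0):
    """Heuristic: exactly one CPU is busy (>hot%) while the rest are mostly idle."""
    hot_cpus = [busy for busy in per_cpu_busy if busy > hot]
    others = [busy for busy in per_cpu_busy if busy <= hot]
    return len(hot_cpus) == 1 and all(busy < cold for busy in others)

print(looks_single_threaded([99.0, 3.0, 2.0, 1.0]))     # True: one hot CPU
print(looks_single_threaded([98.5, 99.0, 98.0, 97.0]))  # False: balanced load
```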
5. pidstat 1

$ pidstat 1
Linux 3.13.0-49-generic (titanclusters-xxxxx) 07/14/2015 _x86_64_ (32 CPU)
07:41:02 PM UID PID %usr %system %guest %CPU CPU Command
07:41:03 PM 0 9 0.00 0.94 0.00 0.94 1 rcuos/0
07:41:03 PM 0 4214 5.66 5.66 0.00 11.32 15 mesos-slave
07:41:03 PM 0 4354 0.94 0.94 0.00 1.89 8 java
07:41:03 PM 0 6521 1596.23 1.89 0.00 1598.11 27 java
07:41:03 PM 0 6564 1571.70 7.55 0.00 1579.25 28 java
07:41:03 PM 60004 60154 0.94 4.72 0.00 5.66 9 pidstat
07:41:03 PM UID PID %usr %system %guest %CPU CPU Command
07:41:04 PM 0 4214 6.00 2.00 0.00 8.00 15 mesos-slave
07:41:04 PM 0 6521 1590.00 1.00 0.00 1591.00 27 java
07:41:04 PM 0 6564 1573.00 10.00 0.00 1583.00 28 java
07:41:04 PM 108 6718 1.00 0.00 0.00 1.00 0 snmp-pass
07:41:04 PM 60004 60154 1.00 4.00 0.00 5.00 9 pidstat

Pidstat is a little like top’s per-process summary, but prints a rolling summary instead of clearing the screen. This can be useful for watching patterns over time, and also recording what you saw (copy-n-paste) into a record of your investigation.

The above example identifies two java processes as responsible for consuming CPU. The %CPU column is the total across all CPUs; 1591% shows that that java process is consuming almost 16 CPUs.
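Because %CPU is summed across all CPUs, values far above 100 are normal for multithreaded processes. A tiny converter (my own helper) makes the arithmetic explicit:

```python
def cpus_consumed(percent_cpu):
    """Convert a pidstat %CPU value to the number of CPUs' worth of time."""
    return percent_cpu / 100.0

print(round(cpus_consumed(1591.00)))  # the java process above: ~16 CPUs
```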
6. iostat -xz 1

$ iostat -xz 1
Linux 3.13.0-49-generic (titanclusters-xxxxx) 07/14/2015 _x86_64_ (32 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
73.96 0.00 3.73 0.03 0.06 22.21
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
xvda 0.00 0.23 0.21 0.18 4.52 2.08 34.37 0.00 9.98 13.80 5.42 2.44 0.09
xvdb 0.01 0.00 1.02 8.94 127.97 598.53 145.79 0.00 0.43 1.78 0.28 0.25 0.25
xvdc 0.01 0.00 1.02 8.86 127.79 595.94 146.50 0.00 0.45 1.82 0.30 0.27 0.26
dm-0 0.00 0.00 0.69 2.32 10.47 31.69 28.01 0.01 3.23 0.71 3.98 0.13 0.04
dm-1 0.00 0.00 0.00 0.94 0.01 3.78 8.00 0.33 345.84 0.04 346.81 0.01 0.00
dm-2 0.00 0.00 0.09 0.07 1.35 0.36 22.50 0.00 2.55 0.23 5.62 1.78 0.03

This is a great tool for understanding block devices (disks), both the workload applied and the resulting performance. Look for:

r/s, w/s, rkB/s, wkB/s: These are the delivered reads, writes, read Kbytes, and write Kbytes per second to the device. Use these for workload characterization. A performance problem may simply be due to an excessive load applied.
await: The average time for the I/O in milliseconds. This is the time that the application suffers, as it includes both time queued and time being serviced. Larger than expected average times can be an indicator of device saturation, or device problems.
avgqu-sz: The average number of requests issued to the device. Values greater than 1 can be evidence of saturation (although devices can typically operate on requests in parallel, especially virtual devices which front multiple back-end disks.)
%util: Device utilization. This is really a busy percent, showing the time each second that the device was doing work. Values greater than 60% typically lead to poor performance (which should be seen in await), although it depends on the device. Values close to 100% usually indicate saturation.

If the storage device is a logical disk device fronting many back-end disks, then 100% utilization may just mean that some I/O is being processed 100% of the time, however, the back-end disks may be far from saturated, and may be able to handle much more work.

Bear in mind that poor performing disk I/O isn’t necessarily an application issue. Many techniques are typically used to perform I/O asynchronously, so that the application doesn’t block and suffer the latency directly (e.g., read-ahead for reads, and buffering for writes).
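The iostat guidance above can be encoded as a per-device checklist. This is my own encoding; the 60% utilization and queue-depth thresholds follow the article's rules of thumb, and the 10 ms expected await is an illustrative assumption that depends heavily on the device:

```python
def flag_device(await_ms, avgqu_sz, util_pct, expected_await_ms=10.0):
    """Return a list of warnings for one iostat device line."""
    flags = []
    if await_ms > expected_await_ms:
        flags.append("slow I/O")   # queued + service time above expectation
    if avgqu_sz > 1.0:
        flags.append("queueing")   # possible saturation
    if util_pct > 60.0:
        flags.append("busy")       # often correlates with poor latency
    return flags

# dm-1 from the example: await=345.84, avgqu-sz=0.33, %util=0.00
print(flag_device(345.84, 0.33, 0.00))  # ['slow I/O']
```

On the sample output this singles out dm-1, whose 346 ms average wait stands out despite near-zero utilization.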
7. free -m

$ free -m
total used free shared buffers cached
Mem: 245998 24545 221453 83 59 541
-/+ buffers/cache: 23944 222053
Swap: 0 0 0

The right two columns show:

buffers: For the buffer cache, used for block device I/O.
cached: For the page cache, used by file systems.

We just want to check that these aren’t near-zero in size, which can lead to higher disk I/O (confirm using iostat), and worse performance. The above example looks fine, with many Mbytes in each.

The “-/+ buffers/cache” provides less confusing values for used and free memory. Linux uses free memory for the caches, but can reclaim it quickly if applications need it. So in a way the cached memory should be included in the free memory column, which this line does. There’s even a website, linuxatemyram, about this confusion.

It can be additionally confusing if ZFS on Linux is used, as we do for some services, as ZFS has its own file system cache that isn’t reflected properly by the free -m columns. It can appear that the system is low on free memory, when that memory is in fact available for use from the ZFS cache as needed.
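The "-/+ buffers/cache" arithmetic can be spelled out: since Linux can reclaim the buffer and page caches, they effectively count as free memory. A one-line sketch (my own) reproduces the line from the example:

```python
def effectively_free_mb(free_mb, buffers_mb, cached_mb):
    """Reconstruct the "-/+ buffers/cache" free column from free -m output."""
    return free_mb + buffers_mb + cached_mb

# Values from the example: free=221453, buffers=59, cached=541.
print(effectively_free_mb(221453, 59, 541))  # 222053, matching the -/+ line
```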
8. sar -n DEV 1

$ sar -n DEV 1
Linux 3.13.0-49-generic (titanclusters-xxxxx) 07/14/2015 _x86_64_ (32 CPU)
12:16:48 AM IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
12:16:49 AM eth0 18763.00 5032.00 20686.42 478.30 0.00 0.00 0.00 0.00
12:16:49 AM lo 14.00 14.00 1.36 1.36 0.00 0.00 0.00 0.00
12:16:49 AM docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
12:16:49 AM IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
12:16:50 AM eth0 19763.00 5101.00 21999.10 482.56 0.00 0.00 0.00 0.00
12:16:50 AM lo 20.00 20.00 3.25 3.25 0.00 0.00 0.00 0.00
12:16:50 AM docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00

Use this tool to check network interface throughput: rxkB/s and txkB/s, as a measure of workload, and also to check if any limit has been reached. In the above example, eth0 receive is reaching 22 Mbytes/s, which is 176 Mbits/sec (well under, say, a 1 Gbit/sec limit).

This version also has %ifutil for device utilization (max of both directions for full duplex), which is something we also use Brendan’s nicstat tool to measure. And like with nicstat, this is hard to get right, and seems to not be working in this example (0.00).
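The unit conversion used above is worth having as a helper: sar reports kB/s while network limits are quoted in Mbit/s (1 kB = 8 kbit, 1000 kbit = 1 Mbit). A minimal sketch:

```python
def kbytes_per_sec_to_mbits(kb_per_sec):
    """Convert a sar rxkB/s or txkB/s value to Mbit/s."""
    return kb_per_sec * 8 / 1000.0

print(round(kbytes_per_sec_to_mbits(21999.10)))  # eth0 receive above: ~176 Mbit/s
```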
9. sar -n TCP,ETCP 1

$ sar -n TCP,ETCP 1
Linux 3.13.0-49-generic (titanclusters-xxxxx) 07/14/2015 _x86_64_ (32 CPU)
12:17:19 AM active/s passive/s iseg/s oseg/s
12:17:20 AM 1.00 0.00 10233.00 18846.00
12:17:19 AM atmptf/s estres/s retrans/s isegerr/s orsts/s
12:17:20 AM 0.00 0.00 0.00 0.00 0.00
12:17:20 AM active/s passive/s iseg/s oseg/s
12:17:21 AM 1.00 0.00 8359.00 6039.00
12:17:20 AM atmptf/s estres/s retrans/s isegerr/s orsts/s
12:17:21 AM 0.00 0.00 0.00 0.00 0.00

This is a summarized view of some key TCP metrics. These include:

active/s: Number of locally-initiated TCP connections per second (e.g., via connect()).
passive/s: Number of remotely-initiated TCP connections per second (e.g., via accept()).
retrans/s: Number of TCP retransmits per second.

The active and passive counts are often useful as a rough measure of server load: number of new accepted connections (passive), and number of downstream connections (active). It might help to think of active as outbound, and passive as inbound, but this isn’t strictly true (e.g., consider a localhost to localhost connection).

Retransmits are a sign of a network or server issue; it may be an unreliable network (e.g., the public Internet), or it may be due to a server being overloaded and dropping packets. The example above shows just one new TCP connection per second.
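One rough way to judge retransmits (my own framing, not from the article) is as a share of output segments, which gives a quick feel for loss between this host and its peers:

```python
def retransmit_percent(retrans_per_sec, oseg_per_sec):
    """Retransmits as a rough percentage of output segments per second."""
    if oseg_per_sec == 0:
        return 0.0
    return 100.0 * retrans_per_sec / oseg_per_sec

# Values from the example's first interval: retrans/s=0.00, oseg/s=18846.00.
print(retransmit_percent(0.00, 18846.00))  # 0.0
```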
10. top

$ top
top - 00:15:40 up 21:56, 1 user, load average: 31.09, 29.87, 29.92
Tasks: 871 total, 1 running, 868 sleeping, 0 stopped, 2 zombie
%Cpu(s): 96.8 us, 0.4 sy, 0.0 ni, 2.7 id, 0.1 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem: 25190241+total, 24921688 used, 22698073+free, 60448 buffers
KiB Swap: 0 total, 0 used, 0 free. 554208 cached Mem
20248 root 20 0 0.227t 0.012t 18748 S 3090 5.2 29812:58 java
4213 root 20 0 2722544 64640 44232 S 23.5 0.0 233:35.37 mesos-slave
66128 titancl+ 20 0 24344 2332 1172 R 1.0 0.0 0:00.07 top
5235 root 20 0 38.227g 547004 49996 S 0.7 0.2 2:02.74 java
4299 root 20 0 20.015g 2.682g 16836 S 0.3 1.1 33:14.42 java
1 root 20 0 33620 2920 1496 S 0.0 0.0 0:03.82 init
2 root 20 0 0 0 0 S 0.0 0.0 0:00.02 kthreadd
3 root 20 0 0 0 0 S 0.0 0.0 0:05.35 ksoftirqd/0
5 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/0:0H
6 root 20 0 0 0 0 S 0.0 0.0 0:06.94 kworker/u256:0
8 root 20 0 0 0 0 S 0.0 0.0 2:38.05 rcu_sched

The top command includes many of the metrics we checked earlier. It can be handy to run it to see if anything looks wildly different from the earlier commands, which would indicate that load is variable.

A downside to top is that it is harder to see patterns over time, which may be clearer in tools like vmstat and pidstat, which provide rolling output. Evidence of intermittent issues can also be lost if you don't pause the output quickly enough (Ctrl-S to pause, Ctrl-Q to continue), and the screen clears.
Follow-on Analysis

There are many more commands and methodologies you can apply to drill deeper. See Brendan’s Linux Performance Tools tutorial from Velocity 2015, which works through over 40 commands, covering observability, benchmarking, tuning, static performance tuning, profiling, and tracing.

Tackling system reliability and performance problems at web scale is one of our passions. If you would like to join us in tackling these kinds of challenges we are hiring!
Written by Brendan Gregg
No comment yet.