Amazing Science
Scooped by Dr. Stefan Gruenwald!

'Google Maps' for the body: A biomedical revolution down to a single cell


A world-first UNSW collaboration that uses previously top-secret technology to zoom through the human body down to the level of a single cell could be a game-changer for medicine, an international research conference in the United States has been told.

The imaging technology, developed by high-tech German optical and industrial measurement manufacturer Zeiss, was originally developed to scan silicon wafers for defects.

UNSW Professor Melissa Knothe Tate, the Paul Trainor Chair of Biomedical Engineering, is leading the project, which is using semiconductor technology to explore osteoporosis and osteoarthritis.

Using Google algorithms, Professor Knothe Tate -- an engineer and expert in cell biology and regenerative medicine -- is able to zoom in and out from the scale of the whole joint down to the cellular level "just as you would with Google Maps," reducing to "a matter of weeks analyses that once took 25 years to complete."

Her team is also using cutting-edge microtome and MRI technology to examine how movement and weight bearing affects the movement of molecules within joints, exploring the relationship between blood, bone, lymphatics and muscle. "For the first time we have the ability to go from the whole body down to how the cells are getting their nutrition and how this is all connected," said Professor Knothe Tate. "This could open the door to as yet unknown new therapies and preventions."

Professor Knothe Tate is the first to use the system in humans. She has forged a pioneering partnership with the US-based Cleveland Clinic, Brown and Stanford Universities, as well as Zeiss and Google to help crunch terabytes of data gathered from human hip studies. Similar research is underway at Harvard University and Heidelberg in Germany to map neural pathways and connections in the brains of mice.

The above story is based on materials provided by University of New South Wales.

CineversityTV's curator insight, March 30, 2015 8:53 PM

What happens with the metadata? Is it in the public domain? Or in the greedy hands of the elite?

Courtney Jones's curator insight, April 2, 2015 4:49 AM

New advances in biomedical technology

Scooped by Dr. Stefan Gruenwald!

Brain in your pocket: Smartphone replaces thinking, study shows

In the ancient world — circa, say, 2007 — terabytes of information were not available on sleekly designed devices that fit in our pockets. While we now can turn to iPhones and Samsung Galaxys to quickly access facts both essential and trivial — the fastest way to grandmother’s house, how many cups are in a gallon, the name of the actor who played Newman on “Seinfeld” — we once had to keep such tidbits in our heads or, perhaps, in encyclopedia sets.

With the arrival of the smartphone, such dusty tomes are unnecessary. But new research suggests our devices are more than a convenience — they may be changing the way we think. In “The brain in your pocket: Evidence that Smartphones are used to supplant thinking,” forthcoming from the journal Computers in Human Behavior, lead authors Nathaniel Barr and Gordon Pennycook of the psychology department at the University of Waterloo in Ontario said those who think more intuitively and less analytically are more likely to rely on technology.

“That people typically forego effortful analytic thinking in lieu of fast and easy intuition suggests that individuals may allow their Smartphones to do their thinking for them,” the authors wrote.

What’s the difference between intuitive and analytical thinking? In the paper, the authors cite this problem: “A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?”

The brain-teaser evokes an intuitive response: The ball must cost 10 cents, right? This response, unfortunately, is obviously wrong — 10 cents plus $1.10 equals $1.20, not $1.10. Only through analytic thinking can one arrive at the correct response: The ball costs 5 cents. (Confused? Five cents plus $1.05 equals $1.10.)
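
The analytic route can be written out directly. A minimal sketch (not from the paper, just the algebra above in code):

```python
# The bat-and-ball problem as two equations: bat + ball = 1.10 and
# bat = ball + 1.00. Substituting the second into the first gives
# 2 * ball + 1.00 = 1.10, so ball = (1.10 - 1.00) / 2 = 0.05.
total, difference = 1.10, 1.00
ball = (total - difference) / 2
bat = ball + difference
print(f"ball = ${ball:.2f}, bat = ${bat:.2f}")  # ball = $0.05, bat = $1.05
```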

It’s just this sort of analytical thinking that avid smartphone users seem to avoid. For the paper, researchers asked subjects how much they used their smartphones, then gave them tests to measure not just their intelligence, but how they processed information.

Scooped by Dr. Stefan Gruenwald!

Google collaborates with UCSB to build a quantum device that detects and corrects its own errors

Google launches an effort to build its own quantum computer that has the potential to change computing forever. Google is about to begin designing and building hardware for a quantum computer, a type of machine that can exploit quantum physics to solve problems that would take a conventional computer millions of years. Since 2009, Google has been working with controversial startup D-Wave Systems, which claims to make “the first commercial quantum computer.” Last year, Google purchased one of D-Wave’s machines to be able to test the machine thoroughly. But independent tests published earlier this year found no evidence that D-Wave’s computer uses quantum physics at all to solve problems more efficiently than a conventional machine.

Now, John Martinis, a professor at University of California, Santa Barbara, has joined Google to establish a new quantum hardware lab near the university. He will try to make his own versions of the kind of chip inside a D-Wave machine. Martinis has spent more than a decade working on a more proven approach to quantum computing, and built some of the largest, most error-free systems of qubits, the basic building blocks that encode information in a quantum computer.

“We would like to rethink the design and make the qubits in a different way,” says Martinis of his effort to improve on D-Wave’s hardware. “We think there’s an opportunity in the way we build our qubits to improve the machine.” Martinis has taken a joint position with Google and UCSB that will allow him to continue his own research at the university.

Quantum computers could be immensely faster than any existing computer at certain problems. That’s because qubits working together can use the quirks of quantum mechanics to quickly discard incorrect paths to a solution and home in on the correct one. However, qubits are tricky to operate because quantum states are so delicate.

Chris Monroe, a professor who leads a quantum computing lab at the University of Maryland, welcomed the news that one of the leading lights in the field was going to work on the question of whether designs like D-Wave’s can be useful. “I think this is a great development to have legitimate researchers give it a try,” he says.

Since showing off its first machine in 2007, D-Wave has irritated academic researchers by making claims for its computers without providing the evidence its critics say is needed to back them up. However, the company has attracted over $140 million in funding and sold several of its machines (see “The CIA and Jeff Bezos Bet on Quantum Computing”).

There is no question that D-Wave’s machine can perform certain calculations. And research published in 2011 showed that the machine’s chip harbors the right kind of quantum physics needed for quantum computing. But evidence is lacking that it uses that physics in the way needed to unlock the huge speedups promised by a quantum computer. It could be solving problems using only ordinary physics.

Martinis’s previous work has been focused on the conventional approach to quantum computing. He set a new milestone in the field this April, when his lab announced that it could operate five qubits together with relatively low error rates. Larger systems of such qubits could be configured to run just about any kind of algorithm depending on the problem at hand, much like a conventional computer. To be useful, a quantum computer would probably need to be built with tens of thousands of qubits or more.

Martinis was a coauthor on a paper published in Science earlier this year that took the most rigorous independent look at a D-Wave machine yet. It concluded that in the tests run on the computer, there was “no evidence of quantum speedup.” Without that, critics say, D-Wave is nothing more than an overhyped, and rather weird, conventional computer. The company counters that the tests of its machine involved the wrong kind of problems to demonstrate its benefits.

Martinis’s work on D-Wave’s machine led him into talks with Google, and to his new position. Theory and simulation suggest that it might be possible for quantum annealers, the type of machine D-Wave builds, to deliver quantum speedups, and he considers it an open question. “There’s some really interesting science that people are trying to figure out,” he says.

Benjamin Chiong's curator insight, March 23, 2015 7:23 PM

Looking at Amdahl's law, it is not only data storage that matters but every component of the computer. As each piece of hardware advances, the rest of the parts should keep up as well. Quantum computing forges a world that allows massive processing power to analyze Big Data. This gives us an idea of how the future might look.
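
As a quick illustration of the law the comment invokes (a sketch, not part of the original post): Amdahl's law says the serial fraction of a workload caps the speedup no matter how many cores are added.

```python
# Amdahl's law: if a fraction p of a computation can be parallelized,
# n cores give an overall speedup of 1 / ((1 - p) + p / n). The serial
# fraction (1 - p) caps the benefit regardless of core count.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallelizable, speedup saturates near 20x:
for n in (8, 80, 8000):
    print(f"{n:>5} cores: {amdahl_speedup(0.95, n):.1f}x")
```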

Scooped by Dr. Stefan Gruenwald!

First general learning system that can learn directly from experience to master a wide range of challenging tasks

The gamer punches in play after endless play of the Atari classic Space Invaders. Through an interminable chain of failures, the gamer adapts its gameplay strategy to reach for the highest score. But this is no human with a joystick in a 1970s basement. Artificial intelligence is learning to play Atari games. The Atari addict is a deep-learning algorithm called DQN.

This algorithm began with no previous information about Space Invaders—or, for that matter, the other 48 Atari 2600 games it is learning to play and sometimes master after two straight weeks of gameplay. In fact, it wasn't even designed to take on old video games; it is a general-purpose, self-teaching computer program. Yet after watching the Atari screen and fiddling with the controls for two weeks, DQN is playing at a level that would humiliate even a professional flesh-and-blood gamer.

Volodymyr Mnih and his team of computer scientists at Google, who have just unveiled DQN in the journal Nature, say their creation is more than just an impressive gamer. Mnih says the general-purpose DQN learning algorithm could be the first rung on a ladder to artificial intelligence.

"This is the first time that anyone has built a single general learning system that can learn directly from experience to master a wide range of challenging tasks," says Demis Hassabis, a member of Google's team. The algorithm runs on little more than a powerful desktop PC with a souped-up graphics card. At its core, DQN combines two separate advances in machine learning in a fascinating way. The first advance is a type of positive-reinforcement learning method called Q-learning. This is where DQN, or Deep Q-Network, gets its middle initial. Q-learning means that DQN is constantly trying to make joystick and button-pressing decisions that will get it closer to a property that computer scientists call "Q." In simple terms, Q is what the algorithm estimates to be the biggest possible future reward for each decision. For Atari games, that reward is the game score.

Knowing what decisions will lead it to the high scorer's list, though, is no simple task. Keep in mind that DQN starts with zero information about each game it plays. To understand how to maximize your score in a game like Space Invaders, you have to recognize a thousand different facts: how the pixelated aliens move, the fact that shooting them gets you points, when to shoot, what shooting does, the fact that you control the tank, and many more assumptions, most of which a human player understands intuitively. And then, if the algorithm changes to a racing game, a side-scroller, or Pac-Man, it must learn an entirely new set of facts. That's where the second machine learning advance comes in. DQN is also built upon a vast artificial neural network, partially inspired by the human brain. Simply put, the neural network is a complex program built to process and sort information from noise. It tells DQN what is and isn't important on the screen.
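
The Q-learning update described above can be sketched in miniature. The toy corridor environment and constants below are illustrative assumptions, not Google's DQN, which replaces the lookup table with a deep neural network:

```python
import random

# Tabular Q-learning on a toy five-state corridor: the agent moves left or
# right and is rewarded only on reaching the rightmost state. The update
# rule -- nudging Q toward reward plus discounted best future Q -- is the
# same idea DQN applies with a neural network instead of this table.
N_STATES = 5
ACTIONS = (-1, +1)                     # move left, move right
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        if random.random() < epsilon:
            a = random.choice(ACTIONS)                  # explore
        else:                                           # exploit, random tie-break
            a = max(ACTIONS, key=lambda b: (Q[(s, b)], random.random()))
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# Greedy policy after training: move right (+1) from every non-terminal state.
policy = [max(ACTIONS, key=lambda b: Q[(s, b)]) for s in range(N_STATES - 1)]
print(policy)
```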

Nature Video of DQN AI

Scooped by Dr. Stefan Gruenwald!

New Artificial Lighting Tricks Human Brain into Seeing Sunlight

Access to natural daylight has long been one of the biggest limiting factors in building design – some solutions involve reflecting real daylight from the outdoors, but until now no solution has been able to mimic natural refraction processes and fool our minds into thinking we are surrounded by actual sunlight. Developed by CoeLux in Italy, this new form of artificial light is able to dupe humans, cameras and computers alike using a thin coating of nanoparticles to simulate Rayleigh scattering, a natural process that takes place in Earth’s atmosphere causing diffuse sky radiation. It was not enough to make the lights brighter or bluer – variegation and other elements were needed as well.
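
The Rayleigh scattering the coating simulates has a characteristic wavelength dependence: scattered intensity scales as 1/λ⁴, which is why the sky looks blue. A quick sketch of that ratio (an illustration of the physics, not CoeLux's model):

```python
# Rayleigh scattering intensity scales as 1/wavelength^4, so short blue
# wavelengths scatter far more strongly than long red ones -- the reason
# diffuse sky light (real or simulated) looks blue.
def rayleigh_ratio(lambda_a_nm, lambda_b_nm):
    """Relative scattered intensity of wavelength a versus wavelength b."""
    return (lambda_b_nm / lambda_a_nm) ** 4

# Blue light at ~450 nm scatters about 4.4x more than red at ~650 nm:
print(round(rayleigh_ratio(450, 650), 1))  # -> 4.4
```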

The result is an effect that carries the same qualities we are used to experiencing outside, from color to light quality. The company also boasts that these photos are untouched and that their fake skylights in showrooms fool people in person just as effectively, appearing to have infinite depth just like one would expect looking up into the sky.

The potential applications are effectively endless, from lighting deep indoor spaces to replacing natural light in places where winters drag on and daylight hours are short. The company sees opportunities in areas like healthcare facilities where it may not be possible to put patients near real windows for spatial or health reasons. Currently, three lighting types are on offer to simulate various broad regions – Mediterranean, Tropical and Nordic – featuring various balances of light, shade, hue and contrast. They are also working on additional offerings, including simulated daytime sequences (sunrise through sunset) and color variations to reflect different kinds of weather conditions.

Scooped by Dr. Stefan Gruenwald!

The future of electronics could ultimately lead to electrical conductors that are 100% efficient

The future of electronics could lie in a material from its past, as researchers from The Ohio State University work to turn germanium—the material of 1940s transistors—into a potential replacement for silicon. At the American Association for the Advancement of Science meeting, assistant professor of chemistry Joshua Goldberger reported progress in developing a form of germanium called germanane.

In 2013, Goldberger’s lab at Ohio State became the first to succeed at creating a one-atom-thick sheet of germanane—a sheet so thin, it can be thought of as two-dimensional. Since then, he and his team have been tinkering with the atomic bonds across the top and bottom of the sheet, and creating hybrid versions of the material that incorporate other atoms such as tin.

The goal is to make a material that not only transmits electrons 10 times faster than silicon, but is also better at absorbing and emitting light—a key feature for the advancement of efficient LEDs and lasers. “We’ve found that by tuning the nature of these bonds, we can tune the electronic structure of the material. We can increase or decrease the energy it absorbs,” Goldberger said. “So potentially we could make a material that traverses the entire electromagnetic spectrum, or absorbs different colors, depending on those bonds.”
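
The connection between the energy a material absorbs and the color of light involved follows from the photon energy relation E ≈ 1240 eV·nm / λ. A quick sketch (an illustration of that relation, not from the article):

```python
# Photon energy-wavelength relation: E (eV) ≈ 1239.84 / wavelength (nm).
# Tuning a semiconductor's absorption energy therefore shifts which colors
# it absorbs or emits -- the knob Goldberger describes for germanane.
def photon_energy_ev(wavelength_nm):
    return 1239.84 / wavelength_nm

for color, nm in [("red", 650), ("green", 530), ("blue", 450)]:
    print(f"{color}: {photon_energy_ev(nm):.2f} eV")
```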

As they create the various forms of germanane, the researchers are trying to exploit traditional silicon manufacturing methods as much as possible, to make any advancements easily adoptable by industry.

Aside from these traditional semiconductor applications, there have been numerous predictions that a tin version of the material could conduct electricity with 100 percent efficiency at room temperature. The heavier tin atom allows the material to become a 2D “topological insulator,” which conducts electricity only at its edges, Goldberger explained. Such a material is predicted to occur only with specific bonds across the top and bottom surface, such as a hydroxide bond.

Goldberger’s lab has verified that this theoretical material can be chemically stable. His lab has created germanane with up to 9 percent tin atoms incorporated, and shown that tin atoms have a strong preference to bond to hydroxide above and below the sheet. His group is currently developing routes towards preparing the pure tin 2D derivatives.

Scooped by Dr. Stefan Gruenwald!

#WeAreNotWaiting: Confessions of a Diabetes Hacker

There is a quiet hacking revolution taking place in the Type 1 diabetes community. You can identify its followers by the popular hashtag #WeAreNotWaiting on Twitter. There are currently thousands of individuals running an app known as Nightscout to upload real-time blood glucose readings from their Dexcom continuous glucose monitors to their own private servers. This allows hundreds of parents to keep close watch over the health of children with Type 1. While this kind of customized surveillance can be done with current diabetes technology, it has yet to be approved by the FDA. Many have decided that we are not waiting for that agency’s blessing.

Stephen Black reports: "I was diagnosed with Type 1 four months ago. At that time, I knew nothing about diabetes. I was in disbelief when I discovered that if I wanted to see my glucose levels in real time, I would need to carry around an extra, bulky device in my pocket. If I wanted to see that data anywhere else, I would need to plug it into a computer and upload it. If a loved one wanted to check in to see if I was doing alright, they would need to call me and hope I answered. This seemed anachronistic in the wireless age. I promptly got to work on a project I have dubbed DexDrip, a wireless Bluetooth bridge that would allow real-time blood glucose readings from a sensor to be delivered straight to my phone.

I started by researching the Nightscout project. After doing some digging, I soon discovered an underground community of individuals working on similar projects to mine. Skirting just outside the peripheral vision of the FDA, they are working on all sorts of projects, including their own closed-loop artificial pancreas systems. Some are working in groups, others are working alone, but all share the same goal of making lives for people with diabetes easier, better, and (although they are taking personal risks) safer. Finding these people was difficult because many want to remain anonymous, but I was amazed by the community; everyone was there to help me.

Once I discovered how to intercept transmissions from my Dexcom transmitter, it was pretty straightforward to steer the signal to my smartphone. The math presents a bigger challenge. Every 12 hours, the Dexcom receiver asks the user to enter his or her current blood glucose value so it can recalibrate. I couldn’t find a way to work around this automatic recalibration request. If I was going to cut the receiver out of the equation, I would need to write my own calibration algorithm."
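
Calibration of the kind Black describes is often done as a least-squares linear fit from raw sensor counts to fingerstick readings. The sketch below is a hypothetical illustration of that general approach, not Black's actual DexDrip algorithm, and all the numbers are made up:

```python
# Hypothetical calibration sketch: fit glucose = slope * raw + intercept
# from user-entered fingerstick values via ordinary least squares.
# NOT Stephen Black's code -- just an illustration of the approach.
def fit_calibration(raw_values, fingerstick_mgdl):
    n = len(raw_values)
    mean_x = sum(raw_values) / n
    mean_y = sum(fingerstick_mgdl) / n
    sxx = sum((x - mean_x) ** 2 for x in raw_values)
    sxy = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(raw_values, fingerstick_mgdl))
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

# Three invented (raw count, fingerstick mg/dL) calibration points:
slope, intercept = fit_calibration([1200, 1500, 1900], [90, 120, 160])
print(round(slope * 1700 + intercept))  # estimated glucose for raw 1700 -> 140
```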

Read Stephen Black's full report here

Scooped by Dr. Stefan Gruenwald!

Semiconductor Engineering — Will 7nm and 5nm Really Happen?

New materials and transistors could extend Moore’s Law to 1.5nm or beyond, but there are a lot of problems ahead and a lot of unanswered questions.

As leading-edge chipmakers continue to ramp up their 28nm and 20nm devices, vendors are also updating their future technology roadmaps. In fact, IC makers are talking about their new shipment schedules for 10nm. And GlobalFoundries, Intel, Samsung and TSMC are narrowing down the options for 7nm, 5nm and beyond.

There is a high probability that IC makers can scale to 10nm, but vendors face a multitude of challenges at 7nm and beyond. The big question is whether the 7nm node will ever happen. And is 5nm even possible? The 3nm node is too far out in the future and is still up in the air.

If the industry moves beyond 10nm, it won’t be a straightforward process of simply scaling the gate length, as in previous nodes. The migration to 7nm itself requires a monumental and expensive shift towards new transistor architectures, channel materials and interconnects. It also involves the development of new fab tools and materials, which are either immature or don’t exist today.

Technically, it’s possible to make 7nm and 5nm chips in R&D. One challenge is to design and manufacture devices that meet the cost and power requirements for systems. Another challenge is to make the right technology choices, as the roadmap for the various options remains in flux.

Indeed, in previous roadmaps from many organizations, the leading transistor candidate has been the high-mobility III-V finFET at 7nm, followed by a next-generation transistor type at 5nm.

Now, the options are all over the map. For example, according to Imec’s latest roadmap, III-V finFETs may get pushed out to 5nm, although they could still appear at 7nm. And a next-generation transistor could arrive as early as 7nm, according to Imec.

Scooped by Dr. Stefan Gruenwald!

Mastering multicore: Parallelizing common algorithms

Every undergraduate computer-science major takes a course on data structures, which describes different ways of organizing data in a computer’s memory. Every data structure has its own advantages: Some are good for fast retrieval, some for efficient search, some for quick insertions and deletions, and so on.

Today, hardware manufacturers are making computer chips faster by giving them more cores, or processing units. But while some data structures are well adapted to multicore computing, others are not. In principle, doubling the number of cores should double the speed of a computation. With algorithms that use a common data structure called a priority queue, that’s been true for up to about eight cores — but adding any more cores actually causes performance to plummet.

At the Association for Computing Machinery’s Symposium on Principles and Practice of Parallel Programming in February, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory will describe a new way of implementing priority queues that lets them keep pace with the addition of new cores. In simulations, algorithms using their data structure continued to demonstrate performance improvement with the addition of new cores, up to a total of 80 cores.

A priority queue is a data structure that, as its name might suggest, sequences data items according to priorities assigned them when they’re stored. At any given time, only the item at the front of the queue — the highest-priority item — can be retrieved. Priority queues are central to the standard algorithms for finding the shortest path across a network and for simulating events, and they’ve been used for a host of other applications, from data compression to network scheduling.

With multicore systems, however, conflicts arise when multiple cores try to access the front of a priority queue at the same time. The problem is compounded by modern chips’ reliance on caches — high-speed memory banks where cores store local copies of frequently used data.
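
The front-of-queue semantics that creates this hot spot can be seen in a minimal sequential priority queue (a sketch using Python's standard heapq, not the MIT data structure):

```python
import heapq

# A minimal (sequential) priority queue via a binary heap: items come out
# in priority order, and only the front element is ever removed -- the
# single hot spot that makes naive sharing across cores so contentious.
pq = []
for priority, task in [(3, "simulate"), (1, "route"), (2, "compress")]:
    heapq.heappush(pq, (priority, task))

# Popping always yields the current highest-priority (lowest number) item.
order = [heapq.heappop(pq)[1] for _ in range(len(pq))]
print(order)  # -> ['route', 'compress', 'simulate']
```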

Scooped by Dr. Stefan Gruenwald!

Top Rated Electronic Health Record Software Is Free

Earlier this month, Medscape published the results of their recent survey, which asked 18,575 physicians across 25 specialties to rate their Electronic Health Record (EHR) system. For overall satisfaction, the #1 ranked EHR solution was the VA’s Computerized Patient Record System, also known as VistA. It was built using open-source software and is therefore license free.

There’s also a publicly available version of VistA called OpenVista, and several companies leverage a services-only business model for larger OpenVista installations. For smaller installations, a YouTube video suggests the free OpenVista software can be installed in about 10 minutes (bring your own hardware).

Of course, free software licensing doesn’t make the hardware, installation or maintenance free, but the lack of any software licensing fees does reduce the overall cost, especially for large installations, and that can typically save millions of dollars.

Open-source software also charts a much different course for design changes, which are not dependent on the resources, budgets (or revenue requirements) of independent software vendors (ISVs).

In many ways, VistA’s top rating is no surprise because it’s the only EHR installation in the U.S. with a truly national footprint. As a single software solution, VistA is designed to support almost 9 million vets through about 1,700 different care sites around the country.

Kris Prendergast's curator insight, March 9, 2015 10:01 AM

How vendor lock-in drains customers

Scooped by Dr. Stefan Gruenwald!

Wireless technology more than 10 times faster than the best Wi-Fi is coming to market in 2015

Smartphones, tablets and PCs should appear this year that can send and receive data wirelessly more than 10 times faster than a Wi-Fi connection. As well as transferring videos and other large files in a flash, this could do away with the cables used to hook PCs up to displays or projectors.

The wireless technology that will allow this is known as 60 gigahertz—after the radio frequency it uses—and by the name “WiGig.” Computing giants including Apple, Microsoft, and Sony have quietly collaborated on the new standard for years, and a handful of products featuring WiGig are already available. But the technology will get a big push this year, with several companies bringing products featuring WiGig to market.

WiGig carries data much faster than Wi-Fi because its higher frequency radio signal can be used to encode more information. The maximum speed of a wireless channel using the current 60 gigahertz protocol is seven gigabits per second (in perfect conditions). That compares to the 433 megabits per second possible via a single channel using the most advanced Wi-Fi protocol in use today, which transmits at five gigahertz. Most Wi-Fi networks use less advanced technology that operates even slower.
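
Those quoted link rates imply the headline speedup. A back-of-envelope sketch (ideal rates only; the 10 GB movie size is an assumption, and real-world throughput is lower due to protocol overhead):

```python
# Back-of-envelope comparison of the quoted ideal link rates.
wigig_bps = 7e9    # 7 Gbit/s, 60 GHz WiGig channel (perfect conditions)
wifi_bps = 433e6   # 433 Mbit/s, single channel of the fastest Wi-Fi protocol

print(round(wigig_bps / wifi_bps, 1))  # -> 16.2, i.e. more than 10x faster

# Time to move an assumed 10 GB (80 Gbit) HD movie at each ideal rate:
movie_bits = 10 * 8e9
print(round(movie_bits / wigig_bps, 1), "s over WiGig")      # -> 11.4 s
print(round(movie_bits / wifi_bps / 60, 1), "min over Wi-Fi")  # -> 3.1 min
```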

Qualcomm, a leading maker of mobile device processors and wireless chips, has invested heavily in WiGig. At the International Consumer Electronics Show in Las Vegas this month, the company demonstrated a wireless router for home or office use with the technology built in. That device will go on sale by the end of 2015.

Qualcomm has also designed the latest in its line of Snapdragon mobile processors to support WiGig. The “reference designs” Qualcomm shows to customers include its 60-gigahertz wireless chips, and the first devices built using the Snapdragon 810 processor are expected to go on sale in mid-2015. At CES, Qualcomm showed tablets built with that processor using WiGig to transfer video.

Those working on WiGig technology predict that demand for high definition video will make the technology necessary. The latest smartphones now record video at extremely high resolution. Grodzinsky says WiGig will start appearing in set-top boxes, making it easier to stream content from mobile devices to high definition TVs, or upload it to the Internet. Qualcomm calculates that its WiGig technology will make it possible to transfer a full-length HD movie in just three minutes.

Besides Qualcomm, Intel is preparing its own WiGig technology, and the company said at its annual developer conference last summer that WiGig chips would appear in laptops in 2015. In demos then and at CES this month, Intel showed a laptop using WiGig to connect with displays and other peripheral devices.

Samsung also expects to launch WiGig products this year. The company announced late in 2014 that it had developed its own implementation, and said it expected to commercialize it in 2015. The technology will appear in Samsung’s mobile, health-care, and smart home products.

Scooped by Dr. Stefan Gruenwald!

‘Text neck’ is becoming an ‘epidemic’ and could wreck your spine

The human head weighs about a dozen pounds. But as the neck bends forward and down, the weight on the cervical spine begins to increase. At a 15-degree angle, this weight is about 27 pounds, at 30 degrees it’s 40 pounds, at 45 degrees it’s 49 pounds, and at 60 degrees it’s 60 pounds.
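
Those reported angle-to-load figures can be interpolated for in-between postures. A small sketch (the 0-degree value takes the article's "about a dozen pounds" as 12 lb, and linear interpolation is my assumption, not Hansraj's model):

```python
# Cervical loads reported in the article, indexed by forward-tilt angle.
load_lb = {0: 12, 15: 27, 30: 40, 45: 49, 60: 60}

def approx_load(angle):
    """Linearly interpolate the reported load for an angle in 0..60 degrees."""
    angles = sorted(load_lb)
    lo = max(a for a in angles if a <= angle)
    hi = min(a for a in angles if a >= angle)
    if lo == hi:
        return float(load_lb[lo])
    t = (angle - lo) / (hi - lo)
    return load_lb[lo] + t * (load_lb[hi] - load_lb[lo])

print(approx_load(20))  # between the 15- and 30-degree readings
```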

That’s the burden that comes with staring at a smartphone — the way millions do for hours every day, according to research published by Kenneth Hansraj in the National Library of Medicine. The study will appear next month in Surgical Technology International. Over time, researchers say, this poor posture, sometimes called “text neck,” can lead to early wear-and-tear on the spine, degeneration and even surgery.

“It is an epidemic or, at least, it’s very common,” Hansraj, chief of spine surgery at New York Spine Surgery and Rehabilitation Medicine, told The Washington Post. “Just look around you, everyone has their heads down.”

Can’t grasp the significance of 60 pounds? Imagine carrying an 8-year-old around your neck several hours per day. Smartphone users spend an average of two to four hours per day hunched over, reading e-mails, sending texts or checking social media sites. That’s 700 to 1,400 hours per year people are putting stress on their spines, according to the research. And high-schoolers might be the worst. They could conceivably spend an additional 5,000 hours in this position, Hansraj said.

“The problem is really profound in young people,” he said. “With this excessive stress in the neck, we might start seeing young people needing spine care. I would really like to see parents showing more guidance.”

Miloš Bajčetić's curator insight, January 13, 2015 1:38 AM

Hansraj gave smartphone users tips to avoid pain:


- Look down at your device with your eyes. No need to bend your neck.


- Exercise: Move your head from left to right several times. Use your hands to provide resistance and push your head against them, first forward and then backward. Stand in a doorway with your arms extended and push your chest forward to strengthen “the muscles of good posture,” Hansraj said.

Tamer Tekin's curator insight, September 30, 2015 5:06 PM

Bad position

Scooped by Dr. Stefan Gruenwald!

Intel Curie Module is a microcomputer the size of a small button designed to speed wearable device innovation

Intel Curie Module is a microcomputer the size of a small button designed to speed wearable device innovation | Amazing Science |

If 2015 becomes The Year of Wearables, it will require computing intelligence to fit inside all sorts of things – magical things of different shapes and sizes, even beyond our imagination. That means many product designers from the world of sports, fashion, travel and other areas will turn to computer technology for the first time.

Intel Curie Module is the tiny, mighty technology that will help bring these innovations to life. Intel CEO Brian Krzanich revealed the very first prototype at the 2015 International Consumer Electronics Show, and it will be available in the second half of the year. Curie brings together essential components needed to bring digital capabilities, including wireless connectivity, to wearables.

Krzanich described it as a tiny, power-efficient solution that enables businesses to quickly and effectively create a broad range of wearable technologies. “It is power efficient and can run for extended periods of time from a coin-sized battery,” Krzanich told the CES crowd.

The Intel Curie module is a tiny hardware product based on the Intel Quark SE SoC, which is the company’s first purpose-built system on a chip for wearable devices. It contains Bluetooth low-energy radio, sensors and battery charging technologies. Krzanich described that there’s a dedicated sensor hub processor and pattern classification engine that allows it, for example, to identify different sporting activities quickly and precisely.
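As a rough illustration of what a pattern classification engine does with sensor data (and only an illustration: the feature values and centroids below are made up, not anything from Intel's actual module), a nearest-centroid classifier over two hypothetical motion features might look like this:

```python
import math

# Hypothetical training centroids: (mean accel magnitude in g, step cadence in Hz)
# per activity. Illustrative numbers only, not Intel's actual model.
CENTROIDS = {
    "walking": (1.1, 1.8),
    "running": (2.4, 2.9),
    "cycling": (1.3, 1.2),
}

def classify(features):
    """Nearest-centroid classification, the simplest pattern-matching scheme."""
    def dist(c):
        return math.hypot(features[0] - c[0], features[1] - c[1])
    return min(CENTROIDS, key=lambda name: dist(CENTROIDS[name]))

print(classify((2.3, 3.0)))  # closest to the "running" centroid
```

A production engine would of course use far richer features and a trained model, but the principle of matching incoming sensor patterns against known activity profiles is the same.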

“This product just came fresh from our labs, so to show you that it’s working, we created a simple app,” he said. He demonstrated how the button-sized Intel Curie Module was measuring his steps and sending that data to an app on his smartphone. In the future, we will see wearable products created by companies that have historically never used silicon before, according to Mike Bell, vice president and general manager of Intel’s New Devices Group.

No comment yet.
Scooped by Dr. Stefan Gruenwald!

Could analog computing accelerate highly complex scientific computer simulations?

Could analog computing accelerate highly complex scientific computer simulations? | Amazing Science |

DARPA announced today, March 19, a Request for Information (RFI) on methods for using analog approaches to speed up computation of the complex mathematics that characterize scientific computing. “The standard digital computer cluster equipped with multiple central processing units (CPUs), each programmed to tackle a particular piece of a problem, is just not designed to solve the kinds of equations at the core of large-scale simulations, such as those describing complex fluid dynamics and plasmas,” said Vincent Tang, program manager in DARPA’s Defense Sciences Office.

These critical equations, known as partial differential equations, describe fundamental physical principles like motion, diffusion, and equilibrium, he notes. But they involve continuous rates of change over a large range of physical parameters relating to the problems of interest, so they don’t lend themselves to being broken up and solved in discrete pieces by individual CPUs. Examples of such problems include predicting the spread of an epidemic, understanding the potential impacts of climate change, or modeling the acoustical signature of a newly designed ship hull.
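To see why such equations strain a CPU cluster, consider the standard digital approach: discretizing a continuous PDE into finite-difference updates over a grid. A minimal sketch for the 1-D diffusion equation (illustrative only, not DARPA's target workload):

```python
# Explicit finite-difference step for the 1-D diffusion equation
#   du/dt = D * d2u/dx2
# -- the kind of discretization a digital solver must impose on a
# continuous PDE. dt must satisfy D*dt/dx**2 <= 0.5 for stability.

def diffuse(u, D=1.0, dx=1.0, dt=0.25, steps=1):
    for _ in range(steps):
        u = [u[0]] + [
            u[i] + D * dt / dx**2 * (u[i + 1] - 2 * u[i] + u[i - 1])
            for i in range(1, len(u) - 1)
        ] + [u[-1]]   # fixed-value boundaries
    return u

# A sharp spike of heat spreads out over time:
u0 = [0.0, 0.0, 1.0, 0.0, 0.0]
print(diffuse(u0, steps=1))  # -> [0.0, 0.25, 0.5, 0.25, 0.0]
```

Real simulations run this kind of update over millions of grid points and thousands of time steps, which is exactly the workload an analog co-processor might handle natively in continuous variables.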

What if there were a processor specially designed for such equations? What might it look like? Analog computers solve equations by manipulating continuously changing values instead of discrete digital measurements, and have been around for more than a century. In the 1930s, for example, Vannevar Bush—who a decade later would help initiate and administer the Manhattan Project—created an analog “differential analyzer” that computed complex integrations through the use of a novel wheel-and-disc mechanism.

Their potential to excel at dynamical problems too challenging for today’s digital processors may today be bolstered by other recent breakthroughs, including advances in microelectromechanical systems, optical engineering, microfluidics, metamaterials and even approaches to using DNA as a computational platform. So it’s conceivable, Tang said, that novel computational substrates could exceed the performance of modern CPUs for certain specialized problems, if they can be scaled and integrated into modern computer architectures.

DARPA’s RFI is called Analog and Continuous-variable Co-processors for Efficient Scientific Simulation (ACCESS). The RFI seeks new processing paradigms that have the potential to overcome current barriers in computing performance. “In general, we’re interested in information on all approaches, analog, digital, or hybrid ones, that have the potential to revolutionize how we perform scientific simulations,” Tang said.

No comment yet.
Scooped by Dr. Stefan Gruenwald!

Researchers Create A Simulated Mouse Brain in a Virtual Mouse Body

Researchers Create A Simulated Mouse Brain in a Virtual Mouse Body | Amazing Science |

Scientist Marc-Oliver Gewaltig and his team at the Human Brain Project (HBP) built a model mouse brain and a model mouse body, integrating them both into a single simulation and providing a simplified but comprehensive model of how the body and the brain interact with each other. "Replicating sensory input and motor output is one of the best ways to go towards a detailed brain model analogous to the real thing," explains Gewaltig.

As computing technology improves, their goal is to build the tools and the infrastructure that will allow researchers to perform virtual experiments on mice and other virtual organisms. This virtual neurorobotics platform is just one of the collaborative interfaces being developed by the HBP. A first version of the software will be released to collaborators in April. The HBP scientists used biological data about the mouse brain collected by the Allen Brain Institute in Seattle and the Biomedical Informatics Research Network in San Diego. These data contain detailed information about the positions of the mouse brain's 75 million neurons and the connections between different regions of the brain. They integrated this information with complementary data on the shapes, sizes and connectivity of specific types of neurons collected by the Blue Brain Project in Geneva.

A simplified version of the virtual mouse brain (just 200,000 neurons) was then mapped to different parts of the mouse body, including the mouse's spinal cord, whiskers, eyes and skin. For instance, touching the mouse's whiskers activated the corresponding parts of the mouse sensory cortex. And they expect the models to improve as more data comes in and gets incorporated.

For Gewaltig, building a virtual organism is an exercise in data integration. By bringing together multiple sources of data of varying detail into a single virtual model and testing this against reality, data integration provides a way of evaluating – and fostering – our own understanding of the brain. In this way, he hopes to provide a big picture of the brain by bringing together separated data sets from around the world.

Gewaltig compares the exercise to the 15th century European data integration projects in geography, when scientists had to patch together known smaller scale maps. These first attempts were not to scale and were incomplete, but the resulting globes helped guide further explorations and the development of better tools for mapping the Earth, until reaching today's precision.

Read more:
Human Brain Project
NEST simulator software
Largest neuronal network simulation using NEST

Open Source Data Sets:
Allen Institute for Brain Science
Biomedical Informatics Research Network (BIRN)

The Behaim Globe:
Germanisches National Museum
Department of Geodesy and Geoinformation, TU Wien

No comment yet.
Scooped by Dr. Stefan Gruenwald!

Dr. Google joins Mayo Clinic

Dr. Google joins Mayo Clinic | Amazing Science |
The deal to produce clinical summaries under the Mayo Clinic name for Google searches symbolizes the medical priesthood's acceptance that information technology has reshaped the doctor-patient relationship. More disruptions are already on the way.

If information is power, digitized information is distributed power. While “patient-centered care” has been directed by professionals towards patients, collaborative health – what some call “participatory medicine” or “person-centric care” – shifts the perspective from the patient outwards.

Collaboration means sharing. At places like Mayo and Houston’s MD Anderson Cancer Center, the doctor’s detailed notes, long seen only by other clinicians, are available through a mobile app for patients to see when they choose and share how they wish. mHealth makes the process mundane, while the content makes it an utterly radical act.

About 5 million patients nationwide currently have electronic access to open notes. Boston’s Beth Israel Deaconess Medical Center and a few other institutions are planning to allow patients to make additions and corrections to what they call “OurNotes.” Not surprisingly, many doctors remain mortified by this medical sacrilege.

Even more threatening is an imminent deluge of patient-generated health data churned out by a growing list of products from major consumer companies. Sensors are being incorporated into wearables, watches, smartphones and (in a Ford prototype) even a “car that cares” with biometric sensors in the seat and steering wheel. Sitting in your car suddenly becomes telemedicine.

To be sure, traditional information channels remain. For example, a doctor-prescribed, Food and Drug Administration-approved app uses sensors and personalized analytics to prevent severe asthma attacks. Increasingly common, though, is digitized data that doesn’t need a doctor at all. For example, a Microsoft fitness band not only provides constant heart rate monitoring, according to a New York Times review, but is part of a health “platform” employing algorithms to deliver “actionable information” and contextual analysis. By comparison, “Dr. Google” belongs in a Norman Rockwell painting.

No comment yet.
Scooped by Dr. Stefan Gruenwald!

Brain makes decisions with same method used to break WW2 Enigma code

Brain makes decisions with same method used to break WW2 Enigma code | Amazing Science |

When making simple decisions, neurons in the brain apply the same statistical trick used by Alan Turing to help break Germany’s Enigma code during World War II, according to a new study in animals by researchers at Columbia University’s Mortimer B. Zuckerman Mind Brain Behavior Institute and Department of Neuroscience. Results of the study were published Feb. 5 in Neuron.

As depicted in the film “The Imitation Game,” Alan Turing and his team of codebreakers devised the statistical technique to help them decipher German military messages encrypted with the Enigma machine. The technique today is called Wald’s sequential probability ratio test, after Columbia professor Abraham Wald, who independently developed the test to determine if batches of munitions should be shipped to the front or if they contained too many duds.

Finding pairs of messages encrypted with the same Enigma settings was critical to unlocking the code. Turing’s statistical test, in essence, decided as efficiently as possible if any two messages were a pair.

The test evaluated corresponding pairs of letters from the two messages, aligned one above the other (in the film, codebreakers are often pictured doing this in the background, sliding messages around on grids). Although the letters themselves were gibberish, Turing realized that Enigma would preserve the matching probabilities of the original messages, as some letters are more common than others.

The codebreakers assigned values to aligned pairs of letters in the two messages. Unmatched pairs were given a negative value, matched pairs a positive value.
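The scoring scheme described above is the heart of Wald's sequential probability ratio test: evidence accumulates, letter pair by letter pair, until it crosses a decision threshold. A minimal sketch, using illustrative coincidence probabilities and thresholds rather than the codebreakers' actual figures:

```python
import math

# Illustrative probabilities (assumed, not the codebreakers' exact values):
# rate at which aligned letters coincide for two messages that share
# Enigma settings, versus two unrelated messages.
P_MATCH_PAIR = 1 / 17
P_MATCH_RANDOM = 1 / 26

ACCEPT, REJECT = 4.0, -4.0  # decision thresholds on the summed evidence

def sprt(alignment):
    """Score aligned letter pairs until the evidence crosses a threshold."""
    score = 0.0
    for a, b in alignment:
        if a == b:   # matched pair: positive evidence
            score += math.log(P_MATCH_PAIR / P_MATCH_RANDOM)
        else:        # unmatched pair: negative evidence
            score += math.log((1 - P_MATCH_PAIR) / (1 - P_MATCH_RANDOM))
        if score >= ACCEPT:
            return "pair"
        if score <= REJECT:
            return "not a pair"
    return "undecided"   # ran out of letters before deciding
```

The efficiency Turing prized comes from stopping as soon as either threshold is crossed, rather than scoring every letter of every candidate pair.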

No comment yet.
Scooped by Dr. Stefan Gruenwald!

Data Scientist on a Quest to Turn Computers Into Doctors

Data Scientist on a Quest to Turn Computers Into Doctors | Amazing Science |

Some of the world’s most brilliant minds are working as data scientists at places like Google, Facebook, and Twitter—analyzing the enormous troves of online information generated by these tech giants—and for hacker and entrepreneur Jeremy Howard, that’s a bit depressing. Howard, a data scientist himself, spent a few years as the president of Kaggle, a kind of online data science community that sought to feed the growing thirst for information analysis. He came to realize that while many of Kaggle’s online data analysis competitions helped scientists make new breakthroughs, the potential of these new techniques wasn’t being fully realized. “Data science is a very sexy job at the moment,” he says. “But when I look at what a lot of data scientists are actually doing, the vast majority of work out there is on product recommendations and advertising technology and so forth.”

So, after leaving Kaggle last year, Howard decided he would find a better use for data science. Eventually, he settled on medicine. And he even did a kind of end run around the data scientists, leveraging not so much the power of the human brain as the rapidly evolving talents of artificial brains. His new company is called Enlitic, and it wants to use state-of-the-art machine learning algorithms—what’s known as “deep learning”—to diagnose illness and disease.

Publicly revealed for the first time today, the project is only just getting off the ground—“the big opportunities are going to take years to develop,” Howard says—but it’s yet another step forward for deep learning, a form of artificial intelligence that more closely mimics the way our brains work. Facebook is exploring deep learning as a way of recognizing faces in photos. Google uses it for image tagging and voice recognition. Microsoft does real-time translation in Skype. And the list goes on.

But Howard hopes to use deep learning for something more meaningful. His basic idea is to create a system akin to the Star Trek Tricorder, though perhaps not as portable. Enlitic will gather data about a particular patient—from medical images to lab test results to doctors’ notes—and its deep learning algorithms will analyze this data in an effort to reach a diagnosis and suggest treatments. The point, Howard says, isn’t to replace doctors, but to give them the tools they need to work more effectively. With this in mind, the company will share its algorithms with clinics, hospitals, and other medical outfits, hoping they can help refine its techniques. Howard says that the health care industry has been slow to pick up on the deep-learning trend because it was rather expensive to build the computing clusters needed to run deep learning algorithms. But that’s changing.

The real challenge, Howard says, isn’t writing algorithms but getting enough data to train those algorithms. He says Enlitic is working with a number of organizations that specialize in gathering anonymized medical data for this type of research, but he declines to reveal the names of the organizations he’s working with. And while he’s tight-lipped about the company’s technique now, he says that much of the work the company does will eventually be published in research papers.

Mike Dele's curator insight, March 20, 2015 10:00 PM

why don't we look at the possibility of creating and manufacturing human spare parts just like for cars to replace any form of problem?

Benjamin Mzhari's curator insight, March 27, 2015 8:37 AM

i fore see this type of profession becoming dynamic in the sense that it will not only look at business data but other statistics figures that will aid businesses.

Scooped by Dr. Stefan Gruenwald!

Cryptographers Could Prevent Satellite Collisions

Cryptographers Could Prevent Satellite Collisions | Amazing Science |
In February 2009 the U.S.'s Iridium 33 satellite collided with the Russian Cosmos 2251, instantly destroying both communications satellites. According to ground-based telescopes tracking Iridium and Cosmos at the time, the two should have missed each other, but onboard instrumentation data from even one of the satellites would have told a different story. Why weren't operators using this positional information?

Orbital data are actually guarded secrets: satellite owners view the locations and trajectories of their on-orbit assets as private. Corporations fear losing competitive advantage—sharing exact positioning could help rivals determine the extent of their capabilities. Meanwhile governments fear that disclosure could weaken national security. But even minor collisions can cause millions of dollars' worth of damage and send debris into the path of other satellites and spacecraft carrying humans, such as the International Space Station, which is why the Iridium-Cosmos crash prompted those in the field to find an immediate fix to the clandestine problem.

In the current working solution, the world's four largest satellite communications providers have teamed up with a trusted third party: Analytical Graphics. The company aggregates their orbital data and alerts participants when satellites are at risk. This arrangement, however, requires that all participants maintain mutual trust of the third party, a situation often difficult or impossible to arrange as more players enter the field and launch more satellites into orbit.

Now experts are thinking cryptography, which can eliminate the need for mutual trust, may be a better option. In the 1980s specialists developed algorithms that allowed many people to jointly compute a function on private data without revealing any number of secrets. In 2010 DARPA tasked teams of cryptographers to apply this technology to develop so-called secure multiparty computation (MPC) protocols for satellite data sharing. In this method, each participant loads proprietary data into its own software, which then sends messages back and forth according to a publicly specified MPC protocol. The design of the protocol guarantees that participants can compute a desired output (for example, the probability of collision) but nothing else. And because the protocol design is public, anyone involved can write their own software client—there would be no need for all parties to trust one another.
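To make the idea concrete, the simplest MPC building block is additive secret sharing: each participant splits its private value into random shares that are individually meaningless but jointly reconstruct a sum. This toy sketch (not the DARPA protocols, and omitting the message-passing layer entirely) computes a joint total without any party revealing its own input:

```python
import random

Q = 2**31 - 1  # all arithmetic is modulo a public prime

def share(secret, n=3):
    """Split a secret into n additive shares that sum to it mod Q."""
    shares = [random.randrange(Q) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % Q)
    return shares

def add_shares(all_shares):
    """Each party sums the shares it holds; the partial totals reconstruct the sum."""
    partials = [sum(col) % Q for col in zip(*all_shares)]
    return sum(partials) % Q

# Three operators with private values (hypothetical encoded orbital parameters):
secrets = [1200, 750, 22000]
shared = [share(s) for s in secrets]
print(add_shares(shared))  # -> 23950, with no single share revealing any input
```

An actual collision-screening protocol computes something far richer than a sum, but the guarantee is the same in spirit: participants learn only the agreed-upon output, never each other's raw data.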
No comment yet.
Scooped by Dr. Stefan Gruenwald!

Google wants to make your internet connection 1,000 times faster (up to 10Gbps)

Google wants to make your internet connection 1,000 times faster (up to 10Gbps) | Amazing Science |

Google is working on technology to deliver data transfer speeds over the Internet at 10 gigabits per second, 10 times faster than the connections offered by Google Fiber in Kansas City, a Google executive revealed Wednesday, according to a USA Today report. That's roughly 1,000 times faster than the average US connection speed of 7.2 megabits per second in 2014. The project is part of Google's vision of the next-generation Internet, allowing for more stable connections for data-intensive applications and greater adoption of software as a service, Google CFO Patrick Pichette said during the Goldman Sachs Technology and Internet conference.

"That's where the world is going. It's going to happen," Pichette said. It may happen over a decade, but "why wouldn't we make it available in three years? That's what we're working on. There's no need to wait," he added. Few homes need 1Gbps or even 100Mbps broadband today, but existing capacity is steadily being absorbed by emerging technologies such as streaming audio and video, cloud storage, video chats, software updates, and multiplayer games.

Connections will be stretched even thinner with adoption of higher-resolution 4K video, expected to be the next bandwidth-hogging technology. Netflix, which plans to begin streaming content in 4K this year, said the 4K streams will need a 15Mbps connection, roughly twice the bandwidth needed to stream Super HD content.

Of course, Google isn't alone in its quest for faster Internet connections. A team of UK researchers announced last year that they had achieved wireless data transmission speeds of 10Gbps via visible light. Their "Li-fi" system used a micro-LED light bulb to transmit 3.5Gbps across each of the three primary colors of visible light: red, blue, and green.

No comment yet.
Scooped by Dr. Stefan Gruenwald!

Technique greatly extends duration of fragile quantum states, pointing toward practical quantum computers

Technique greatly extends duration of fragile quantum states, pointing toward practical quantum computers | Amazing Science |

Quantum computers are experimental devices that promise exponential speedups on some computational problems. Where a bit in a classical computer can represent either a 0 or a 1, a quantum bit, or qubit, can represent 0 and 1 simultaneously, letting quantum computers explore multiple problem solutions in parallel. But such “superpositions” of quantum states are, in practice, difficult to maintain.
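A superposition can be sketched numerically: a qubit's state is a pair of complex amplitudes whose squared magnitudes give the probabilities of measuring 0 or 1. This is a textbook illustration of the concept, unrelated to the paper's diamond hardware:

```python
import math

# A qubit state is a pair of complex amplitudes (alpha, beta) with
# |alpha|^2 + |beta|^2 = 1; measuring it yields 0 or 1 with those probabilities.
def measurement_probs(alpha, beta):
    p0, p1 = abs(alpha) ** 2, abs(beta) ** 2
    assert math.isclose(p0 + p1, 1.0), "state must be normalized"
    return p0, p1

# An equal superposition: the qubit "represents 0 and 1 simultaneously",
# and a measurement is equally likely to give either outcome.
alpha = beta = 1 / math.sqrt(2)
print(measurement_probs(alpha, beta))  # both probabilities are ~0.5
```

Maintaining such a state is exactly what decoherence destroys, which is why extending superposition lifetimes a hundredfold matters.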

In a paper appearing this week in Nature Communications, MIT researchers and colleagues at Brookhaven National Laboratory and the synthetic-diamond company Element Six describe a new design that in experiments extended the superposition time of a promising type of qubit a hundredfold.

In the long term, the work could lead toward practical quantum computers. But in the shorter term, it could enable the indefinite extension of quantum-secured communication links, a commercial application of quantum information technology that currently has a range of less than 100 miles.

The researchers’ qubit design employs nitrogen atoms embedded in synthetic diamond. When nitrogen atoms happen to be situated next to gaps in the diamond’s crystal lattice, they produce “nitrogen vacancies,” which enable researchers to optically control the magnetic orientation, or “spin,” of individual electrons and atomic nuclei. Spin can be up, down, or a superposition of the two.

To date, the most successful demonstrations of quantum computing have involved atoms trapped in magnetic fields. But “holding an atom in vacuum is difficult, so there’s been a big effort to try to trap them in solids,” says Dirk Englund, the Jamieson Career Development Assistant Professor in Electrical Engineering and Computer Science at MIT and corresponding author on the new paper.

“In particular, you want a transparent solid, so you can send light in and out. Crystals are better than many other solids, like glass, in that their atoms are nice and regular and their electronic structure is well defined. And amongst all the crystals, diamond is a particularly good host for capturing an atom, because it turns out that the nuclei of diamond are mostly free of magnetic dipoles, which can cause noise on the electron spin.”

No comment yet.
Scooped by Dr. Stefan Gruenwald!

Malware detection technology identifies malware without examining source code

Malware detection technology identifies malware without examining source code | Amazing Science |

Hyperion, new malware detection software that can quickly recognize malicious software even if the specific program has not been previously identified as a threat, has been licensed by Oak Ridge National Laboratory (ORNL) to R&K Cyber Solutions LLC (R&K).

Hyperion, which has been under development for a decade, offers more comprehensive scanning capabilities than existing cyber security methods, said one of its inventors, Stacy Prowell of the ORNL Cyber Warfare Research team. By computing and analyzing program behaviors associated with harmful intent, Hyperion can determine the software’s behavior without using its source code or even running the program.

“These behaviors can be automatically checked for known malicious operations as well as domain-specific problems,” Prowell said. “This technology helps detect vulnerabilities and can uncover malicious content before it has a chance to execute.”

“This approach is better than signature detection, which only searches for patterns of bytes,” Prowell said. “It’s easy for somebody to hide that — they can break it up and scatter it about the program so it won’t match any signature.”
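Prowell's point about signatures is easy to demonstrate: a naive scanner looks for a contiguous byte pattern, so merely interleaving the same bytes defeats it. A toy illustration (not how Hyperion or any real scanner works):

```python
SIGNATURE = b"\xde\xad\xbe\xef"  # a hypothetical known-bad byte pattern

def signature_scan(binary):
    """Flag the input only if the contiguous signature appears in it."""
    return SIGNATURE in binary

intact = b"\x00\x01" + SIGNATURE + b"\x02"
scattered = b"\xde\x00\xad\x00\xbe\x00\xef"  # same bytes, interleaved with padding

print(signature_scan(intact))     # True  -- caught
print(signature_scan(scattered))  # False -- trivially evaded
```

Behavior computation sidesteps this entirely by reasoning about what the program does rather than what bytes it contains.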

“Software behavior computation is an emerging science and technology that will have a profound effect on malware analysis and software assurance,” said R&K Cyber Solutions CEO Joseph Carter. “Computed behavior based on deep functional semantics is a much-needed cyber security approach that has not been previously available. Unlike current methods, behavior computation does not look at surface structure. Rather, it looks at deeper behavioral patterns.”

Carter adds that the technology’s malware analysis capabilities can be applied to multiple related cyber security problems, including software assurance in the absence of source code, hardware and software data exploitation and forensics, supply chain security analysis, anti-tamper analysis, and potential first intrusion detection systems based on behavior semantics.

No comment yet.
Scooped by Dr. Stefan Gruenwald!

Elon Musk reveals plan to put internet connectivity in space

Elon Musk reveals plan to put internet connectivity in space | Amazing Science |

At the SpaceX event held in Seattle, Elon Musk revealed his grand (and expensive) $10 billion plan to build internet connectivity in space. Musk’s vision wants to radically change the way we access internet. His plan includes putting satellites in space, between which data packets would bounce around before being passed down to Earth. Right now, data packets bounce about the various networks via routers.

Some say that Elon Musk’s ambitious project would enable a Smartphone to access the internet just like it communicates with GPS satellites. SpaceX will launch its satellites in a low orbit, so as to reduce communication lag. While geosynchronous communication satellites orbit the Earth from an altitude of 22,000 miles, SpaceX’s satellites would be orbiting the Earth from an altitude of 750 miles.
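The latency advantage of a low orbit is simple geometry: even at the speed of light, round-trip time grows with altitude. A quick back-of-the-envelope comparison using the altitudes above (idealized: satellite directly overhead, ignoring routing and processing delays):

```python
C_MILES_PER_S = 186_282  # speed of light in vacuum, miles per second

def round_trip_ms(altitude_miles):
    """Minimum up-and-down signal time to a satellite directly overhead."""
    return 2 * altitude_miles / C_MILES_PER_S * 1000

print(round(round_trip_ms(750), 1))     # SpaceX's proposed low orbit: ~8 ms
print(round(round_trip_ms(22_000), 1))  # geosynchronous orbit: ~236 ms
```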

Once Musk’s system is in place, data packets would simply be sent to space, from where they would bounce about the satellites, and ultimately be sent back to Earth. “The speed of light is 40 percent faster in the vacuum of space than it is for fiber,” says Musk, which is why he believes that his unnamed SpaceX venture is the future of internet connectivity, replacing traditional routers and networks.

The project is based out of SpaceX’s new Seattle office. It will initially start out with 60 workers, but Musk predicts that the workforce may grow to over 1,000 in three to four years. Musk wants “the best engineers that either live in Seattle or that want to move to the Seattle area and work on electronics, software, structures, and power systems,” to work with SpaceX.

JebaQpt's comment, January 21, 2015 11:21 PM
Elon Musk quotes
Justin Boersma's curator insight, March 27, 2015 7:12 AM

Global internet connectivity through Low Earth Orbit satellites can prove to be incredibly useful and revolutionise the way certain information may travel, e.g. designate specific types of data to be transmitted only through this network of satellites. This would overall increase connectivity and speed across the globe, and most likely require an overhaul of current networking hardware.

Scooped by Dr. Stefan Gruenwald!

Water-soluble silicon leads to dissolvable electronics

Water-soluble silicon leads to dissolvable electronics | Amazing Science |

Researchers working in a materials science lab are literally watching their work disappear before their eyes—but intentionally so. They're developing water-soluble integrated circuits that dissolve in water or biofluids in months, weeks, or even a few days. This technology, called transient electronics, could have applications for biomedical implants, zero-waste sensors, and many other semiconductor devices.

The researchers, led by John A. Rogers at the University of Illinois at Urbana-Champaign and Fiorenzo Omenetto at Tufts University, have published a study in a recent issue of Applied Physics Letters in which they analyzed the performance and dissolution times of various semiconductor materials.

The work builds on previous research, by the authors and others, which demonstrated that silicon—the most commonly used semiconductor material in today's electronic devices—can dissolve in water. Although it would take centuries to dissolve bulk silicon, thin layers of silicon can dissolve in more reasonable times at low but significant rates of 5-90 nm/day. The silicon dissolves due to hydrolysis, in which water and silicon react to form silicic acid. Silicic acid is environmentally and biologically benign.

In the new study, the researchers analyzed the dissolution characteristics of silicon dioxide and tungsten, which they used to fabricate two electronics devices: field-effect transistors and ring oscillators. Under biocompatible conditions (37 °C, 7.4 pH), dissolution rates ranged from 1 week for the tungsten components, to between 3 months and 3 years for the silicon dioxide components. The dissolution rates can be controlled by several factors, such as the thickness of the materials, the concentration and type of ions in the solution, and the method used to deposit the silicon dioxide on the original substrate.
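Given such rates, rough dissolution times follow directly from thickness divided by rate. A quick calculation using the article's reported 5-90 nm/day range for thin silicon (the 100 nm layer thickness is a hypothetical example):

```python
# Rough dissolution time: layer thickness divided by dissolution rate.
def days_to_dissolve(thickness_nm, rate_nm_per_day):
    return thickness_nm / rate_nm_per_day

layer = 100  # hypothetical 100 nm silicon layer
print(days_to_dissolve(layer, 90))  # fastest reported rate: ~1.1 days
print(days_to_dissolve(layer, 5))   # slowest reported rate: 20 days
```

This simple scaling is why thickness is one of the design knobs the researchers list for tuning a device's lifetime.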

As shown in the microscope images, the circuits do not dissolve in a uniform, layer-by-layer mode, but instead some places dissolve more rapidly than others. This is due to mechanical fractures in the fragile circuits, which cause the solution to penetrate through the layers more in some locations than in others. Although organic electronic materials are also often biodegradable, silicon-based electronics have the advantages of an overall higher performance and the use of complementary metal-oxide-semiconductor (CMOS) fabrication processes that allow for mass-production.

No comment yet.
Scooped by Dr. Stefan Gruenwald!

A Cyberattack Has Caused Confirmed Physical Damage for the Second Time Ever

A Cyberattack Has Caused Confirmed Physical Damage for the Second Time Ever | Amazing Science |

Amid all the noise the Sony hack generated over the holidays, a far more troubling cyberattack was largely lost in the chaos. Unless you follow security news closely, you likely missed it: a German government report released just before Christmas revealed that hackers had struck an unnamed steel mill in Germany. They did so by manipulating and disrupting control systems to such a degree that a blast furnace could not be properly shut down, resulting in “massive”—though unspecified—damage.

This is "only" the second confirmed case in which a wholly digital attack caused physical destruction of equipment. The first case, of course, was Stuxnet, the sophisticated digital weapon the U.S. and Israel launched against control systems in Iran in late 2007 or early 2008 to sabotage centrifuges at a uranium enrichment plant. That attack was discovered in 2010, and since then experts have warned that it was only a matter of time before other destructive attacks would occur.

Industrial control systems have been found to be riddled with vulnerabilities, though they manage critical systems in the electric grid, in water treatment plants and chemical facilities and even in hospitals and financial networks. A destructive attack on systems like these could cause even more harm than at a steel plant.

It’s not clear when exactly the attack in Germany took place. The report, issued by Germany’s Federal Office for Information Security (or BSI), indicates the attackers gained access to the steel mill through the plant’s business network, then successively worked their way into production networks to access systems controlling plant equipment. The attackers infiltrated the corporate network using a spear-phishing attack—sending targeted email that appears to come from a trusted source in order to trick the recipient into opening a malicious attachment or visiting a malicious web site where malware is downloaded to their computer. Once the attackers got a foothold on one system, they were able to explore the company’s networks, eventually compromising a “multitude” of systems, including industrial components on the production network.

No comment yet.