Amazing Science
Amazing science facts - 3D_printing • aging • AI • anthropology • art • astronomy • bigdata • bioinformatics • biology • biotech • chemistry • computers • cosmology • education • environment • evolution • future • genetics • genomics • geosciences • green_energy • history • language • map • material_science • math • med • medicine • microscopy • nanotech • neuroscience • paleontology • photography • photonics • physics • postings • robotics • science • technology • video
Scooped by Dr. Stefan Gruenwald!

TRANSFORM - Amazing Technology Invented By MIT

TRANSFORM fuses technology and design to celebrate its transformation from a piece of still furniture to a dynamic machine driven by the stream of data and energy. Created by Professor Hiroshi Ishii and the Tangible Media Group from the MIT Media Lab, TRANSFORM aims to inspire viewers with unexpected transformations, as well as the aesthetics of the complex machine in motion.


The work comprises three dynamic shape displays that move more than one thousand pins up and down in real time to transform the tabletop into a dynamic tangible display. The kinetic energy of the viewers, captured by a sensor, drives the wave motion represented by the dynamic pins.


The motion design is inspired by the dynamic interactions among wind, water and sand in nature, Escher’s representations of perpetual motion, and the attributes of sand castles built at the seashore. TRANSFORM tells the story of the conflict between nature and machine, and its reconciliation, through the ever-changing tabletop landscape.

Scooped by Dr. Stefan Gruenwald!

Can Intelligent Creatures Be as Big as a Galaxy?


We can imagine building neurons that are smaller than our own, in artificially intelligent systems. Electronic circuit elements, for example, are now substantially smaller than neurons. But they are also simpler in their behavior, and require a superstructure of support (energy, cooling, intercommunication) that takes up a substantial volume. It’s likely that the first true artificial intelligences will occupy volumes that are not so different from the size of our own bodies, despite being based on fundamentally different materials and architectures, again suggesting that there is something special about the meter scale.


What about on the supersize end of the spectrum? William S. Burroughs, in his novel The Ticket That Exploded, imagined that beneath a planetary surface lies “a vast mineral consciousness near absolute zero thinking in slow formations of crystal.” The astronomer Fred Hoyle wrote dramatically and convincingly of a sentient, hyper-intelligent “Black Cloud” comparable in size to the Earth-sun distance. His idea presaged the concept of Dyson spheres, massive structures that completely surround a star and capture most of its energy. It is also supported by calculations that my colleague Fred Adams and I are performing, which indicate that the most effective information-processing structures in the current-day galaxy might be catalyzed within the sooty winds ejected by dying red giant stars. For a few tens of thousands of years, dust-shrouded red giants provide enough energy, a large enough entropy gradient, and enough raw material to potentially out-compute the biospheres of a billion Earth-like planets.

How big could life forms like these become? Interesting thoughts require not only a complex brain, but also sufficient time for formulation. The speed of neural transmissions is about 300 kilometers per hour, implying that the signal crossing time in a human brain is about 1 millisecond. A human lifetime, then, comprises 2 trillion message-crossing times (and each crossing time is effectively amplified by rich, massively parallelized computational structuring). If both our brains and our neurons were 10 times bigger, and our lifespans and neural signaling speeds were unchanged, we’d have 10 times fewer thoughts during our lifetimes.
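The arithmetic here is easy to verify; a quick sketch, assuming a roughly 10-centimeter brain and an 80-year lifespan (figures not stated in the article, chosen to match its 1-millisecond and 2-trillion estimates):

```python
# Rough check of the message-crossing arithmetic in the text.
# Assumptions (not from the article): brain diameter ~0.1 m, lifespan ~80 years.

signal_speed = 300 * 1000 / 3600    # 300 km/h in m/s, ~83.3 m/s
brain_size = 0.1                    # meters (assumed)
lifespan = 80 * 365.25 * 24 * 3600  # seconds (assumed)

crossing_time = brain_size / signal_speed      # ~1.2 ms, close to the quoted 1 ms
crossings_per_life = lifespan / crossing_time  # ~2.1e12, i.e. ~2 trillion

# Scaling: a brain 10x bigger (same speed, same lifespan) gets 10x fewer crossings.
crossings_big = lifespan / (10 * brain_size / signal_speed)

print(f"crossing time: {crossing_time * 1000:.2f} ms")
print(f"crossings per lifetime: {crossings_per_life:.2e}")
print(f"with a 10x larger brain: {crossings_big:.2e}")
```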

If our brains grew enormously to, say, the size of our solar system, and featured speed-of-light signaling, the same number of message crossings would require more than the entire current age of the universe, leaving no time for evolution to work its course. If a brain were as big as our galaxy, the problem would become even more severe. From the moment of its formation, there has been time for only 10,000 or so messages to travel from one side of our galaxy to the other. We can argue, then, that it is difficult to imagine any life-like entities with complexity rivaling the human brain that occupy scales larger than the stellar size scale. Were they to exist, they wouldn’t yet have had sufficient time to actually do anything.

The Planetary Archives / San Francisco, California's curator insight, April 5, 3:05 PM
Sure they can. They are sometimes referred to as "Gods" by primitive beings.
Scooped by Dr. Stefan Gruenwald!

Startup brings driverless taxi service to Singapore


An exciting “driverless race” is underway among tech giants in the United States: In recent months, Google, Uber, and Tesla have made headlines for developing self-driving taxis for big cities. But a comparatively small MIT spinout, nuTonomy, has entered the race somewhat under the radar. The startup is developing a fleet of driverless taxis to serve as a more convenient form of public transit while helping reduce greenhouse gas emissions in the densely populated city-state of Singapore.


“This could make car-sharing something that is almost as convenient as having your own private car, but with the accessibility and cost of public transit,” says nuTonomy co-founder and chief technology officer Emilio Frazzoli, an MIT professor of aeronautical and astronautical engineering.


The startup’s driverless taxis follow optimal paths for picking up and dropping off passengers, reducing traffic congestion. Without the need to pay drivers, they should be cheaper than Uber and conventional taxis. The vehicles are also electric cars, manufactured through partnerships with automakers, and produce lower levels of greenhouse gas emissions than conventional vehicles do.


Last week, nuTonomy “passed its first driving test” in Singapore, Frazzoli says — meaning its driverless taxis navigated a custom obstacle course, without incident. Now, nuTonomy is in the process of getting approval for on-road testing in a business district, called One North, designated for autonomous-vehicle testing. In a few years, Frazzoli says, nuTonomy aims to deploy thousands of driverless taxis in Singapore. The company will act as the service provider to maintain the vehicles and determine when and how they can be operated safely.


But a big question remains: Will driverless taxis put public-transit operators out of work? In Singapore, Frazzoli says, that’s unlikely. “In Singapore, they want to have more buses, but they cannot find people to drive buses at night,” he says. “Robotics will not put these people out of jobs — it will provide more capacity and support that’s needed.”


Importantly, Frazzoli adds, driverless-taxi services used for public transit, such as nuTonomy’s, could promote wider use of electric cars, as consumers won’t need to purchase the expensive cars or worry about finding charging stations. This could have a major impact on the environment: A 2015 study published in Nature Climate Change found that by 2030 autonomous taxis — specifically, more efficient hybrid and electric cars — used worldwide could produce up to 94 percent lower greenhouse gas emissions per mile than conventional taxis.

Russell R. Roberts, Jr.'s curator insight, March 30, 12:24 AM
The Singapore taxi startup known as nuTonomy may outshine Tesla, Google, Uber and a host of companies trying to cash in on the driverless car market. This company is a spinoff of an idea promoted by MIT's Emilio Frazzoli, a professor of aeronautical and astronautical engineering. This car seems right for a compact urban area such as Singapore, where owning a car is both expensive and dangerous. Aloha, Russ.
Scooped by Dr. Stefan Gruenwald!

Researchers develop new method of trapping multiple particles using fluidics


Precise control of an individual particle or molecule is a difficult task. Controlling multiple particles simultaneously is an even more challenging endeavor. Researchers at the University of Illinois have developed a new method that relies on fluid flow to manipulate and assemble multiple particles. This new technique can trap a range of submicron- to micron-sized particles, including single DNA molecules, vesicles, drops or cells.


"This is a fundamentally new method for trapping multiple particles in solution," said Charles M. Schroeder, a U. of I. professor of chemical and biomolecular engineering. Schroeder conducted the research with mechanical science and engineering graduate student Anish Shenoy and chemical and biomolecular engineering professor Christopher Rao.

The study results were reported in the Proceedings of the National Academy of Sciences.


Many methods exist for particle trapping, with each type using a different modality for trapping - including optical, magnetic, acoustic and electrical forces. However, many of these techniques change or perturb the system that is being observed.


"The existing techniques can be very restrictive in particle properties required for trapping, and we wanted to study a broad range of systems like bacterial cells and different types of soft particles like vesicles, bubbles and droplets," Shenoy said. None of the prevailing techniques can be used for studying this broad range of systems across multiple length scales, he said. Thus, the researchers wanted to build a technique that could be generally applied to arbitrary numbers of arbitrary kinds of particles.


Called the Stokes Trap, the method developed by Schroeder's team relies on gentle fluid flow to manipulate particles. Schroeder's group is the first to implement multiple particle trapping and assembly using fluid flow.


In order to control the movement of the particles from a set starting position to a set ending position, Shenoy and his colleagues developed an automated control algorithm that calculates which pressures are required to drive the flow fields and precisely move the particles in a small microdevice. The algorithm can solve the complex optimization problem in half a millisecond, he said.
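The paper's control scheme isn't reproduced here, but the flavor of such an update can be sketched as a linear solve: if a linearized model maps the device's channel pressures to particle velocities, each control step solves for the pressures that best realize the desired velocities. Everything below (the matrix, the targets) is invented for illustration; the actual Stokes Trap model comes from the microfluidic flow physics in the PNAS paper.

```python
import numpy as np

# Hypothetical linearized flow model: particle velocities = M @ pressures.
# M is a made-up 4x4 matrix (2 particles x 2 velocity components, driven by
# 4 pressure channels); the real mapping comes from the device's flow physics.
rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4))

# Desired velocities to nudge each particle toward its target position.
v_target = np.array([1.0, -0.5, 0.2, 0.8])

# Least-squares solve for the pressures that best realize those velocities.
pressures, *_ = np.linalg.lstsq(M, v_target, rcond=None)

# Sanity check: applying the pressures reproduces the requested velocities.
assert np.allclose(M @ pressures, v_target)
```

In a real device this solve would run inside a feedback loop, re-estimating particle positions and recomputing pressures at every step.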

Scooped by Dr. Stefan Gruenwald!

These new quantum dot crystals could replace silicon in super-fast, next-gen computers


Solid, crystalline structures of incredibly tiny particles known as quantum dots have been developed by engineers in the US, and they're so close to perfect, they could be a serious contender for a silicon alternative in the super-fast computers of the future.


Just as single-crystal silicon wafers revolutionised computing technology more than 60 years ago (your phone, laptop, PC, and iPad wouldn’t exist without one), quantum dot solids could change everything about how we transmit and process information in the decades to come.


But despite the incredible potential of quantum dot crystals in computing technology, researchers have been struggling for years to organise each individual dot into a perfectly structured solid - something that’s crucial if you want to install it in a processor and run an electric charge through it.


The problem? Past efforts to build something out of quantum dots - which are made up of a mere 5,000 atoms each - have failed, because researchers couldn’t figure out how to 'glue' them together without using another type of material that messes with their performance.


"Previously, they were just thrown together, and you hoped for the best," lead researcher Tobias Hanrath from Cornell University told The Christian Science Monitor. "It was like throwing a couple thousand batteries into a bathtub and hoping you get charge flowing from one end to the other."


Instead of pursuing different chemicals and materials that could work as the 'glue' but hinder the quantum dot’s electrical properties, Hanrath and his team have figured out how to ditch the glue and stick the quantum dots to each other, Lego-style.

"If you take several quantum dots, all perfectly the same size, and you throw them together, they’ll automatically align into a bigger crystal," Hanrath says.


To achieve this, the researchers first made nanocrystals from lead and selenium, and built these into crystalline fragments. These fragments were then used to form two-dimensional, square-shaped 'superstructures' - tiny building blocks that attach to each other without the help of other atoms. 


Publishing the results in Nature Materials, the team claims that the electrical properties of these superstructures are potentially superior to all other existing semiconductor nanocrystals, and they could be used in new types of devices for super-efficient energy absorption and light emission. The structures aren’t entirely perfect though, which is a key limitation of using quantum dots as your building blocks. While every silicon atom is exactly the same size, each quantum dot can vary by about 5 percent, and even for an object just a few thousand atoms in size, that 5 percent variability is all it takes to prevent perfection.


Hanrath says that’s a good and a bad thing - good because they managed to hit the limits of what can be done with quantum dot solids, but bad, because they’ve hit the limits of what can be done with quantum dot solids.


"It's the equivalent of saying, 'Now we've made a really large single-crystal wafer of silicon, and you can do good things with it,'" he says in a press release. "That's the good part, but the potentially bad part of it is, we now have a better understanding that if you wanted to improve on our results, those challenges are going to be really, really difficult."

Scooped by Dr. Stefan Gruenwald!

3-D printer and 'Gecko Grippers' head to space station


A United Launch Alliance Atlas 5 rocket loaded with supplies and science experiments blasted off from Florida on Tuesday, boosting an Orbital ATK cargo capsule toward the International Space Station.


The 194-foot (59-meter) rocket soared off its seaside launch pad at Cape Canaveral Air Force Station at 11:05 p.m. EDT/0305 GMT. United Launch Alliance is a partnership of Lockheed Martin and Boeing.


Perched on top of the rocket was a Cygnus capsule loaded with nearly 7,500 pounds (3,400 kg) of food, science experiments and equipment, including a 3-D printer to build tools for astronauts and glue-free grippers modeled after gecko feet.


The printer works by heating plastic, metal or other materials into streams that are layered on top of each other to create three-dimensional objects.


“If we had a choice of what we could use that printer for, I’m sure we could be quite creative,” station commander Tim Kopra said during an inflight interview on Tuesday.


The experimental Gecko Gripper is a new kind of adhesive that mimics the way gecko lizards cling to surfaces without falling. It aims to test a method of attaching things in the weightless environment of space.


NASA is looking at robotic versions of gecko feet to attach sensors and other instruments onto and inside satellites.

The Gecko Gripper technology may lead to terrestrial versions of grippers that could, for example, hold flat-screen TVs to walls without anchoring systems and adhesives, said lead researcher Aaron Parness with NASA’s Jet Propulsion Laboratory in Pasadena, California.

Scooped by Dr. Stefan Gruenwald!

Stanford scientists develop new technique for imaging cells and tissues under the skin

Scientists have many tools at their disposal for looking at preserved tissue under a microscope in incredible detail, or peering into the living body at lower resolution. What they haven't had is a way to do both: create a three-dimensional real-time image of individual cells or even molecules in a living animal.

Now, Stanford scientists have provided the first glimpse under the skin of a living animal, showing intricate real-time details in three dimensions of the lymph and blood vessels.

The technique, called MOZART (for MOlecular imaging and characteriZation of tissue noninvasively At cellular ResoluTion), could one day allow scientists to detect tumors in the skin, colon or esophagus, or even to see the abnormal blood vessels that appear in the earliest stages of macular degeneration – a leading cause of blindness.

"We've been trying to look into the living body and see information at the level of the single cell," said Adam de la Zerda, an assistant professor of structural biology at Stanford and senior author on the paper. "Until now there has been no way to do that."

De la Zerda, who is also a member of Stanford Bio-X, said the technique could allow doctors to monitor how an otherwise invisible tumor under the skin is responding to treatment, or to understand how individual cells break free from a tumor and travel to distant sites.
Scooped by Dr. Stefan Gruenwald!

The Future of Wi-Fi Is 10,000 Times More Energy Efficient


Engineering students have discovered a way to reflect Wi-Fi packets instead of broadcasting them, dramatically cutting the power Wi-Fi consumes. That power drain is a problem that’s rapidly getting worse as more and more devices require access to the cloud, not to mention the constant strain of searching for a good signal or boosting a weak one.


The student researchers invented a new type of hardware that uses 10,000 times less power than traditional Wi-Fi networking equipment. It’s called Passive Wi-Fi (you can read their paper here), and it works just like a home router, only more efficiently. To give some perspective, the state of the art in low-power Wi-Fi transmission today consumes hundreds of milliwatts of power, whereas the technology the student researchers developed consumes only 10 to 50 microwatts, or 10,000 times less power.


Wi-Fi typically requires two radios to communicate back and forth, and it takes a lot of energy to discern the signal from the noise because there may be several devices using the same frequency (2.4 GHz or 5 GHz). Each device has an RF transmitter that creates a radio wave and a baseband chip that encodes that radio wave with data.

With Passive Wi-Fi, instead of each device using an analog radio frequency to receive and transmit a signal, just one produces a radio frequency. That frequency is relayed to your Wi-Fi-enabled device via separate, passive sensors that have only the baseband chip and an antenna and require almost no power. Those sensors pick up the signal and mirror it in a way that sends readable Wi-Fi to any device that has a Wi-Fi chipset in it.

This may sound a lot like a mesh network, with the signal bouncing from antenna point to antenna point, but it’s not. A mesh network uses multiple routers, with full analog RF transmitters and digital baseband chips, to receive and rebroadcast a signal.


“The low power passive device isn’t transmitting anything at all. It’s creating Wi-Fi packets just by reflection,” says Vamsi Talla, another student working on the project. “It’s a transmission technique that’s ultra low-powered, as opposed to a network device.” That “reflection” happens via a process called “backscatter,” and the students at UW have created Wi-Fi equipment that sends out a signal via backscatter instead of using a full radio signal.
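Conceptually, the backscatter scheme is on-off keying of a reflected carrier: the passive device toggles between reflecting and absorbing an externally generated wave to impress bits on it, and a receiver can recover them with simple envelope detection. A toy simulation of that idea (all frequencies and sizes are illustrative, not taken from the UW paper):

```python
import numpy as np

# Toy backscatter model: a router broadcasts a pure carrier; the passive
# device multiplies it by 1 (reflect) or 0 (absorb) to encode each bit.
fs, f_carrier, samples_per_bit = 100_000, 5_000, 100
bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])

t = np.arange(len(bits) * samples_per_bit) / fs
carrier = np.cos(2 * np.pi * f_carrier * t)

# Reflect/absorb: flipping this switch costs almost nothing,
# unlike generating the carrier in the first place.
gate = np.repeat(bits, samples_per_bit)
backscattered = gate * carrier

# Receiver: envelope detection, one decision per bit period.
envelope = np.abs(backscattered).reshape(len(bits), samples_per_bit).mean(axis=1)
decoded = (envelope > envelope.max() / 2).astype(int)

assert np.array_equal(decoded, bits)
```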


Right now most devices do not have the backscatter hardware inside of them to send Wi-Fi packets back to the Internet-connected router. But if this technology takes off, it could increase the number of devices that are connected to the Internet, because it nearly nullifies previous energy constraints of making a device Wi-Fi compatible.


To be clear, Passive Wi-Fi still requires running one Wi-Fi router, and Wi-Fi routers aren’t super energy efficient. The Environmental Protection Agency even created an Energy Star certification for home networking devices in 2013 to try to encourage the manufacture of less energy intensive devices. According to the EPA’s website, “If all small network equipment sold in the United States were ENERGY STAR certified, the energy cost savings would grow to more than $590 million each year and more than 7 billion pounds of annual greenhouse gas emissions would be prevented.” The energy savings with Passive Wi-Fi come from the Wi-Fi transmission chipset in devices that communicate via wireless Internet, not the router connected to the initial uplink.


It’s hard to say what this will do for your battery life, because there are so many components in a device that impact that—like the screen, for example. “But using Passive Wi-Fi would improve battery life by about as much as turning your Wi-Fi off would,” said Bryce Kellogg, an electrical engineering graduate student at UW who co-developed Passive Wi-Fi.


In the future, these passive sensors may even end up in our devices themselves, reflecting packets to send back to the router instead of broadcasting new transmitter waves. For now, using the hardware can reduce the energy used to spread Wi-Fi to devices.


“Our passive Wi-Fi devices now talk up to 11 megabits per second,” said Kellogg. For comparison’s sake, that’s 11 times faster than Bluetooth. One of the main selling points of devices communicating via Bluetooth rather than Wi-Fi has been Bluetooth’s comparatively low energy consumption. But Passive Wi-Fi is 1,000 times more energy efficient than Bluetooth, and the network can be secured like any Wi-Fi signal can, unlike Bluetooth.

Ra's curator insight, March 21, 3:31 PM
What's not to like?
Rescooped by Dr. Stefan Gruenwald from Virtual Neurorehabilitation!

D-Eye: a smartphone-based retinal imaging system


D-Eye, a smartphone-based retinal imaging system, has raised $1.68 million (€1.5 million) from Innogest, Invitalia Ventures, Giuseppe e Annamaria Cottino Foundation, and Si14.


The product was first conceived in Padua, Italy, by Dr. Andrea Russo, an ophthalmologist, when he was examining his patients. D-Eye CEO Rick Sill told MobiHealthNews that Russo got the idea when he was with a patient one day and his phone rang.


“He looked at the smartphone and said ‘I wonder if it would be possible to use the smartphone as a digital ophthalmoscope because now I could actually capture images using the smartphone. Then I’d be able to transmit those images to other doctors to view if they were so interested,’” Sill explained in an interview. “He went out and bought himself a 3D printer and started cranking out ideas for attaching a lens on top of a smartphone that would allow him to do just that, to image the retina.”

Via Daniel Perez-Marcos
Dr. Stefan Gruenwald's insight:
Digital ophthalmoscope using smartphone: such a great way of expanding the use of everyday technology!
Scooped by Dr. Stefan Gruenwald!

Experiment shows magnetic chips could dramatically increase computing's energy efficiency


In a breakthrough for energy-efficient computing, engineers at the University of California, Berkeley, have shown for the first time that magnetic chips can operate with the lowest fundamental level of energy dissipation possible under the laws of thermodynamics.


The findings, to be published Friday, March 11, 2016 in the peer-reviewed journal Science Advances, mean that dramatic reductions in power consumption are possible—as much as one-millionth the amount of energy per operation used by transistors in modern computers.


This is critical for mobile devices, which demand powerful processors that can run for a day or more on small, lightweight batteries. On a larger, industrial scale, as computing increasingly moves into 'the cloud,' the electricity demands of the giant cloud data centers are multiplying, collectively taking an increasing share of the country's—and world's—electrical grid.


"We wanted to know how small we could shrink the amount of energy needed for computing," said senior author Jeffrey Bokor, a UC Berkeley professor of electrical engineering and computer sciences and a faculty scientist at the Lawrence Berkeley National Laboratory. "The biggest challenge in designing computers and, in fact, all our electronics today is reducing their energy consumption."


Lowering energy use is a relatively recent shift in focus in chip manufacturing after decades of emphasis on packing greater numbers of increasingly tiny and faster transistors onto chips. "Making transistors go faster was requiring too much energy," said Bokor, who is also the deputy director of the Center for Energy Efficient Electronics Science, a Science and Technology Center at UC Berkeley funded by the National Science Foundation. "The chips were getting so hot they'd just melt."


Researchers have been turning to alternatives to conventional transistors, which currently rely upon the movement of electrons to switch between 0s and 1s. Partly because of electrical resistance, it takes a fair amount of energy to ensure that the signal between the two states is clear and reliably distinguishable, and this results in excess heat.
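The "lowest fundamental level of energy dissipation possible under the laws of thermodynamics" mentioned above is the Landauer limit, k_B·T·ln 2 per irreversible bit operation. A quick check of the numbers (room temperature of 300 K assumed, not stated in the article):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # room temperature in kelvin (assumed)

# Landauer limit: minimum dissipation for one irreversible bit operation.
landauer = k_B * T * math.log(2)
print(f"Landauer limit at 300 K: {landauer:.2e} J")  # ~2.87e-21 J

# The article's "one-millionth" figure implies that a conventional transistor
# operation costs roughly a million times this limit:
implied_transistor_op = 1e6 * landauer  # a few femtojoules per operation
print(f"implied conventional op energy: {implied_transistor_op:.1e} J")
```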

Rescooped by Dr. Stefan Gruenwald from drones!

DJI launches new era of intelligent flying machines


DJI, the world’s leading maker of unmanned aerial vehicles, on Tuesday launched the Phantom 4, the first consumer quadcopter camera (or “drone”) to use highly advanced computer vision and sensing technology to make professional aerial imaging easier for everyone.


The Phantom 4 expands on previous generations of DJI’s iconic Phantom line by adding new on-board intelligence that makes piloting and shooting great shots simple, through features like its Obstacle Sensing System, ActiveTrack and TapFly functionality.

“With the Phantom 4, we are entering an era where even beginners can fly with confidence,” said DJI CEO Frank Wang. “People have dreamed about one day having a drone collaborate creatively with them. That day has arrived.”


The Phantom 4’s Obstacle Sensing System features two forward-facing optical sensors that scan for obstacles and automatically direct the aircraft around the impediment when possible, reducing the risk of collision while ensuring flight direction remains constant. If the system determines the craft cannot go around the obstacle, it will slow to a stop and hover until the user redirects it. Obstacle avoidance also engages if the user triggers the drone’s “Return to Home” function to reduce the risk of collision when automatically flying back to its takeoff point.
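As described, the obstacle logic reduces to a small decision procedure: steer around when a clear route exists, otherwise brake to a hover and wait for the pilot. A hypothetical sketch (the function and its return values are invented here, not DJI's actual firmware):

```python
def avoidance_action(obstacle_ahead: bool, clear_path_around: bool) -> str:
    """Illustrative decision logic mirroring the behavior described above."""
    if not obstacle_ahead:
        return "continue"        # nothing detected by the forward sensors
    if clear_path_around:
        return "steer_around"    # re-route while keeping flight direction
    return "stop_and_hover"      # wait for the user to redirect the craft

assert avoidance_action(False, False) == "continue"
assert avoidance_action(True, True) == "steer_around"
assert avoidance_action(True, False) == "stop_and_hover"
```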


With ActiveTrack, the Phantom 4 breaks new ground, allowing users running the DJI Go app on iOS and Android devices to follow and keep the camera centered on the subject as it moves simply by tapping the subject on their smartphone or tablet. Perfectly-framed shots of moving joggers or cyclists, for example, simply require activating the ActiveTrack mode in the app.


The Phantom 4 understands three-dimensional images and uses machine learning to keep the object in the shot, even when the subject changes its shape or turns while moving. Users have full control over camera movement while in ActiveTrack mode – and can even move the camera around the object while it is in motion as the Phantom 4 keeps the subject framed in the center of the shot autonomously. A “pause” button on the Phantom 4’s remote controller allows the user to halt an autonomous flight at any time, leaving the drone to hover.


By using the TapFly function in the DJI Go app, users can double-tap a destination for their Phantom 4 on the screen, and the Phantom 4 calculates an optimal flight route to reach the destination while avoiding any obstructions in its path. Tap another spot and the Phantom 4 will smoothly transition towards that destination, making even the beginner pilot look like a seasoned professional.




Via Andres Flores
Rescooped by Dr. Stefan Gruenwald from levin's linkblog: Knowledge Channel!

Redesigning Wi-Fi may let devices communicate more easily


A clever way forward for powering the internet of things.


Many prophets of information technology (IT) believe that the next big movement in their field will be the “internet of things”. This, they hope, will connect objects hitherto beyond the reach of IT’s tendrils so that, for example, your sofa can buzz your phone to tell you that you have left your wallet behind, or your refrigerator can order your groceries without you having to make a shopping list. That, though, will mean putting chips in your sofa, your wallet and your fridge to enable them to talk to the rest of the world. And those chips will need power, not least to run their communications.


Sometimes, this power will come from the electricity grid or a battery. But that is not always convenient. However, Shyam Gollakota and his colleagues at the University of Washington, in Seattle, think they have at least part of an answer to the problem. They propose to reconfigure a chip’s communications so that they need almost no power to work.


Most conceptions of the internet of things assume the chips in sofas, wallets, fridges and so on will use technologies such as Wi-Fi and Bluetooth to communicate with each other—either directly, over short ranges, or via a base-station connected to the outside world, over longer ones. For a conventional chip to broadcast a Wi-Fi signal requires two things. First, it must generate a narrow-band carrier wave. Then, it must impress upon this wave a digital signal that a receiver can interpret.


Following Moore’s law, the components responsible for doing the impressing have become ever more efficient over the past couple of decades. Those generating the carrier wave, however, have not.


Dr Gollakota and his team reasoned that it should be possible to separate the jobs of generation and impression. The system they have designed has a central transmitter (which might be built into a Wi-Fi router) that broadcasts a pure carrier wave. Dr Gollakota’s new chips then impress binary data on this carrier wave by either reflecting it (for a one) or absorbing it (for a zero). Whether a chip reflects or absorbs the signal depends on whether or not its aerial is earthed, which is in turn controlled by a simple switch.


Not having to generate its own carrier wave reduces a chip’s power consumption ten-thousandfold, for throwing the switch requires only a minuscule amount of current. Moreover, though Dr Gollakota’s prototypes do still use batteries, this current could instead be extracted from the part of the carrier wave that is absorbed.
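In effect, each chip performs on-off keying on a carrier it never has to generate: reflecting the wave signals a one, absorbing it a zero. The toy Python sketch below illustrates the idea; the sample counts and cycles-per-bit are made-up illustrative numbers, not the parameters of Dr Gollakota's actual system.

```python
import math

SAMPLES_PER_BIT = 16        # illustrative values only, not the
CARRIER_CYCLES_PER_BIT = 4  # real system's parameters

def transmit(bits):
    """Tag output: the router's carrier multiplied by the switch state
    (1 = aerial reflects the carrier, 0 = aerial absorbs it)."""
    out = []
    for i, bit in enumerate(bits):
        for k in range(SAMPLES_PER_BIT):
            t = i + k / SAMPLES_PER_BIT  # one time unit per bit
            carrier = math.cos(2 * math.pi * CARRIER_CYCLES_PER_BIT * t)
            out.append(bit * carrier)
    return out

def receive(samples):
    """Recover bits by measuring the energy in each bit window."""
    bits = []
    for i in range(0, len(samples), SAMPLES_PER_BIT):
        window = samples[i:i + SAMPLES_PER_BIT]
        energy = sum(s * s for s in window) / SAMPLES_PER_BIT
        bits.append(1 if energy > 0.25 else 0)  # reflected carrier ~0.5
    return bits

message = [1, 0, 1, 1, 0, 0, 1, 0]
assert receive(transmit(message)) == message
```

A real receiver would also have to cancel the much stronger carrier it hears directly from the router, and cope with noise and multipath; the sketch ignores all radio impairments.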


The chips in this system, which Dr Gollakota plans to unveil on March 17th at the USENIX Symposium on Networked Systems Design and Implementation in Santa Clara, California, can, he claims, transmit data at a rate of up to 11 megabits a second to smartphones or laptops over 30 metres (100 feet) away, and through walls. Though that rate is worse than standard Wi-Fi, it is ten times better than the low-energy form of Bluetooth, which is the current favourite for the internet of things.

Via Levin Chin
Russell R. Roberts, Jr.'s curator insight, March 7, 11:17 PM
Fascinating developments for the expansion of Wi-Fi and the Internet of Things (IoT). Some of the initial breakthroughs have come from Dartmouth College in New Hampshire, which has designed a "magic wand" to relay medical data from a patient wearing a Wi-Fi-equipped cuff to his/her health professional. The device uses special chips requiring very little power and employs encryption to protect the data transmitted to a health care facility. Aloha, Russ.
Tony Guzman's curator insight, March 10, 10:05 AM
This article talks about the "Internet of Things" (IoT). Lately the scenarios shared here seem to be closer to reality. Thoughts?
Rescooped by Dr. Stefan Gruenwald from Systems biology and bioinformatics!

Network-based in silico drug efficacy screening


The increasing cost of drug development together with a significant drop in the number of new drug approvals raises the need for innovative approaches for target identification and efficacy prediction.

Now, scientists take advantage of our increasing understanding of the network-based origins of diseases to introduce a drug-disease proximity measure that quantifies the interplay between drug targets and diseases. By correcting for the known biases of the interactome, proximity helps to uncover the therapeutic effect of drugs, as well as to distinguish palliative from effective treatments.

The recent analysis of 238 drugs used in 78 diseases indicates that the therapeutic effect of drugs is localized in a small network neighborhood of the disease genes, and highlights efficacy issues for drugs used in Parkinson's disease and several inflammatory disorders. Finally, network-based proximity makes it possible to predict novel drug-disease associations that offer unprecedented opportunities for drug repurposing and the detection of adverse effects.
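To make the proximity idea concrete, here is a minimal Python sketch of one simple variant (the average distance from each drug target to its nearest disease gene) on a tiny made-up interaction network. The graph, gene names, and drug are all hypothetical; the published analysis runs on the full human interactome and additionally corrects for network biases, a step omitted here.

```python
from collections import deque

# Made-up toy interactome: proteins and their interaction partners
INTERACTOME = {
    "A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"],
    "D": ["B", "C", "E"], "E": ["D", "F"], "F": ["E"],
}

def shortest_path_len(graph, src, dst):
    """Breadth-first search; returns the hop count from src to dst."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nbr in graph[node]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, dist + 1))
    return float("inf")  # unreachable

def proximity(graph, drug_targets, disease_genes):
    """Average, over the drug's targets, of the distance to the
    nearest disease gene (the 'closest' flavour of proximity)."""
    return sum(
        min(shortest_path_len(graph, t, g) for g in disease_genes)
        for t in drug_targets
    ) / len(drug_targets)

# A hypothetical drug targeting A and B, a disease with genes D and E:
print(proximity(INTERACTOME, ["A", "B"], ["D", "E"]))  # 1.5
```

A low proximity means the drug's targets sit in the same small network neighborhood as the disease genes, which is exactly the localization the study reports for effective treatments.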

Via Dmitry Alexeev
Dmitry Alexeev's curator insight, February 27, 3:38 AM

if you missed that one - gives a good thought on complexity

Babara Lopez's curator insight, March 4, 8:45 PM
drug efficacy screening
Rescooped by Dr. Stefan Gruenwald from Conformable Contacts!

What’s that fossil? An app has answers.


The Digital Atlas of Ancient Life is a free iOS app for iPhone and iPad that allows users to search for photos and information about fossils from three geological periods. It’s a completely packaged app that can be downloaded to a device and doesn’t require cell service for use—which can be handy in rural and remote locations, says Ohio University geologist Alycia Stigall.


Stigall and a team of Ohio University students contributed to the National Science Foundation-funded project by digitizing data on 30,000 specimens found in Ohio, Kentucky, and Indiana from the Ordovician Period, 443-453 million years ago.


Colleagues at San Jose State University and the University of Kansas, which produced the app, provided data from the Pennsylvanian Period (300-323 million years ago) and the Neogene Period (23-2 million years ago). The app features data on about 800 species.


Many fossil specimens collected and described by scientists are housed in natural history museums or in laboratory drawers and are not accessible to the public, Stigall notes. But new software tools and apps now make it possible to digitize that information and put it in the hands of teachers, students, and backyard fossil enthusiasts, as well as the scientific community, she says.


The app is available at

Via YEC Geo
Dr. Stefan Gruenwald's insight:
Great application of digital technology.
Renato P. dos Santos's curator insight, April 3, 7:43 AM
What’s that fossil? A free off-line app has answers.
Scooped by Dr. Stefan Gruenwald!

6 TED Talks on the influence of algorithms

Algorithms play a big part in our day-to-day lives. From search engines to architecture, explore how these formulas affect the way we view and interact with the world around us.


We live in a world run by algorithms, computer programs that make decisions or solve problems for us. In this riveting, funny talk, Kevin Slavin shows how modern algorithms determine stock prices, espionage tactics, even the movies you watch. But, he asks: If we depend on complex algorithms to manage our daily decisions — when do we start to lose control?
No comment yet.
Scooped by Dr. Stefan Gruenwald!

Bringing Big Neural Networks to Self-Driving Cars, Smartphones, and Drones


Artificial intelligence systems based on neural networks have had quite a string of recent successes: One beat human masters at the game of Go, another made up beer reviews, and another made psychedelic art. But taking these supremely complex and power-hungry systems out into the real world and installing them in portable devices is no easy feat. This February, however, at the IEEE International Solid-State Circuits Conference in San Francisco, teams from MIT, Nvidia, and the Korea Advanced Institute of Science and Technology (KAIST) brought that goal closer. They showed off prototypes of low-power chips that are designed to run artificial neural networks that could, among other things, give smartphones a bit of a clue about what they are seeing and allow self-driving cars to predict pedestrians’ movements.


Until now, neural networks—learning systems that operate analogously to networks of connected brain cells—have been much too energy intensive to run on the mobile devices that would most benefit from artificial intelligence, like smartphones, small robots, and drones. The mobile AI chips could also improve the intelligence of self-driving cars without draining their batteries or compromising their fuel economy.


Smartphone processors are on the verge of running some powerful neural networks as software. Qualcomm is sending its next-generation Snapdragon smartphone processor to handset makers with a software-development kit to implement automatic image labeling using a neural network. This software-focused approach is a landmark, but it has its limitations. For one thing, the phone’s application can’t learn anything new by itself—it can only be trained by much more powerful computers. And neural-network experts think that more sophisticated functions will be possible if they can bake neural-net–friendly features into the circuits themselves.

No comment yet.
Scooped by Dr. Stefan Gruenwald!

Atomic "Sandblaster" Could Write and Edit 2D Circuits


The dominance of resist-based lithography in nanoscale fabrication is being slowly eclipsed by the growing emergence of physical-probe methods, such as electron beam induced deposition or focused ion beam milling.


Now researchers at Oak Ridge National Laboratory (ORNL) have tested the capabilities of one of these physical probe methods, known as helium ion microscopy (HIM), to see whether it may be the way forward in fabricating a next generation of two-dimensional electronic devices.

HIM is similar to other focused-ion-beam techniques in that it uses a scanning beam of helium ions to mill and cut samples. What sets HIM apart is its cleanliness. Milling or imaging with helium or neon is preferred to other ion-beam methods, since these two noble gases aren’t reactive and don’t induce any chemical side effects during the fabrication process. Imaging and milling resolution are also hugely important factors. The helium beam can be strongly collimated, offering smaller features—and as a result smaller devices.


In research described in the journal Applied Materials and Interfaces, the ORNL scientists used the HIM technique as a kind of atomic-scale “sandblaster” on bulk copper indium thiophosphate (CITP). CITP is a ferroelectric material, and the HIM beam was used to introduce localized defects that affect its ferroelectric properties.


While this research only worked with bulk CITP (2-D versions will come later), ferroelectricity in CITP is very special because this behavior is completely unexpected in a 2-D material. CITP is a layered van der Waals crystal ferroelectric—part of a family of thiophosphate compounds capable of a huge variety of metal substitutions. Besides ferroelectricity, the thiophosphate family offers a number of useful properties, including semiconductivity, magnetism, anti-ferroelectricity, and piezoresponse.


By introducing localized defects into the CITP, the researchers found a way to manipulate the properties of the material. In particular, they discovered that they could control the distribution of ferroelectric domains in the material as well as enhance its conductivity.


The main point of the research was to look at which properties can be particularly appealing in novel 2-D materials and see how they can be incorporated into the next generation of devices, according to Oak Ridge staff scientist Alex Belianinov.

No comment yet.
Scooped by Dr. Stefan Gruenwald!

Bristol and Lund set a new world record in 5G wireless spectrum efficiency


New research by engineers from the Universities of Bristol and Lund, working alongside National Instruments (NI), has demonstrated how a massive antenna system can offer a 12-fold increase in spectrum efficiency compared with current 4G cellular technology.


Multiple antenna technology, referred to as MIMO, is already used in many Wi-Fi routers and 4G cellular phone systems. Normally this involves up to four antennas at a base station. Using a flexible prototyping platform from NI based on LabVIEW system design software and PXI hardware, the Bristol configuration implements Massive MIMO, where 128 antennas are deployed at the base station.


The hardware behind this demonstration was provided to Bristol University as part of the Bristol Is Open programmable city infrastructure. Lund University has a similar setup, the LuMaMi testbed, enabling researchers at both sites to work in parallel with their development.


Bristol's Massive MIMO system used for the demo operates at a carrier frequency of 3.5 GHz and supports simultaneous wireless connectivity to up to 12 single-antenna clients. Each client shares a common 20 MHz radio channel. Complex digital signal processing algorithms unravel the individual data streams in the space domain seen by the antenna array.


The Massive MIMO demonstration was conducted in the atrium of Bristol's Merchant Venturers Building and achieved an unprecedented bandwidth efficiency of 79.4 bit/s/Hz. This equates to a sum-rate throughput of 1.59 Gbit/s in a 20 MHz channel.
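As a quick arithmetic check, the quoted sum rate follows directly from the spectral efficiency multiplied by the channel bandwidth. The even per-client split below is an illustrative assumption, since the article does not say how the rate divides among the 12 clients.

```python
spectral_efficiency = 79.4  # bit/s/Hz, as reported in the demo
bandwidth_hz = 20e6         # the shared 20 MHz radio channel
num_clients = 12            # single-antenna clients served at once

# Sum-rate throughput = spectral efficiency x bandwidth
sum_rate_gbps = spectral_efficiency * bandwidth_hz / 1e9
print(round(sum_rate_gbps, 2))  # 1.59, matching the article's Gbit/s figure

# Hypothetical even split among clients (real scheduling will differ):
print(round(sum_rate_gbps / num_clients * 1000))  # 132 Mbit/s per client
```

For comparison, conventional 4G MIMO with four base-station antennas delivers roughly one twelfth of this spectral efficiency, which is the 12-fold gain the headline refers to.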


Professor Andrew Nix, Head of the CSN Group and Dean of Engineering, said: "This activity reinforces our well established propagation and system modelling work by offering a new capability in model validation for Massive MIMO architectures. This is a truly exciting time for our PhD students and opens up further opportunities for collaborative research with our national and international partners."


Ove Edfors, Professor of Radio Systems at Lund University says: "We see massive MIMO as the most promising 5G technology and we have pushed it forward together with partners in Bristol and in our EU project MAMMOET. It is a pleasure seeing those efforts materialize."


Mark Beach, Professor of Radio Systems Engineering in the Department of Electrical & Electronic Engineering and Manager of the EPSRC Centre for Doctoral Training (CDT) in Communications, added: "Massive MIMO is one of four core activities in '5G and beyond' wireless research at Bristol. This demonstration was made possible by the cohort training offered within our CDT in Communications. The CDT gives Bristol a unique edge to conduct activities at scale."


Paul Wilson, Managing Director Bristol Is Open, remarked: "This is truly outstanding work putting Bristol at the forefront of 5G wireless connectivity. We are looking forward to moving this facility outdoors in late 2016 as part of the BIO Harbourside deployment."

No comment yet.
Scooped by Dr. Stefan Gruenwald!

Neurons on a chip let drones smell bombs over a kilometer away


The defeat of Go champion Lee Se-dol by Google’s AlphaGo last week caused an outpouring of emotion more appropriate to an opera performance than a board game competition. One magazine breathlessly described “the sadness and beauty” of watching an AI outmaneuver its human opponent. Unfortunately, this kind of journalism misconstrues the situation.


Neurons remain the most powerful computational machinery on the face of the planet. More to the point, nobody throws up their hands in despair when a screwdriver removes a flathead screw better than their fingernail can, and yet the parallel is an apt one. The circuitry of the human brain has not been honed by evolution to be especially good at playing the game of Go, any more than evolution has fine-tuned our fingernails for removing screws.


Which is not to say there is no room for surprise in today’s world of rapidly advancing technological achievement. What is more impressive, however, is when computers exhibit greater skill than humans at tasks evolution has been perfecting for millions of years like exercising a sense of smell. And yet such advancements are taking place right beneath our noses, metaphorically speaking.


Recently a UK startup called Koniku released details of a drone that uses neurons embedded in a computer architecture to achieve the sense of smell exhibited by a bee. With only 64 neurons, the chip achieves a sense of smell capable of detecting explosives over one kilometer away. The accompanying video bears testimony to this amazing achievement, as the drone in question homes in on its target with almost bee-like movement. In fact, the only thing missing is an incessant buzzing noise.


The use of neurons to build computer processors has the potential to be revolutionary. Anyone familiar with Moore’s Law knows we are quickly running up against the boundaries of processing power given today’s silicon architecture. While a number of scientific breakthroughs suggest ways of sidestepping this upper limit, few hold the promise that biological computing does. Neurons are immensely efficient compared with other kinds of computing architectures, including the much-touted quantum computers, which require specialized cooling units that eat up electricity by the kilowatt. In comparison, the human brain runs on a mere 20 watts of electricity.


How the Koniku team managed to meld neurons with electronic circuitry is where the science really gets interesting. One of the first hurdles encountered was finding a way to keep the neurons alive in the decidedly non-biological environment found on a circuit board. To do so, the Koniku scientists created tiny shells that encapsulate each neuron and control the temperature, humidity and supply of nutrients. These shells can also regulate how the neurons interact with each other, which makes it possible to take advantage of the neurons’ unique capacity for massively parallel processing.


The more important question is how easy it will be to scale this architecture. The explosive-sensing chip used on the drones requires a mere 64 neurons, while Osh Agabi of the Koniku team estimates that 500 neurons would be required to do the computations necessary for a self-driving car. This is a mere pittance when one compares it with the 100,000 neurons found in a piece of human brain the size of a grain of sand. Clearly the Koniku chip has a long way to go before it equals our own processing power. But the journey of a thousand thoughts begins with a single neuron.

No comment yet.
Scooped by Dr. Stefan Gruenwald!

Face-tracking software lets you make anyone say anything in real time

You know how they say, "Show me pictures or video, or it didn't happen"? Well, the days when you could trust what you see on video in real time are officially coming to an end thanks to a new kind of face tracking.


A team from Stanford, the Max Planck Institute for Informatics and the University of Erlangen-Nuremberg has produced a video demonstrating how its software, called Face2Face, in combination with a common webcam, can make any person on video appear to say anything a source actor wants them to say.


In addition to perfectly capturing the real-time talking motions of the actor and placing them seamlessly on the video subject, the software also accounts for real-time facial expressions, including distinct movements such as eyebrow raises.


To show off the system, the team used YouTube videos of U.S. President George W. Bush, Russian President Vladimir Putin and Republican presidential candidate Donald Trump. In each case, the facial masking is flawless, effectively turning the video subject into the actor's puppet.


It might be fun to mix this up with something like "Say it with Trump," but for now the software is still in the research phase. "Unfortunately, the software is currently not publicly available — it's just a research project," team member Matthias Niessner told Mashable. "However, we are thinking about commercializing it given that we are getting so many requests." We knew this kind of stuff was possible in the special effects editing room, but the ability to do it in real time — without those nagging "uncanny valley" artifacts — could change how we interpret video documentation forever.

Matt Archer's curator insight, March 24, 4:24 PM

What possible reason is there for this technology outside of supporting terrorism..?  Crazy.

Scooped by Dr. Stefan Gruenwald!

Breakthrough for cheaper lighting and flexible solar cells


After more than three years of work, European scientists have finally made a future lighting technology ready for market. They developed flexible lighting foils that can be produced roll-to-roll -- much like newspapers are printed. These devices pave the path towards cheaper solar cells and LED lighting panels. The project, named TREASORES, was led by Empa scientist Frank Nüesch and combined know-how from nine companies and six research institutes in five European countries.


In November 2012, the TREASORES project (Transparent Electrodes for Large Area Large Scale Production of Organic Optoelectronic Devices) started with the aim of developing technologies to dramatically reduce the production costs of organic electronic devices such as solar cells and LED lighting panels. Funded with 9 million Euros from the European Commission and an additional 6 million Euros from the project partners, the project has since produced seven patent applications and a dozen peer-reviewed publications, and provided inputs to international standards organisations.


Most importantly, the project has developed and scaled up production processes for several new transparent electrode and barrier materials for use in the next generation of flexible optoelectronics. Three of these electrodes on flexible substrates, which use either carbon nanotubes, metal fibres or thin silver, are either already being produced commercially or are expected to be so as of this year. The new electrodes have been tested with several types of optoelectronic devices using rolls of over 100 meters in length, and found to be especially suitable for next-generation light sources and solar cells.


The roll of OLED light sources with the project logo was made using roll-to-roll techniques at Fraunhofer Institute for Organic Electronics, Electron Beam and Plasma Technology (Fraunhofer FEP) on a thin silver electrode developed within the project by Rowo Coating GmbH. Such processing techniques promise to make light sources and solar cells much cheaper in future, but require flexible and transparent electrodes and water impermeable barriers -- which have also been developed by the TREASORES project. The electrodes from the project are technically at least as good as those currently used (made from indium tin oxide, ITO) but will be cheaper to manufacture and do not rely on the import of indium.


Tomasz Wanski from the Fraunhofer FEP said that because of the new electrodes, the OLED light source was very homogeneous over a large area, achieving an efficiency of 25 lumens per watt -- as good as the much slower sheet to sheet production process for equivalent devices. In the course of the project, new test methods were developed by the National Physical Laboratory in the UK to make sure that the electrodes would still work after being repeatedly bent -- a test that may become a standard in the field.


A further outcome of the project has been the development, testing and production scale-up of new approaches to transparent barrier foils (plastic layers that prevent oxygen and water vapour from reaching the sensitive organic electronic devices). High performance low-cost barriers were produced and it is expected that the Swiss company Amcor Flexibles Kreuzlingen will adopt this technology after further development. Such high performance barriers are essential to achieve the long device lifetimes that are necessary for commercial success -- as confirmed by a life cycle analysis (LCA) completed during the project, solar cells are only economically or ecologically worthwhile if both their efficiency and lifetime are high enough. By combining the production of barriers with electrodes (instead of using two separate plastic substrates), the project has shown that production costs can be further reduced and devices made thinner and more flexible.


The main challenge the project had to face was to make the barrier and electrode foils extremely flat, smooth and clean. Optoelectronic devices have active layers of only a few hundred nanometres (less than one percent of the width of a human hair), and even small surface irregularities or invisibly tiny dust particles can ruin the device yield or lead to uneven illumination and short lifetimes.


The TREASORES project united nine companies with six research institutes from five countries and was led by Frank Nüesch from the Swiss Federal Laboratories for Materials Science and Technology (Empa). "I am very much looking forward to seeing the first commercial products made using materials from the project in 2016," says Nüesch.


No comment yet.
Scooped by Dr. Stefan Gruenwald!

Tunable windows for privacy, camouflage


Researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences have developed a technique that can quickly change the opacity of a window, turning it cloudy, clear or somewhere in between with the flick of a switch.


Tunable windows aren’t new, but most previous technologies have relied on electrochemical reactions achieved through expensive manufacturing. This technology, developed by David Clarke, the Extended Tarr Family Professor of Materials, and postdoctoral fellow Samuel Shian, uses geometry to adjust the transparency of a window.


The research is described in the journal Optics Letters. The tunable window comprises a sheet of glass or plastic sandwiched between transparent, soft elastomers sprayed with a coating of silver nanowires too small to scatter light on their own.


But apply an electric voltage and things change quickly. With an applied voltage, the nanowires on either side of the glass are energized to move toward each other, squeezing and deforming the soft elastomer. Because the nanowires are distributed unevenly across the surface, the elastomer deforms unevenly. The resulting uneven roughness causes light to scatter, turning the glass opaque. The change happens in less than a second.

“It’s like a frozen pond,” said Shian.


“If the frozen pond is smooth, you can see through the ice. But if the ice is heavily scratched, you can’t see through,” said Shian. 


Clarke and Shian found that the roughness of the elastomer surface depends on the voltage, so if you wanted a window that is only lightly clouded, you would apply less voltage than if you wanted a totally opaque one.

No comment yet.
Scooped by Dr. Stefan Gruenwald!

Stretchable electronics with liquid metal in them quadruple in length


EPFL researchers have developed conductive tracks that can be bent and stretched up to four times their original length. They could be used in artificial skin, connected clothing and on-body sensors.


Conductive tracks are usually hard printed on a board. But those recently developed at EPFL are altogether different: they are almost as flexible as rubber and can be stretched up to four times their original length and in all directions. And they can be stretched a million times without cracking or interrupting their conductivity. The invention is described in an article published today in the journal Advanced Materials.


Both solid and flexible, this new metallic and partially liquid film offers a wide range of possible applications. It could be used to make circuits that can be twisted and stretched – ideal for artificial skin on prosthetics or robotic machines. It could also be integrated into fabric and used in connected clothing. And because it follows the shape and movements of the human body, it could be used for sensors designed to monitor particular biological functions.


“We can come up with all sorts of uses, in forms that are complex, moving or that change over time,” said Hadrien Michaud, a PhD student at the Laboratory for Soft Bioelectronic Interfaces (LSBI) and one of the study authors.

Extensive research has gone into developing elastic electronic circuits. It is a real challenge, as the components traditionally used to make circuits are rigid. Depositing liquid metal as a thin film on polymer supports with elastic properties naturally seems like a promising approach.


Owing to the high surface tension of some of these liquid metals, experiments conducted so far have only produced relatively thick structures. “Using the deposition and structuring methods that we developed, it’s possible to make tracks that are very narrow – several hundredths of a nanometer thick – and very reliable,” said Stéphanie Lacour, holder of the Bertarelli Foundation Chair in Neuroprosthetic Technology, who runs the lab.


Apart from their unique fabrication technique, the researchers’ secret lies in the choice of ingredients: an alloy of gold and gallium. “Not only does gallium possess good electrical properties, but it also has a low melting point, around 30°C,” said Arthur Hirsch, a PhD student at LSBI and co-author of the study. “So it melts in your hand and, thanks to the process known as supercooling, it remains liquid at room temperature or even lower.” The layer of gold ensures that the gallium remains homogeneous, preventing it from separating into droplets when it comes into contact with the polymer, which would ruin its conductivity.


No comment yet.
Scooped by Dr. Stefan Gruenwald!

100 million degrees essential for successful nuclear fusion


Some scientists believe fusion power -- the energy that powers the stars -- is the future of sustainable energy. Despite periodic breakthroughs, physicists have struggled to replicate the reaction in the lab. New research suggests scientists may have cleared another hurdle en route to synthesizing nuclear fusion.

The key, researchers say, is super hot fluid.


During fusion experiments, researchers have been frustrated by failing million-degree heating beams destabilizing their fusion attempts before any energy is generated. A team of scientists at the Australian National University believes it has solved the problem using fluid dynamics.


"There was a strange wave mode which bounced the heating beams out of the experiment," researcher Zhisong Qu said in a news release. "This new way of looking at burning plasma physics allowed us to understand this previously impenetrable problem."


Qu is a theoretical physicist at the ANU Research School of Physics and Engineering and lead author of a new paper on fusion in the journal Physical Review Letters.


Earthbound scientists have been attempting to replicate stellar fusion using a strategy called magnetic confinement fusion, in which hydrogen is coaxed into plasma form and heated to temperatures ten times those found inside the center of the sun.

The problem is these super-heated beams of plasma sometimes behave in unexpected ways.


Qu and his colleagues have developed a model that simplifies how scientists explain and predict the behavior of the super-hot hydrogen plasma. The model makes sense of an unstable wave mode observed during the United States' largest fusion experiment, known as DIII-D.


The key to the model is that it attempts to explain the plasma's behavior by treating it as a liquid, instead of a collection of individual atoms. "When we looked at the plasma as a fluid we got the same answer, but everything made perfect sense," said Michael Fitzgerald, Qu's research partner and a physicist at the Culham Centre for Fusion Energy in England. "We could start using our intuition again in explaining what we saw, which is very powerful."

No comment yet.
Scooped by Dr. Stefan Gruenwald!

Optics Supervision: Can we one day do whole body imaging with regular light?


It seemed too good to be true, says Allard Mosk. It was 2007, and he was working with Ivo Vellekoop, a student in his group at the University of Twente in Enschede, the Netherlands, to shine a beam of visible light through a 'solid wall' — a glass slide covered with white paint — and then focus it on the other side. They did not have a particular application in mind. “I really just wanted to try this because it had never been done before,” Mosk says. And in truth, the two researchers did not expect to pick up much more than a faint blur.


But as it turned out, their very first attempt [1] produced a sharp pinprick of light a hundred times brighter than they had hoped for. “This just doesn't happen on the first day of your experiment,” exclaims Mosk. “We thought we'd made a mistake and there must be a hole in our slide letting the light through!”


But there was no hole. Instead, their experiment became the first of two independent studies [1, 2] that were carried out that year pioneering ways to see through opaque barriers. So far it is still a laboratory exercise. But progress has been rapid.


Researchers have now managed to obtain good-quality images through thin tissues such as mouse ears [3], and are working on ways to go deeper. And if they can meet the still-daunting challenges, such as dealing with tissues that move or stretch, potential applications abound. Visible-light images obtained from deep within the body might eliminate the need for intrusive biopsies, for example. Or laser light could be focused to treat aneurysms in the brain or target inoperable tumours without the need for surgery.


“Just ten years ago, we couldn't imagine high-resolution imaging down to even 1 centimetre in the body with optical light, but now that has become a reality,” says Lihong Wang, a biomedical engineer at Washington University in St. Louis, Missouri. “Call me crazy, but I believe that we will eventually be doing whole-body imaging with optical light.”

No comment yet.