Amazing Science
Amazing science facts - 3D_printing • aging • AI • anthropology • art • astronomy • bigdata • bioinformatics • biology • biotech • chemistry • computers • cosmology • education • environment • evolution • future • genetics • genomics • geosciences • green_energy • history • language • map • material_science • math • med • medicine • microscopy • nanotech • neuroscience • paleontology • photography • photonics • physics • postings • robotics • science • technology • video

Nobel Prize 2014 in Chemistry given for circumventing a basic law of physics, pushing the limits of microscopes


Three scientists, two American and one German, received this year’s Nobel Prize in Chemistry for circumventing a basic law of physics and enabling microscopes to peer at the tiniest structures within living cells.

The 2014 laureates, announced Wednesday by the Royal Swedish Academy of Sciences, are Eric Betzig, 54, of the Howard Hughes Medical Institute in Virginia; Stefan W. Hell, 51, of the Max Planck Institute for Biophysical Chemistry in Germany; and William E. Moerner, 61, of Stanford University in California.


For centuries, optical microscopes — those that magnify ordinary visible light — have allowed biologists to study organisms too small to be seen with the naked eye. But a fundamental law of optics known as the diffraction limit, first described in 1873, states that the resolution can never be better than half the wavelength of light being looked at.


For visible light, that limit is about 0.2 millionths of a meter, or one-127,000th of an inch. A human hair is 500 times as wide. But a bacterium is not much larger than the size of the diffraction limit, and there was little hope of seeing details within the cell like the interaction of individual proteins. Other technology like the electron microscope, which generates images from beams of electrons instead of particles of light, achieves higher resolution, but it has other limitations, like requiring the sample to be sliced thin and placed in a vacuum. For biological research, that generally meant the subject of study had to be dead.
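For reference, the figure quoted above follows from the Abbe form of the diffraction limit, a standard relation not spelled out in the text: the smallest resolvable separation is roughly

d ≈ λ / (2 · NA)

where λ is the wavelength of the light and NA is the numerical aperture of the objective. With NA close to 1 and violet light of λ ≈ 400 nm, d comes out at about 200 nm, i.e. the 0.2 millionths of a meter cited above.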


At first glance, circumventing the diffraction limit would seem a foolish pursuit, like trying to invent a perpetual motion machine or faster-than-light travel — doomed by fundamental limits on how the universe works. Nonetheless, Dr. Hell, who was born in Romania, started working on the problem after finishing his doctorate at the University of Heidelberg in 1990. After failing to find financing in Germany to pursue his ideas, he obtained a research position at the University of Turku in Finland in 1993. A year later, he published his theoretical proposal for achieving sharper microscopic pictures.


Dr. Hell’s insight was that by using lasers, he could restrict the glow to a very small section. That way, for structures smaller than the diffraction limit, “You can tell them apart just by making sure that one of them is off when the other is on,” he said in an interview.


Other scientists could have just taken his proposal and made it work in the laboratory long before he did, he said, adding: “I was a sort of nobody in those days. I didn’t even have a lab, really. People could have taken it as a recipe, could have done it. But they didn’t do it. Why didn’t they do it? Because they thought it wouldn’t work that way.”


Creating low-cost solar energy on bendable plastic films

Work by PhD student Alex Barker, under the supervision of Dr Justin Hodgkiss, a senior lecturer in the School of Chemical and Physical Sciences, is helping to improve the efficiency of next generation solar cells made from materials like plastics.


The research, published recently in the prestigious Journal of the American Chemical Society, addresses the long-standing question of how light produces charge pairs far enough apart from each other that they are free to flow as current, rather than staying bound together and ultimately just releasing heat.


The technique used by the researchers was to freeze the solar cells to -263 degrees Celsius, where charge pairs get stuck together. They then used lasers to measure how far apart the charges moved as the temperature increased.


"We found that the efficiency of polymer, or plastic-based, solar cell is determined by the ability of charge pairs to rapidly escape from each other while they are still 'hot' from the light energy," says Dr Hodgkiss, a 2011 Rutherford Discovery Fellow.


He adds that understanding how plastic solar cells work will result in more efficient and cheaper conductive materials that overcome the limitations of conventional solar cells.


"Because they're plastic and flexible, they could be rolled out to cover a tent or used as semi-transparent filters on windows."


The findings of the research settle a long-standing debate about how polymer solar cells work and, by isolating the key step in charge generation, offer a guide to the design of cheaper and more efficient materials.

Ms. Moon's curator insight, October 9, 2014 10:29 PM

Materials Science is a fascinating subject. Here someone thought outside conventional wisdom and created something new and better. That's what "innovation" is all about.


Invention of blue LEDs wins physics Nobel Prize of 2014


The 2014 Nobel Prize for physics has been awarded to a trio of scientists in Japan and the US for the invention of blue light emitting diodes (LEDs). Professors Isamu Akasaki, Hiroshi Amano and Shuji Nakamura made the first blue LEDs in the early 1990s. This enabled a new generation of bright, energy-efficient white lamps, as well as colour LED screens.


The winners will share prize money of eight million Swedish kronor. They were named at a press conference in Sweden and join a prestigious list of 196 other physics laureates recognized since 1901.


Prof Nakamura, who was woken up in Japan to receive the news, told the press conference, "It's unbelievable." The committee chair, Prof Per Delsing, from Chalmers University of Technology in Gothenburg, emphasized the winners' dedication. "What's fascinating is that a lot of big companies really tried to do this and they failed," he said. "But these guys persisted and they tried and tried again - and eventually they actually succeeded."


Although red and green LEDs had been around for many years, blue LEDs were a long-standing challenge for scientists in both academia and industry. Without them, the three colours could not be mixed to produce the white light we now see in LED-based computer and TV screens. Furthermore, the high-energy blue light could be used to excite a phosphor and directly produce white light - the basis of the next generation of light bulbs. Today, blue LEDs are found in people's pockets around the world, inside the lights and screens of smartphones. White LED lamps, meanwhile, deliver light to many offices and households. They use much less energy than both incandescent and fluorescent lamps.



Inside an LED, current is applied to a sandwich of semiconductor materials, which emit a particular wavelength of light depending on the chemical make-up of those materials. Gallium nitride was the key ingredient used by the Nobel laureates in their ground-breaking blue LEDs. Growing big enough crystals of this compound was the stumbling block that stopped many other researchers - but Profs Akasaki and Amano, working at Nagoya University in Japan, managed to grow them in 1986 on a specially-designed scaffold made partly from sapphire.


Original Announcement: http://tinyurl.com/omdtodt


High Efficiency Achieved for Harvesting Hydrogen Fuel From the Sun using Earth-Abundant Materials


Today, the journal Science published the latest development in Michael Grätzel’s laboratory at EPFL: producing hydrogen fuel from sunlight and water. By combining a pair of solar cells made with a mineral called perovskite and low cost electrodes, scientists have obtained a 12.3 percent conversion efficiency from solar energy to hydrogen, a record using earth-abundant materials as opposed to rare metals.

The race is on to optimize solar energy’s performance. More efficient silicon photovoltaic panels, dye-sensitized solar cells, concentrated cells and thermodynamic solar plants all pursue the same goal: to produce a maximum amount of electrons from sunlight. Those electrons can then be converted into electricity to turn on lights and power your refrigerator.

At the Laboratory of Photonics and Interfaces at EPFL, led by Michael Grätzel, where scientists invented dye solar cells that mimic photosynthesis in plants, they have also developed methods for generating fuels such as hydrogen through solar water splitting. To do this, they either use photoelectrochemical cells that directly split water into hydrogen and oxygen when exposed to sunlight, or they combine electricity-generating cells with an electrolyzer that separates the water molecules.

By using the latter technique, Grätzel’s post-doctoral student Jingshan Luo and his colleagues were able to obtain a performance so spectacular that their achievement is being published today in the journal Science. Their device converts into hydrogen 12.3 percent of the solar energy falling on its perovskite absorbers – a compound that can be made in the laboratory from common materials, such as those used in conventional car batteries, eliminating the need for rare metals in the production of usable hydrogen fuel.
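For context, solar-to-hydrogen (STH) efficiency is conventionally calculated from the operating current of the electrolyzer; the relation below is the standard one used in the field, not a formula quoted from the paper:

STH = (1.23 V × j_op × η_F) / P_in

where 1.23 V is the thermodynamic potential for water splitting, j_op the operating current density, η_F the Faradaic efficiency and P_in the incident solar power (about 100 mW/cm² under standard one-sun illumination). Assuming unit Faradaic efficiency, the reported 12.3 percent corresponds to an operating current density of roughly 10 mA/cm².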


Bring thermal vision to your phone with this camera add-on


Smartphone peripherals can make your mobile devices even more powerful than they already are, and a new add-on, dubbed Seek Thermal, aims to do just that by bringing extra imaging features to your handset. The tiny gadget can be attached to an iPhone or Android smartphone (via the Lightning port or micro-USB, respectively) and, thanks to a companion app, turns that otherwise common device into one with a thermal camera. Seek Thermal notes it wants to help users across different scenarios, such as seeing what's around them at night or spotting clogged pipes around the house, to mention just a couple. If you're interested, be ready to pay a premium -- both the iPhone and Android models are priced at $199.


New paint-on, clear bandage not only protects wounds and burns but enables direct measurement of tissue oxygen


Inspired by a desire to help wounded soldiers, an international, multidisciplinary team of researchers led by Assistant Professor Conor L. Evans at the Wellman Center for Photomedicine of Massachusetts General Hospital (MGH) and Harvard Medical School (HMS) has created a paint-on, see-through, “smart” bandage that glows to indicate a wound’s tissue oxygen concentration. Because oxygen plays a critical role in healing, mapping these levels in severe wounds and burns can help to significantly improve the success of surgeries to restore limbs and physical functions. The work was published today in The Optical Society’s (OSA) open-access journal Biomedical Optics Express.


“Information about tissue oxygenation is clinically relevant but is often inaccessible due to a lack of accurate or noninvasive measurements,” explained lead author Zongxi Li, an HMS research fellow on Evans' team.
 
Now, the “smart” bandage developed by the team provides direct, noninvasive measurement of tissue oxygenation by combining three simple, compact and inexpensive components: a bright sensor molecule with a long phosphorescence lifetime and appropriate dynamic range; a bandage material compatible with the sensor molecule that conforms to the skin’s surface to form an airtight seal; and an imaging device capable of capturing the oxygen-dependent signals from the bandage with high signal-to-noise ratio.
 
This work is part of the team’s long-term program “to develop a Sensing, Monitoring And Release of Therapeutics (SMART) bandage for improved care of patients with acute or chronic wounds,” says Evans,  senior author on the Biomedical Optics Express paper.


The bandage is applied by “painting” it onto the skin’s surface as a viscous liquid, which dries to a solid thin film within a minute. Once the first layer has dried, a transparent barrier layer is then applied atop it to protect the film and slow the rate of oxygen exchange between the bandage and room air—making the bandage sensitive to the oxygen within tissue.
 
The final piece involves a camera-based readout device, which performs two functions: it provides a burst of excitation light that triggers the emission of the phosphors inside the bandage, and then it records the phosphors’ emission. “Depending on the camera’s configuration, we can measure either the brightness or color of the emitted light across the bandage or the change in brightness over time,” Li said. “Both of these signals can be used to create an oxygenation map.”  The emitted light from the bandage is bright enough that it can be acquired using a regular camera or smartphone—opening the possibility to a portable, field-ready device.
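Phosphorescence-based oximetry of this kind generally rests on the Stern–Volmer relation, which links the measured signal to the local oxygen level; the textbook form is given here for orientation and is not taken from the paper itself:

I₀ / I = τ₀ / τ = 1 + k_q · τ₀ · pO₂

where I and τ are the phosphorescence intensity and lifetime at oxygen partial pressure pO₂, I₀ and τ₀ are their values in the absence of oxygen, and k_q is the quenching constant. The more oxygen present, the dimmer and shorter-lived the glow, which is why either the brightness or its change over time can be converted into an oxygenation map.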


Physicists design record-breaking laser that accelerates the interaction between light and matter by ten times


Reporting in the journal Nature Physics, physicists from Imperial College London and the Friedrich-Schiller-Universität Jena, in Germany, used semiconductor nanowires made of zinc oxide and placed them on a silver surface to create ultra-fast lasers. 


By using silver rather than a conventional glass surface, the scientists were able to shrink their nanowire lasers down to just 120 nanometres in diameter - around a thousandth the diameter of a human hair.


The physicists were able to shrink the laser by using surface plasmons, which are wave-like motions of excited electrons found at the surface of metals. When light binds to these oscillations it can be focused much more tightly than usual. 


By using surface plasmons they were able to squeeze the light into a much smaller space inside the laser, which allowed the light to interact much more strongly with the zinc oxide. 


This stronger interaction accelerated the rate at which the laser could be turned on and off to ten times that of a nanowire laser using a glass surface. These are the fastest lasers recorded to date, in terms of the speed at which they can turn on and off.


Senior author Dr Rupert Oulton from the Department of Physics at Imperial College London said: “This work is so exciting because we are engineering the interaction of light and matter to drive light generation in materials much faster than it occurs naturally. When we first started working on this, I would have been happy to speed up switching speeds to a picosecond, which is one trillionth of a second. But we’ve managed to go even faster, to the point where the properties of the material itself set a speed limit.” 


PhD student Robert Röder, from Friedrich-Schiller-Universität Jena, said: “This is not only a ‘world record’ regarding the switching speed. Most likely we also achieved the maximum possible speed at which such a semiconductor laser can be operated.”


Superabsorbing ring could make light work of snaps, be the ultimate camera pixel

A quantum effect in which excited atoms team up to emit an enhanced pulse of light can be turned on its head to create 'superabsorbing' systems that could make the 'ultimate camera pixel'.


'Superradiance', a phenomenon where a group of atoms charged up with energy act collectively to release a far more intense pulse of light than they would individually, is well-known to physicists. In theory the effect can be reversed to create a device that draws in light ultra-efficiently. This could be revolutionary for devices ranging from digital cameras to solar cells. But there's a problem: the advantage of this quantum effect is strongest when the atoms are already 50% charged -- and then the system would rather release its energy back as light than absorb more.


Now a team led by Oxford University theorists believes it has found the solution to this seemingly fundamental problem. Part of the answer came from biology. 'I was inspired to study ring molecules, because they are what plants use in photosynthesis to extract energy from the Sun,' said Kieran Higgins of Oxford University's Department of Materials, who led the work. 'What we then discovered is that we should be able to go beyond nature's achievement and create a 'quantum superabsorber'.'


A report of the research is published in Nature Communications.

At the core of the new design is a molecular ring, which is charged to 50% by a laser pulse in order to reach the ideal superabsorbing state. 'Now we need to keep it in that condition,' notes Kieran. For this the team propose exploiting a key property of the ring structure: each time it absorbs a photon, it becomes receptive to photons of a slightly higher energy. Charging the device is like climbing a ladder whose rungs are increasingly widely spaced.


'Let's say it starts by absorbing red light from the laser,' said Kieran, 'once it is charged to 50% it now has an appetite for yellow photons, which are higher energy. And we'd like it to absorb new yellow photons, but NOT to emit the stored red photons.' This can be achieved by embedding the device into a special crystal that suppresses red light: it makes it harder for the ring to release its existing energy, so trapping it in the 50% charged state.


The final ingredient of the design is a molecular 'wire' that draws off the energy of newly absorbed photons. 'If you built a system with a capacity of 100 energy units the idea would be to 'half-charge' it to 50 units, and the wire would then 'harvest' every unit over 50,' said Kieran. 'It's like an overflow pipe in plumbing -- it is engineered to take the energy level down to 50, but no lower.' This means that the device can handle the absorption of many photons in quick succession when it is exposed to a bright source, but in the dark it will simply sit in the superabsorbing state and efficiently grab any rare passing photon.


'Molecular movies' will enable extraordinary gains in bioimaging, health research


Researchers today announced the creation of an imaging technology more powerful than anything that has existed before, one fast enough to observe life processes as they actually happen at the molecular level. Fluorescent protein biosensors provide a technology to capture the biochemical process of life almost like a motion picture that could be viewed a frame at a time. This may allow the targeted design of next-generation biosensors to track life processes and battle diseases.

Chemical and biological actions can now be measured as they are occurring or, in old-fashioned movie parlance, one frame at a time. This will allow creation of improved biosensors to study everything from nerve impulses to cancer metastasis as it occurs.


The measurements, created by the use of short pulse lasers and bioluminescent proteins, are made in femtoseconds, which is one millionth of one billionth of a second. A femtosecond, compared to one second, is about the same as one second compared to 32 million years. That's a pretty fast shutter speed, and it should change the way biological research and physical chemistry are being done, scientists say.
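The comparison checks out: one second contains 10^15 femtoseconds, and 32 million years is roughly 32 × 10^6 × 3.15 × 10^7 s ≈ 1.0 × 10^15 seconds.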


Findings on the new technology were published today in Proceedings of the National Academy of Sciences, by researchers from Oregon State University and the University of Alberta.


"With this technology we're going to be able to slow down the observation of living processes and understand the exact sequences of biochemical reactions," said Chong Fang, an assistant professor of chemistry in the OSU College of Science, and lead author on the research.


"We believe this is the first time ever that you can really see chemistry in action inside a biosensor," he said. "This is a much more powerful tool to study, understand and tune biological processes."


The system uses advanced pulse laser technology that is, in itself, fairly new, and builds upon the use of "green fluorescent proteins" that are extremely popular in bioimaging and biomedicine. These remarkable proteins glow when light is shined upon them. Their discovery in 1962 and the applications that followed were the basis for a Nobel Prize in 2008.


Existing biosensor systems, however, are created largely by random chance or trial and error. By comparison, the speed of the new approach will allow scientists to "see" what is happening at the molecular level and create whatever kind of sensor they want by rational design. This will improve the study of everything from cell metabolism to nerve impulses, how a flu virus infects a person, or how a malignant tumor spreads.


"For decades, to create the sensors we have now, people have been largely shooting in the dark," Fang said. "This is a fundamental breakthrough in how to create biosensors for medical research from the bottom up. It's like daylight has finally come."


New LED-based lab-on-a-chip device screens for 170,000 different molecules in blood


Ecole polytechnique fédérale de Lausanne (EPFL; Lausanne, Switzerland) researchers have developed a new light-emitting diode (LED)-based handheld device that is able to test a large number of proteins in our body all at once. Professor Hatice Altug and postdoctoral fellow Arif Cetin from EPFL, in collaboration with professor Aydogan Ozcan from UCLA (Los Angeles, CA), developed the compact and inexpensive "optical lab on a chip" to quickly analyze up to 170,000 different molecules in a blood sample--simultaneously identifying insulin levels, cancer and Alzheimer markers, or even certain viruses.


Instead of analyzing the biosample by looking at the spectral properties of the sensing platforms as has traditionally been the case, this new technique uses changes in the intensity of the light to do on-chip imaging, eliminating sometimes clunky spectrometers in the process.


Only 7.5 cm high and weighing 60 g, the device is able to detect viruses and single-layer proteins down to 3 nanometers thick. Detailed in a publication in the journal Light: Science & Applications, the recipe is simple and contains few ingredients: an off-the-shelf CMOS chip, an LED, and a 10 square millimeter gold plate pierced with arrays of extremely small holes less than 200 nm wide.


Nanoholes on the gold substrate are divided into arrays of different sections, where each section functions as an independent sensor. The sensors are coated with special biofilms that specifically attract the targeted proteins. Consequently, multiple different proteins in the biosample can be captured at different places on the platform and monitored simultaneously. The LED light shines on the platform, passes through the nanoscale openings, and its properties are recorded by the CMOS chip. Since light going through the nanoscale holes changes its properties depending on the presence of biomolecules, it is possible to easily deduce the number of particles trapped on the sensors.


Intel putting 3D scanners in consumer tablets next year, phones to follow


Intel has been working on a 3D scanner small enough to fit in the bezel of even the thinnest tablets. The company aims to have the technology in tablets from 2015, with CEO Brian Krzanich telling the crowd at MakerCon in New York on Thursday that he hopes to put the technology in phones as well.


"Our goal is to just have a tablet that you can go out and buy that has this capability," Krzanich said. "Eventually within two or three years I want to be able to put it on a phone."


Krzanich and a few of his colleagues demonstrated the technology, which goes by the name "RealSense," on stage using a human model and an assistant who simply circled the model a few times while pointing a tablet at the subject. A full 3D rendering of the model slowly appeared on the screen behind the stage in just a few minutes. The resulting 3D models can be manipulated with software or sent to a 3D printer.


"The idea is you go out, you see something you like and you just capture it," Krzanich explained. He said consumer tablets with built in 3D scanners will hit the market in the third or fourth quarter of 2015, with Intel also working on putting the 3D scanning cameras on drones.


The predecessor to the 3D scanning tablets demonstrated on stage was announced earlier this month: the Dell Venue 8 7000 series Android tablet, which sports Intel's RealSense snapshot depth camera and brings light-field camera-like capabilities to a tablet. It will be available later this year.


Two medical trials shed light on how Apple's HealthKit will work


Two prominent U.S. hospitals are preparing to launch trials with diabetics and chronic disease patients using Apple Inc's (AAPL.O) HealthKit, offering a glimpse of how the iPhone maker's ambitious take on healthcare will work in practice.


HealthKit, which is still under development, is the center of a new healthcare system by Apple. Regulated medical devices, such as glucose monitors with accompanying iPhone apps, can send information to HealthKit. With a patient's consent, Apple's service gathers data from various health apps so that it can be viewed by doctors in one place.

Stanford University Hospital doctors said they are working with Apple to let physicians track blood sugar levels for children with diabetes. Duke University is developing a pilot to track blood pressure, weight and other measurements for patients with cancer or heart disease.


The goal is to improve the accuracy and speed of reporting data, which often is done by phone and fax now. Potentially doctors would be able to warn patients of an impending problem. The pilot programs will be rolled out in the coming weeks.


Apple last week mentioned the trials in a news release announcing the latest version of its operating system for phones and tablets, iOS 8, but this is the first time any details have been made public. Apple declined to comment for this article.


Apple aims eventually to work with health care providers across the United States, including hospitals which are experimenting with using technology to improve preventative care to lower healthcare cost and make patients healthier.


Reuters previously reported that Apple is in talks with other U.S. hospitals. Stanford Children's Chief Medical Information Officer Christopher Longhurst told Reuters that Stanford and Duke were among the furthest along.


Longhurst said that in the first Stanford trial, young patients with Type 1 diabetes will be sent home with an iPod touch to monitor blood sugar levels between doctor's visits.


HealthKit makes a critical link between measuring devices, including those used at home by patients, and medical information services relied on by doctors, such as Epic Systems Corp, a partner already announced by Apple.


Medical device makers are taking part in the Stanford and Duke trials.

DexCom Inc (DXCM.O), which makes blood sugar monitoring equipment, is in talks with Apple, Stanford, and the U.S. Food and Drug Administration about integrating with HealthKit, said company Chief Technical Officer Jorge Valdes.


DexCom's device measures glucose levels through a tiny sensor inserted under the skin of the abdomen. That data is transmitted every five minutes to a hand-held receiver, which works with a blood glucose meter. The glucose measuring system then sends the information to DexCom's mobile app, on an iPhone, for instance.


Under the new system, HealthKit can scoop up the data from DexCom, as well as other app and device makers.


Data can be uploaded from HealthKit into Epic's "MyChart" application, where it can be viewed by clinicians in Epic's electronic health record.



Vivid, full-color aluminum plasmonic pixels to create the LCD color display of the future


The quest to create camouflaging metamaterials that can “see” colors and automatically blend into the background is one step closer to reality, thanks to a breakthrough color-display technology unveiled this week by Rice University‘s Laboratory for Nanophotonics (LANP).


The new full-color display technology uses aluminum nanorods to create the vivid red, blue and green hues found in today’s top-of-the-line LCD televisions and monitors.


The technology is described in a new study in the Early Edition of the Proceedings of the National Academy of Sciences (PNAS) (open access).

The breakthrough is the latest in a string of recent discoveries by a Rice-led team that set out in 2010 to create metamaterials capable of mimicking the camouflage abilities of cephalopods — the family of marine creatures that includes squid, octopus and cuttlefish.


“Our goal is to learn from these amazing animals so that we could create new materials with the same kind of distributed light-sensing and processing abilities that they appear to have in their skins,” said LANP Director Naomi Halas, a co-author of the PNAS study.


She is the principal investigator on a $6 million Office of Naval Research grant for a multi-institutional team that includes marine biologists Roger Hanlon of the Marine Biological Laboratory in Woods Hole, Mass., and Thomas Cronin of the University of Maryland, Baltimore County.


“We know cephalopods have some of the same proteins in their skin that we have in our retinas, so part of our challenge, as engineers, is to build a material that can ‘see’ light the way their skin sees it, and another challenge is designing systems that can react and display vivid camouflage patterns,” Halas said.


LANP’s new color display technology delivers bright red, blue and green hues from five-micron-square pixels that each contains several hundred aluminum nanorods. By varying the length of the nanorods and the spacing between them, LANP researchers Stephan Link and Jana Olson showed they could create pixels that produced dozens of colors, including rich tones of red, green and blue that are comparable to those found in high-definition LCD displays.


“Aluminum is useful because it’s compatible with microelectronic production methods, but until now the tones produced by plasmonic aluminum nanorods have been muted and washed out,” said Link, associate professor of chemistry at Rice and the lead researcher on the PNAS study. “The key advancement here was to place the nanorods in an ordered array.”


Quantum camera can take photos in almost complete darkness, needs less than one photon per pixel

Using quantum-entangled pairs of photons as the shutter trigger for a super-high-speed camera, researchers can actually create images from less than a single photon per pixel.


It’s no secret that cameras are quickly getting better at capturing images in low light. Researchers at the University of Glasgow have pushed this trend to create an imager that can work with less than 1 photon per pixel. By combining two esoteric technologies — photon heralding and compressive imaging — the team has achieved a milestone that on the surface seems impossible. Leaving aside the huge amount of math and physics required under the covers, the process itself is actually fairly straightforward and very clever.


The first half of the weird science — which is also called “ghost imaging” — is based on what are called heralded photons. Under certain circumstances pairs of quantum-entangled photons can be produced using a process called spontaneous parametric down-conversion (SPDC) and then split apart. Most of the time, when you detect one, the other one can also be detected. The detection of the first photon “heralds” the existence of the second.


The team’s imager uses a beam splitter to send one of each pair of photons it creates through the object being imaged (the camera only works for creating images of transmissive targets) and to a very sensitive single-pixel detector. It sends the other to a high-speed camera. The detector is only activated when a photon is sensed coming through the target object. When one does, the detector sends a signal to open the shutter of the camera — located at the end of the path of the other photon from the original pair — for about 15 nanoseconds. That’s long enough to record the position of the second — heralded — photon, but short enough to keep out almost all background noise. In essence, the single-pixel detector acts as a very high-speed shutter for the camera, so that it only takes pictures of photons that have passed through the target. To allow time for the shutter release signal to get from the detector to the camera, a delay line of about 70 nanoseconds is added to the photon’s path to the camera.


This use of heralded photons gets the imager’s light needs down to almost one photon per pixel — although there is still the unavoidable shot noise that comes with the Poisson distribution of photons. Compressive imaging allows the imager to deal with this noise, and to push the boundaries even further – to less than one photon per pixel. By relying on the inherent redundancy of information in natural subjects, compressive imaging uses frequency domain information – in this case generated by performing a Discrete Cosine Transform (DCT) on the image — to essentially reconstruct portions of the image that were not directly captured.
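The sketch below illustrates that redundancy argument numerically; it is only a toy demonstration of why natural images can be rebuilt from a small amount of frequency-domain data, not the Glasgow group's actual reconstruction code, and the synthetic image and 5 percent threshold are arbitrary choices:

import numpy as np
from scipy.fft import dctn, idctn

# Build a smooth, "natural-looking" synthetic image.
x, y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
img = np.exp(-((x - 0.4) ** 2 + (y - 0.6) ** 2) / 0.05) + 0.2 * x

# Natural images are close to sparse in the DCT domain: most coefficients are tiny.
coeffs = dctn(img, norm="ortho")
keep = np.abs(coeffs) >= np.quantile(np.abs(coeffs), 0.95)   # keep only the largest 5%

# Rebuild the image from that small fraction of frequency-domain information.
recon = idctn(np.where(keep, coeffs, 0.0), norm="ortho")
rel_err = np.linalg.norm(recon - img) / np.linalg.norm(img)
print(f"relative error using 5% of DCT coefficients: {rel_err:.2%}")

Compressive imaging goes a step further, solving an optimization problem for the sparsest frequency-domain description consistent with the photons that were actually detected, but it relies on the same underlying sparsity.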


This amazing camera isn’t just for show. The team hopes it can lead to the development of cameras for use in science research that can be used to study and document subjects that are very light-sensitive, like certain biological specimens.


Reference: arXiv:1408.6381 - "Imaging with a small number of photons"


Mapping Wi-Fi dead zones with Helmholtz equation


A home's Wi-Fi dead zones are, to most of us, a problem solved with guesswork. Your laptop streams just fine in this corner of the bedroom, but not the adjacent one; this arm of the couch is great for uploading photos, but not the other one. You avoid these places, and where the Wi-Fi works becomes a factor in the wear patterns of your home. In an effort to better understand, and possibly eradicate, his Wi-Fi dead zones, one man took the hard way: he solved the Helmholtz equation.


The Helmholtz equation models the propagation of electromagnetic waves; solving it involves using a sparse matrix to help minimize the amount of calculation a computer has to do in order to figure out the paths and interferences of the waves, in this case from a Wi-Fi router. The whole process is similar to how scattered granular material, like rice or salt, will form complex patterns on top of a speaker depending on where the sound waves are hitting the surfaces.


The author of the post in question, Jason Cole, first solved the equation in two dimensions, and then applied it to his apartment's long and narrow two-bedroom layout. He wrote that he took his walls to have a very high refractive index, while empty space had a refractive index of 1.
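A minimal sketch of a calculation in that spirit is shown below; the floor plan, wall refractive index, router position and 2.4 GHz frequency are all assumptions for illustration, not Cole's actual values, and scipy's sparse solver simply stands in for whatever solver he used:

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# 2D Helmholtz equation: (laplacian + k0^2 n(x, y)^2) E = -source
nx, ny, h = 300, 150, 0.02          # grid and spacing in metres (a 6 m x 3 m flat; coarse, for illustration)
k0 = 2 * np.pi / 0.125              # free-space wavenumber for 2.4 GHz Wi-Fi (wavelength ~12.5 cm)

n = np.ones((ny, nx))                                # refractive index: 1 in empty space
n[:2, :] = n[-2:, :] = n[:, :2] = n[:, -2:] = 2.5    # outer walls, assumed high index
n[:, nx // 2 - 1 : nx // 2 + 1] = 2.5                # one interior wall
n[ny // 2 - 8 : ny // 2 + 8, nx // 2 - 1 : nx // 2 + 1] = 1.0   # with a doorway in it

N = nx * ny
main = -4.0 / h**2 + (k0 * n.ravel()) ** 2           # stencil centre plus k0^2 n^2
horiz = np.ones(N - 1) / h**2
horiz[np.arange(1, ny) * nx - 1] = 0.0               # no coupling across row ends
vert = np.ones(N - nx) / h**2
A = sp.diags([main, horiz, horiz, vert, vert], [0, 1, -1, nx, -nx], format="csc")

source = np.zeros(N)
source[(ny // 4) * nx + nx // 6] = 1.0               # router position (arbitrary spot in one room)

E = spla.spsolve(A, -source)                         # one sparse linear solve
signal = np.abs(E.reshape(ny, nx)) ** 2              # |E|^2 as a crude signal-strength map

Giving the walls some absorption, as Cole did for his concrete, amounts to adding an imaginary part to n, and the time-dependent behaviour described further down comes from solving the full wave equation rather than this steady-state form.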


Cole found in his simulation he could get pretty good coverage even with his router in one corner of the room, but could get "tendrils of Internet goodness" everywhere if he placed the router right in the center of the apartment. In a simulation where he gave the concrete some absorption potential, he found a map more like what he expected: excellent reception immediately around the router, and beams that shone into various rooms with periodic strong spots from the waves' interference.


When he introduced time to the system, Cole was able to simulate how his apartment might fill with waves over a certain period and eventually become an oscillating standing wave forming pockets of high activity. For instance, the Wi-Fi signal hits a pretty good curve around the doorway into the second bedroom for good reception in a band a couple of feet wide down the center; there's also surprisingly good signal behind a thick wall in the upper right corner of the floor plan.


Cesium: The element that redefined time


Until about 175 years ago, it was the sun that defined time. Wherever you were, high noon was high noon, and on a clear day a quick glance up into the sky or down at a sundial told you everything you needed to know.


It was not until the 1930s that the physicist Louis Essen developed the first quartz ring clock, the most accurate timepiece of its day, and a precursor of the cesium clock.


Quartz clocks exploit the fact that quartz crystals vibrate at a very high frequency if the right electrical charge is applied to them. This is known as a resonant frequency; everything on earth has one.


It is hitting the resonant frequency of a champagne glass that - allegedly - allows a soprano to shatter it when she hits her top note. It also explains why a suspension bridge at Broughton in Lancashire collapsed in 1831. Troops marching over it inadvertently hit its "resonant frequency", setting up such a strong vibration the bolts sheared. Ever since, troops have been warned to "break step" when crossing suspension bridges.


To understand how this phenomenon helps you to measure time, think of the pendulum of a grandfather clock. The clock mechanism counts a second each time it swings.


Quartz plays the same role as a pendulum, just a lot quicker: it vibrates at a resonant frequency many thousands of times a second.

And that's where cesium comes in. It has a far higher resonant frequency even than quartz - 9,192,631,770 Hz, to be precise. This is one reason Essen used the element to make the first of the next generation of clocks - the "atomic" clocks.


Essen's quartz creation erred just one second in three years. His first atomic clock created at NPL in 1955 was accurate to one second in 1.4 million years. The cesium fountain at NPL today is accurate to one second in every 158 million years. That means it would only be a second out if it had started keeping time back in the peak of the Jurassic Period when diplodocus were lumbering around and pterodactyls wheeling in the sky.


But modern technology means that these days even more staggeringly accurate clocks are possible. That's because cesium was always a compromise element when it came to timekeeping. Louis Essen chose cesium because the frequency of its transition was at the limit of what the technology of his day could measure. Today we have new ways of measuring time.


The frequency of the transition of strontium, for example, is 444,779,044,095,486.71 Hz. A strontium clock developed in the US would only have lost a second since the earth began: it is accurate to a second in five billion years. The scientists at NPL reckon optical clocks that keep time to within one second in 14 billion years are on the horizon - that's longer than the universe has been around.
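Translated into fractional frequency errors, the accuracy figures quoted in this piece work out as follows (a quick back-of-envelope script using only those numbers):

SECONDS_PER_YEAR = 365.25 * 24 * 3600

def fractional_error(years_per_second_lost):
    # "Loses one second in N years" expressed as a dimensionless frequency error.
    return 1.0 / (years_per_second_lost * SECONDS_PER_YEAR)

clocks = [("Essen's quartz clock", 3),
          ("first cesium clock, 1955", 1.4e6),
          ("NPL cesium fountain today", 158e6),
          ("strontium optical clock", 5e9)]

for name, years in clocks:
    print(f"{name}: about {fractional_error(years):.0e}")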


Now, if such insane levels of accuracy seem pointless, then think again. Without the cesium clock, for example, satellite navigation would be impossible. GPS satellites carry synchronized cesium clocks that enable them collectively to triangulate your position and work out where on earth you are.


Captioning on Glass: Google Glass can now display captions for hard-of-hearing users


Georgia Institute of Technology researchers have created a speech-to-text Android app for Google Glass that displays captions for hard-of-hearing persons when someone is talking to them in person. “This system allows wearers like me to focus on the speaker’s lips and facial gestures,” said School of Interactive Computing Professor Jim Foley.


“If hard-of-hearing people understand the speech, the conversation can continue immediately without waiting for the caption. However, if I miss a word, I can glance at the transcription, get the word or two I need and get back into the conversation.”


“The smartphone uses the Android transcription API to convert the audio to text,” said Jay Zuerndorfer, the Georgia Tech Computer Science graduate student who developed the app. “The text is then streamed to Glass in real time.”


The “Captioning on Glass” app is now available to install from MyGlass. More information here.


Tesla CEO Elon Musk promises a self-driving model for next year


Last night, Elon Musk told the world that Tesla was ready to reveal its "D" on October 9th, as well as preparing us for "something else" to expect along the way. But the CEO isn't done teasing just yet. In a recent interview with CNN Money, Musk let it be known that a Tesla car next year "will probably be 90 percent capable of autopilot," though he didn't dive into any specifics about which model(s) this comment was in reference to.


"So 90 percent of your miles could be on auto. For sure highway travel," the Tesla boss added. Such a thingwould be possible, Musk said, by combining different sensors with image-recognition cameras, radars and long-rage ultrasonics -- which, without a doubt, paints a bright picture for future vehicles from the company. "Other car companies will follow ... Tesla is a Silicon Valley company. I mean, if we're not the leader, then shame on us."


IBM opens a new era of computing with brain-like chip: 4096 cores, 1 million neurons, 5.4 billion transistors


Scientists at IBM Research have created by far the most advanced neuromorphic (brain-like) computer chip to date. The chip, called TrueNorth, consists of 1 million programmable neurons and 256 million programmable synapses across 4096 individual neurosynaptic cores. Built on Samsung’s 28nm process and with a monstrous transistor count of 5.4 billion, this is one of the largest and most advanced computer chips ever made. Perhaps most importantly, though, TrueNorth is incredibly efficient: The chip consumes just 72 milliwatts at max load, which equates to around 400 billion synaptic operations per second per watt — or about 176,000 times more efficient than a modern CPU running the same brain-like workload, or 769 times more efficient than other state-of-the-art neuromorphic approaches. Yes, IBM is now a big step closer to building a brain on a chip.
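Taking the figures quoted above at face value, the arithmetic works out as follows; this is only a sanity check on the stated numbers, not additional data from IBM:

power_w = 72e-3           # max-load power quoted above
sops_per_watt = 400e9     # synaptic operations per second per watt
cpu_ratio = 176_000       # claimed efficiency advantage over a CPU on the same workload

print(f"TrueNorth at max load: {power_w * sops_per_watt:.1e} synaptic ops/s")
print(f"Implied CPU figure:    {sops_per_watt / cpu_ratio:.1e} synaptic ops/s per watt")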


The animal brain (which includes the human brain, of course), as you may have heard before, is by far the most efficient computer in the known universe. The human brain has a “clock speed” (neuron firing speed) measured in tens of hertz, and a total power consumption of around 20 watts. A modern silicon chip, despite having features that are almost on the same tiny scale as biological neurons and synapses, can consume thousands or millions of times more energy to perform the same task as a human brain. As we move towards more advanced areas of computing, such as artificial general intelligence and big data analysis — areas that IBM just happens to be deeply involved with — it would really help if we had a silicon chip that was capable of brain-like efficiency.


The Internet Of Things Could Become A Big Trend in 2015


Goldman Sachs report on the Internet of things:


The Internet of Things (IoT) is emerging as the  third wave in the development of the Internet. The  1990s’ fixed Internet wave connected 1 billion users while the 2000s’ mobile wave connected another 2 billion.


The IoT has the potential to connect 10X as many (28 billion) “things” to the Internet by 2020, ranging from bracelets to cars.


Breakthroughs in the cost of sensors, processing power and bandwidth to connect devices are enabling ubiquitous connections right now. Early simple products like fitness trackers and thermostats are already gaining traction.


Lots of room to participate ...
Personal lives, workplace productivity and consumption will all change. Plus there will be a string of new businesses, from those that will expand the Internet “pipes”, to those that will analyze the reams of data, to those that will make new things we have not even thought of yet.


Benchmarking the future: early adopters
We see five key early verticals of adoption: Wearables, Cars, Homes, Cities, and Industrials.


Test cases for what the IoT can achieve 
Focus is on new products and sources of revenue and new ways to achieve cost efficiencies that can drive sustainable competitive advantages. 


Key to watch out for 

Privacy and security concerns. A likely source of friction on the path to adoption. 


Focus, Enablers, Platforms, & Industrials

The IoT building blocks will come from those that can web-enable devices, provide common platforms on which they can communicate, and develop new applications to capture new users.


Enablers and Platforms

We see increased share for Wi-Fi, sensors and low-cost microcontrollers. Focus is on software applications for managing communications between devices, middleware, storage, and data analytics.


Industrials

Home automation is at the forefront of the early product opportunity, while factory floor optimization may lead the efficiency side.


75 billion. That's the potential size of the Internet of Things sector, which could become a multi-trillion dollar market by the end of the decade.


That's a very big number of devices, extrapolated from a Cisco report that details how many devices will be connected to the Internet of Things by 2020. It works out to 9.4 devices for every one of the 8 billion people expected to be around in seven years.


To help put that into more perspective, Cisco also came out with the number of devices it thinks were connected to the Internet in 2012, a number Cisco's Rob Soderbery placed at 8.7 billion. Most of the devices at the time, he acknowledged, were the PCs, laptops, tablets and phones in the world. But other types of devices will soon dominate the collection of the Internet of Things, such as sensors and actuators.


By the end of the decade, a nearly nine-fold increase in the volume of devices on the Internet of Things will mean a lot of infrastructure investment and market opportunities will be available in this sector. And by "a lot," I mean ginormous. In an interview with Barron's, Cisco CEO John Chambers figures that will translate to a $14-trillion industry.
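The per-person and growth figures are easy to reproduce from the numbers quoted in this piece (75 billion projected devices, 8 billion people, Cisco's 8.7 billion devices in 2012):

devices_2020 = 75e9
devices_2012 = 8.7e9
people_2020 = 8e9

print(f"devices per person: {devices_2020 / people_2020:.1f}")    # ~9.4
print(f"growth vs. 2012:    {devices_2020 / devices_2012:.1f}x")   # ~8.6, i.e. nearly nine-fold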


See also: Cisco Hearts Internet Of Things


This Device Lets Fully Paralyzed Rats Walk Again, and Human Trials Are Planned


In the past few years, there have been some pretty impressive breakthroughs for those suffering from partial paralysis, but a frustrating lack of successes when it comes to those who are fully paralyzed. Now, however, scientists working on project NEUWalk at the Swiss Federal Institute of Technology in Lausanne (EPFL) have figured out a way to reactivate the severed spinal cords of fully paralyzed rats, allowing them to walk again via remote control. And, the researchers say, their system is just about ready for human trials.


Previous studies have had some success in using epidural electrical stimulation (EES) to improve motor control in rodents and humans with spinal cord injuries. However, electrically stimulating neurons in a way that allows natural walking is no easy task, and it requires extremely quick and precise stimulation.


As the researchers wrote in a study published in Science Translational Medicine, "manual adjustment of pulse width, amplitude, and frequency" of the electrical signal being supplied to the spinal cord was required in EES treatment, until now. 


Manual adjustments don't exactly work when you're trying to walk.

The team developed algorithms that can generate and accommodate feedback in real-time during leg movement, making motion natural. Well, sort of. We’re talking about rats with severed spinal cords hooked up to electrodes being controlled by advanced algorithms, after all.


"We have complete control of the rat's hind legs," EPFL neuroscientist Grégoire Courtine said in a statement. "The rat has no voluntary control of its limbs, but the severed spinal cord can be reactivated and stimulated to perform natural walking. We can control in real-time how the rat moves forward and how high it lifts its legs."


Scientists reconstruct 3D shape of a nanoscale crystal with atomic precision from a single image

Scientists from Jülich and Xi’an have developed a new method with which crystal structures can be reconstructed with atomic precision in all three dimensions.


An important characteristic of nanoparticles is that they differ from other kinds of materials in that their surface determines their physical and technical properties to a much larger extent. The efficiency of catalysts, for instance, depends predominantly on the shape of the materials used and their surface texture. For this reason, physicists and material scientists are interested in being able to determine the structure of nanomaterials from all angles and through several layers, right down to the last atom. Until now, it was necessary to perform a whole series of tests from different angles to do so. However, scientists at Forschungszentrum Jülich, the Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons (ER-C) and Xi’an Jiaotong University in China have now succeeded for the first time in calculating the spatial arrangement of the atoms from just a single image from an electron microscope.


Their approach offers many advantages; radiation-sensitive samples can also be studied, which would otherwise be quickly damaged by the microscope’s high-energy electron beam. The comparatively short data acquisition time involved could even make it possible in the future to observe the transient intermediate steps of chemical reactions. Moreover, it enables a "gentle" measurement procedure to take place, to detect not only heavy but also light chemical elements, such as oxygen, which has an important function in many technologically significant materials.


"Acquiring three-dimensional information from a single two-dimensional image seems impossible at first glance. Nevertheless, it is in fact possible; we don’t obtain a simple two-dimensional projection of the three-dimensional sample as the experiment follows quantum mechanical principles instead", explains Prof. Chunlin Jia, researcher at the Jülich Peter Grünberg Institute, in Microstructure Research (PGI-5), the ER-C and at Jiaotong University. "On its way through the crystal lattice, the electron wave of the microscope acts as a highly sensitive atom detector and is influenced by each individual atom. The key point is that it does actually make a difference whether the wave front encounters an atom at the beginning or at the end of its pathway through the crystal."


Stephen Wolfram: Introducing Tweet-a-Program


Wouldn't it be great if you could just call up a supercomputer and ask it to do your data-wrangling for you? Actually, scratch that, no-one uses the phone anymore. What'd be really cool is if machines could respond to your queries straight from Twitter. It's a belief that's shared by Wolfram Research, which has just launched the Tweet-a-Program system for its computational knowledge engine, Wolfram Alpha. In a blog post, founder Stephen Wolfram explains that even complex queries can be executed within the space of 140 characters, including data visualizations.


In the Wolfram Language a little code can go a long way, and to use that fact to let everyone have some fun, Wolfram has introduced Tweet-a-Program: compose a tweet-length Wolfram Language program and tweet it to @WolframTaP. The Twitter bot will run your program in the Wolfram Cloud and tweet the result back to you. One can do a lot with Wolfram Language programs that fit in a tweet. It’s easy to make interesting patterns or even complicated fractals. Putting in some math makes it easy to get all sorts of elaborate structures and patterns.


The Wolfram Language not only knows how to compute π, as well as a zillion other algorithms; it also has a huge amount of built-in knowledge about the real world. So right in the language, you can talk about movies or countries or chemicals or whatever. One 78-character program, for example, makes a collage of the flags of Europe, sized according to country population. There are many, many kinds of real-world knowledge built into the Wolfram Language, including some pretty obscure ones. The Wolfram Language does really well with words and text and deals with images too.

Martin (Marty) Smith's curator insight, September 24, 2014 8:40 PM

Now THIS is the coolest thing I read today. Ever see the great movie 3 Days of the Condor, when Robert Redford calls the computer and, via a series of commands, traces a phone number? That was COOL. This "tweet-a-program" to control a supercomputer with 140 characters is awesome.



The first flexible graphene display paves the way for folding electronics

The first flexible display device based on graphene has been unveiled by scientists in the UK, who say it is the first step on the road towards next generation gadgets that can be folded, rolled or crumpled up without cracking the screen.
 
The device is the result of a collaboration between Plastic Logic, a company that specialises in flexible displays, and researchers led by Andrea Ferrari at the University of Cambridge. Although others have successfully used graphene to make screen components before, this is the first example of a flexible screen that uses graphene-based electronics.

‘What we have done here is to include graphene in the actual backplane pixel technology,’ says Ferrari. ‘This shows that in principle the properties of graphene – conductivity, flexibility and so on – can be exploited within a real-world display.’

Graphene researcher Jonathan Coleman from Trinity College Dublin in Ireland, who was not involved in the research, described the advance as a ‘major landmark’ that could help kick-start the commercialisation of graphene devices. ‘We need some sort of big win, and this could very well be it,’ he says.

The team’s prototype is an electrophoretic display containing the kind of ‘electronic ink’ found in e-readers that works by reflecting – rather than emitting – light. Plastic Logic have been working on making these displays flexible for some time by replacing the glass with bendy plastic, and using non-brittle components in the electronic layer. Graphene is an ideal material for this, as it is more flexible and more conductive than the metals currently used. The team managed to make the graphene electrode in a way that is compatible with electronics manufacturing, using solution processing rather than chemical vapour deposition, which often requires temperatures exceeding 1000°C.

‘All the major companies are trying to make bendable and flexible gadgets,’ says Ferrari. ‘We think that graphene will be a powerful addition to that, and if we manage to make the process easy, scalable and cheap enough, then it should be considered very strongly by industry.’

As current displays go, the team’s prototype is basic, capable of showing images in black and white at a resolution of 150 pixels per inch – akin to that of a basic e-reader. But Ferrari’s team are working on applying the same technology to make graphene-based LCD and OLED displays like those used in smartphones and tablets, capable of showing full colour images and playing video. Their goal is to have these ready within the next 12 months.

Coleman thinks this target is achievable. ‘These solution processed graphene products tick a lot of the boxes that are required to develop these technologies,’ he says. ‘It’s hard to say whether they’ll get there, but I would be confident. The partners here are very well suited to achieve these goals.’

Incredibly Small 3D Printed Middle Ear Prosthesis is Achieved on a 3D Systems Printer


3D printing has been providing various forms of prosthetic devices such as fingers, hands, arms and legs for a short time now, mostly due to the fact that it is affordable, easy to use, faster than traditional manufacturing, and provides for total customization. Companies are also really beginning to see the potential of 3D printing in the rapid prototyping of medical products.


One company, Potomac Laser, has been in the business of specializing in and creating medical devices, as well as other unique electronic devices for over 32 years now. Located in Baltimore, Maryland, they use 3D printing, laser micromachining, micro CNC and micro drilling in their many unique projects.


Just recently, a woman by the name of Monika Kwacz, who is a researcher at the Institute of Micromechanics and Photonics at Warsaw Technical University in Poland, contacted Potomac Laser to see if they could help her 3D print something almost unheard of. She had been studying stapedotomy, a surgical procedure that aims to restore hearing in those who suffer from fixation of the stapes. The stapes, which is one of the 3 tiny bones within the middle ear involved in the conduction of sound vibrations to the inner ear, is the smallest and lightest bone within the human body.


Millions of people in the US alone suffer from a condition called Otosclerosis, where the stapes becomes stuck in a fixed position and can no longer efficiently receive and transmit the vibrations needed for a subject to hear properly. This is mostly due to a mineralization process of the bone and surrounding tissue. It is estimated that 10% of the world’s adult Caucasian population suffers from this condition in one form or another.

