Amazing Science
Scooped by Dr. Stefan Gruenwald

Super-high resolution non-destructive electron microscopy of soft materials like biomaterials


Soft matter encompasses a broad swath of materials, including liquids, polymers, gels, foam and — most importantly — biomolecules. At the heart of soft materials, governing their overall properties and capabilities, are the interactions of nano-sized components. Observing the dynamics behind these interactions is critical to understanding key biological processes, such as protein crystallization and metabolism, and could help accelerate the development of important new technologies, such as artificial photosynthesis or high-efficiency photovoltaic cells.


Observing these dynamics at sufficient resolution has been a major challenge, but this challenge is now being met with a new non-invasive nanoscale imaging technique that goes by the acronym of CLAIRE.


CLAIRE stands for “cathodoluminescence activated imaging by resonant energy transfer.” Invented by researchers with the U.S. Department of Energy (DOE)’s Lawrence Berkeley National Laboratory (Berkeley Lab) and the University of California (UC) Berkeley, CLAIRE extends the extremely high resolution of electron microscopy to the dynamic imaging of soft matter.


“Traditional electron microscopy damages soft materials and has therefore mainly been used to provide topographical or compositional information about robust inorganic solids or fixed sections of biological specimens,” says chemist Naomi Ginsberg, who leads CLAIRE’s development and holds appointments with Berkeley Lab’s Physical Biosciences Division and its Materials Sciences Division, as well as UC Berkeley’s departments of chemistry and physics.


“CLAIRE allows us to convert electron microscopy into a new non-invasive imaging modality for studying soft materials and providing spectrally specific information about them on the nanoscale.”


Ginsberg is also a member of the Kavli Energy NanoScience Institute (Kavli-ENSI) at Berkeley. She and her research group recently demonstrated CLAIRE’s imaging capabilities by applying the technique to aluminum nanostructures and polymer films that could not have been directly imaged with electron microscopy.


“What microscopic defects in molecular solids give rise to their functional optical and electronic properties? By what potentially controllable process do such solids form from their individual microscopic components, initially in the solution phase? The answers require observing the dynamics of electronic excitations or of molecules themselves as they explore spatially heterogeneous landscapes in condensed phase systems,” Ginsberg says.


“In our demonstration, we obtained optical images of aluminum nanostructures with 46 nanometer resolution, then validated the non-invasiveness of CLAIRE by imaging a conjugated polymer film. The high resolution, speed and non-invasiveness we demonstrated with CLAIRE position us to transform our current understanding of key biomolecular interactions.”


First Demonstration of a Surveillance Camera Powered by Ordinary Wi-Fi Broadcasts

The ability to power remote sensors and devices using Wi-Fi signals could be the enabling technology behind the Internet of things, say electrical engineers.


One of the most significant barriers to deploying sensors, cameras, and communicators is the question of power. The task of fitting a security camera on an external wall or a temperature sensor in an attic immediately runs into the question of how to run a power cable to the device or to arrange for batteries to be replaced on a regular basis.


Then there is the Internet of things, the idea that almost every object could be fitted with a chip that broadcasts data such as its location, whether it is full or empty or whether some other parameter such as temperature or pressure is dangerously high or low.


Great things are expected of the Internet of things but only if engineers can solve one potential show-stopper of a question: how to power these numerous tiny machines. Today, we get an answer thanks to the work of Vamsi Talla and pals at the University of Washington in Seattle. These guys have developed a way to broadcast power to remote devices using an existing technology that many people already have in their living rooms: ordinary Wi-Fi. They call their new approach power over Wi-Fi or PoWi-Fi.


The University of Washington team’s approach to this is refreshingly straightforward. They simply connect an antenna to a temperature sensor, place it close to a Wi-Fi router and measure the resulting voltages in the device and for how long it can operate on this remote power source alone. The simple answer is that the voltage across the sensor is never high enough to cross the operating threshold of around 300 millivolts. However, it often comes close.


But a closer examination of the data makes for interesting reading. The problem is that Wi-Fi broadcasts are not continuous. Routers tend to broadcast on a single channel in bursts. This provides enough power for the sensor but as soon as the broadcast stops, the voltages drop. The result is that, on average, the sensor does not have enough juice to work.


That gave Talla and pals an idea. Why not program the router to broadcast noise when it is not broadcasting information, and employ adjacent Wi-Fi channels to carry it so that it doesn’t interfere with data rates? And that’s exactly what they’ve done. To do this they require the electronic innards of three routers, one for each of the channels they intend to broadcast on. Wi-Fi broadcasts can be on any of 11 overlapping channels within a 72 MHz band centered on the 2.4 GHz frequency. This allows for three non-overlapping channels to be broadcast simultaneously.
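The effect of filling the idle air-time can be shown with a toy calculation: average harvested power is peak power times duty cycle, so keeping the channel busy raises the average without raising the peak. The numbers below (1 mW peak, 10% data duty cycle) are illustrative assumptions, not figures from the paper.

```python
# Toy model of the PoWi-Fi idea: a harvester integrates whatever RF power
# is present, so a router that bursts data ~10% of the time delivers far
# less average power than one whose idle gaps are filled with noise on
# adjacent channels. Peak power and duty cycles are illustrative only.

def avg_harvested_power(peak_mw: float, duty_cycle: float) -> float:
    """Average power at the harvester for a bursty transmitter."""
    return peak_mw * duty_cycle

bursty = avg_harvested_power(peak_mw=1.0, duty_cycle=0.10)  # data bursts only
filled = avg_harvested_power(peak_mw=1.0, duty_cycle=1.00)  # idle time filled

print(f"bursty channel: {bursty:.2f} mW average")
print(f"noise-filled:   {filled:.2f} mW average")
```

Only the average rises; the peak is unchanged. That average is what has to keep the sensor's supply voltage above its roughly 300-millivolt operating threshold between bursts.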


Billboards of the future could show astonishing 3D effects, thanks to a new technology from Austria


Huge 3D Displays without 3D Glasses: A new invention opens the door to a new generation of outdoor displays. Different pictures can be seen at different angles, creating 3D effects without the need for 3D glasses.


Public screenings have become an important part of major sports events. In the future, we will be able to enjoy them in 3D, thanks to a new invention from Austrian scientists. A sophisticated laser system sends laser beams into different directions. Therefore, different pictures are visible from different angles. The angular resolution is so fine that the left eye is presented a different picture than the right one, creating a 3D effect.

In 2013, the young start-up company TriLite Technologies had the idea to develop this new kind of display, which sends beams of light directly to the viewers’ eyes. The highly interdisciplinary project was carried out together with the Vienna University of Technology.

A Start-up Company and a University
Together, TriLite and TU Vienna have created the first prototype. Currently it only has a modest resolution of five pixels by three, but it clearly shows that the system works. “We are creating a second prototype, which will display colour pictures with a higher resolution. But the crucial point is that the individual laser pixels work. Scaling it up to a display with many pixels is not a problem”, says Jörg Reitterer (TriLite Technologies and PhD-student in the team of Professor Ulrich Schmid at the Vienna University of Technology).

Every single 3D-Pixel (also called “Trixel”) consists of lasers and a moveable mirror. “The mirror directs the laser beams across the field of vision, from left to right. During that movement the laser intensity is modulated so that different laser flashes are sent into different directions”, says Ulrich Schmid. To experience the 3D effect, the viewer must be positioned in a certain distance range from the screen. If the distance is too large, both eyes receive the same image and only a normal 2D picture can be seen. The range in which the 3D effect can be experienced can be tuned according to the local requirements.
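The geometry behind that distance range can be sketched numerically. The model below assumes a beam pitch of 0.1 degrees between adjacent images and a typical 65 mm eye separation; both values are illustrative assumptions, not TriLite specifications.

```python
import math

# Sketch of the angular multiplexing behind the "Trixel" display: a mirror
# sweeps laser flashes across discrete directions, and a viewer perceives
# 3D only while the left and right eye fall into *different* beams.
# Beam pitch and eye separation are illustrative assumptions.

EYE_SEPARATION_M = 0.065          # typical interocular distance

def image_index(view_angle_rad: float, beam_pitch_rad: float) -> int:
    """Which of the multiplexed images a viewpoint at view_angle sees."""
    return int(view_angle_rad // beam_pitch_rad)

def sees_3d(distance_m: float, beam_pitch_rad: float) -> bool:
    """3D requires the two eyes to land in different beams."""
    angle_between_eyes = EYE_SEPARATION_M / distance_m  # small-angle approx
    return angle_between_eyes > beam_pitch_rad

beam_pitch = math.radians(0.1)    # assumed 0.1 degrees between images
for d in (10, 20, 40, 80):
    print(f"{d:3d} m: {'3D' if sees_3d(d, beam_pitch) else '2D'}")
```

With these assumed numbers the 3D effect holds out to roughly 37 m; beyond that both eyes fall into the same beam and the display degrades gracefully to 2D, exactly the behavior described above.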

Hundreds of Images at Once
3D movies in the cinema only show two different pictures – one for each eye. The newly developed display, however, can present hundreds of pictures. Walking by the display, one can get a view of the displayed object from different sides, just like passing a real object. For this, however, a new video format is required, which has already been developed by the researchers. “Today’s 3D cinema movies can be converted into our 3D format, but we expect that new footage will be created especially for our displays – perhaps with a much larger number of cameras”, says Franz Fiedler, CTO of TriLite Technologies. 

Compared to a movie screen, the display is very bright. Therefore it can be used outdoors, even in bright sunlight. This is not only interesting for 3D presentations but also for targeted advertisements. Electronic billboards could display different ads, seen from different angles. “Maybe someone wants to appeal specifically to the customers leaving the shop across the street, and a different ad is shown to the people waiting at the bus stop”, says Ferdinand Saint-Julien, CEO of TriLite Technologies. Technologically, this would not be a problem.

Entering the market

“We are very happy that the project was so successful in such a short period of time”, says Ulrich Schmid. It took only three years to get from the first designs to a working prototype. The technology has now been patented and presented in several scientific publications. The second prototype should be finished by the middle of the year, the commercial launch is scheduled for 2016.


New Camera Chip Provides Superfine 3-D Resolution


Imagine you need to have an almost exact copy of an object. Now imagine that you can just pull your smartphone out of your pocket, take a snapshot with its integrated 3-D imager, send it to your 3-D printer, and within minutes you have reproduced a replica accurate to within microns of the original object. This feat may soon be possible because of a new, tiny high-resolution 3-D imager developed at Caltech.


Any time you want to make an exact copy of an object with a 3-D printer, the first step is to produce a high-resolution scan of the object with a 3-D camera that measures its height, width, and depth. Such 3-D imaging has been around for decades, but the most sensitive systems generally are too large and expensive to be used in consumer applications.


A cheap, compact yet highly accurate new device known as a nanophotonic coherent imager (NCI) promises to change that. Using an inexpensive silicon chip less than a millimeter square in size, the NCI provides the highest depth-measurement accuracy of any such nanophotonic 3-D imaging device.


The work, done in the laboratory of Ali Hajimiri, the Thomas G. Myers Professor of Electrical Engineering in the Division of Engineering and Applied Science, is described in the February 2015 issue of Optics Express.



New technique harnesses everyday seismic waves to image Earth


A new technique developed at Stanford University harnesses the buzz of everyday human activity to map the interior of the Earth. "We think we can use it to image the subsurface of the entire continental United States," said Stanford geophysics postdoctoral researcher Nori Nakata.


Using tiny ground tremors generated by the rumble of cars and trucks across highways, the activities within offices and homes, pedestrians crossing the street and even airplanes flying overhead, a team led by Nakata created detailed three-dimensional subsurface maps of the California port city of Long Beach.


The maps, detailed in a recent issue of the Journal of Geophysical Research, mark the first successful demonstration of an elusive Earth-imaging technique called ambient noise body wave tomography. "It's a technique that scientists have been trying to develop for more than 15 years," said Nakata, who is the Thompson Postdoctoral Fellow at the School of Earth, Energy & Environmental Sciences.


There are two major types of seismic waves: surface waves and body waves. As their name suggests, surface waves travel along the surface of the Earth. Scientists have long been able to harness surface waves to study the upper layers of the planet's crust, and recently they have even been able to extract surface waves from the so-called ambient seismic field. Also known as ambient noise, these are very weak but continuous seismic waves that are generated by colliding ocean waves, among other things.


Body waves, in contrast, travel through the Earth, and as a result can provide much better spatial resolution of the planet's interior than surface waves. "Scientists have been performing body-wave tomography with signals from earthquakes and explosives for decades," said study coauthor Jesse Lawrence, an assistant professor of geophysics at Stanford. "But you can't control when and where an earthquake happens, and explosives are expensive and often damaging."


For this reason, geophysicists have long sought to develop a way to perform body wave tomography without relying on earthquakes or resorting to explosives. This has proven challenging, however, because body waves have lower amplitudes than surface waves, and are therefore harder to observe. "Usually you need to combine and average lots and lots of data to even see them," Lawrence said.


In the new study, the Stanford team applied a new software processing technique, called a body-wave extraction filter. Nakata developed the filter to analyze ambient noise data gathered from a network of thousands of sensors that had been installed across Long Beach to monitor existing oil reservoirs beneath the city. Using this filter, the team was able to create maps that revealed details about the subsurface of Long Beach down to a depth of more than half a mile (1.1 kilometers). The body-wave maps were comparable to, and in some cases better than, existing imaging techniques.
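The core trick in ambient-noise methods, recovering inter-station travel times by cross-correlating long noise recordings, can be sketched on synthetic data. Everything below (sampling rate, delay, noise level) is illustrative; the study's actual body-wave extraction filter is far more sophisticated.

```python
import numpy as np

# Sketch of the core idea behind ambient-noise imaging: cross-correlating
# long noise recordings from two sensors recovers the travel time of waves
# propagating between them, as if one sensor were a virtual source.
# Synthetic data only; all parameters are illustrative assumptions.

rng = np.random.default_rng(0)
fs = 100.0                # sampling rate (Hz), assumed
n = 200_000               # ~33 minutes of continuous recording
true_delay = 150          # propagation delay between stations, in samples

noise = rng.standard_normal(n)
station_a = noise
station_b = np.roll(noise, true_delay) + 0.5 * rng.standard_normal(n)

# Circular cross-correlation via FFT; the peak lag estimates travel time.
xcorr = np.fft.irfft(np.fft.rfft(station_b) * np.conj(np.fft.rfft(station_a)))
lag = int(np.argmax(xcorr))
print(f"estimated inter-station travel time: {lag / fs:.2f} s")
```

The buried delay emerges from the correlation even though neither trace shows an obvious arrival, which is why, as Lawrence notes above, lots of data must be combined and averaged to see the much weaker body waves.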


One map, for example, clearly revealed the Newport-Inglewood fault, an active geological fault that cuts through Long Beach. This fault also shows up in surface-wave maps, but the spatial resolution of the body-wave velocity map was much higher, and revealed new information about the velocity of seismic waves traveling through the fault's surrounding rocks, which in turn provides valuable clues about their composition and organization.


New Invention captures wasted cell phone energy and feeds it back to battery


Engineers  at The Ohio State University have created a circuit that makes cell phone batteries last up to 30 percent longer on a single charge. The trick: it converts some of the radio signals emanating from a phone into direct current (DC) power, which then charges the phone’s battery.

This new technology can be built into a cell phone case, adding minimal bulk and weight.


“When we communicate with a cell tower or Wi-Fi router, so much energy goes to waste,” explained Chi-Chih Chen, research associate professor of electrical and computer engineering. “We recycle some of that wasted energy back into the battery.”


“Our technology is based on harvesting energy directly from the source,” explained Robert Lee, professor of electrical and computer engineering. By Lee’s reckoning, nearly 97 percent of cell phone signals never reach a destination and are simply lost. Some of that energy can be captured.

The idea is to siphon off just enough of the radio signal to noticeably slow battery drain, but not enough to degrade voice quality or data transmission.


Cell phones broadcast in all directions at once to reach the nearest cell tower or Wi-Fi router. Chen and his colleagues came up with a system that identifies which radio signals are being wasted. It works only when a phone is transmitting.


Next, the engineers want to insert the device into a “skin” that sticks directly to a phone, or better, partner with a manufacturer to build it directly into a phone, tablet or other portable electronic device.


Biodegradable computer chips made from wood


Portable electronics -- typically made of non-renewable, non-biodegradable and potentially toxic materials -- are discarded at an alarming rate in consumers' pursuit of the next best electronic gadget.


In an effort to alleviate the environmental burden of electronic devices, a team of University of Wisconsin-Madison researchers has collaborated with researchers in the Madison-based U.S. Department of Agriculture Forest Products Laboratory (FPL) to develop a surprising solution: a semiconductor chip made almost entirely of wood.


The research team, led by UW-Madison electrical and computer engineering professor Zhenqiang "Jack" Ma, described the new device in a paper published today (May 26, 2015) by the journal Nature Communications. The paper demonstrates the feasibility of replacing the substrate, or support layer, of a computer chip, with cellulose nanofibril (CNF), a flexible, biodegradable material made from wood.


"The majority of material in a chip is support. We only use less than a couple of micrometers for everything else," Ma says. "Now the chips are so safe you can put them in the forest and fungus will degrade it. They become as safe as fertilizer." Zhiyong Cai, project leader for an engineering composite science research group at FPL, has been developing sustainable nanomaterials since 2009.


"If you take a big tree and cut it down to the individual fiber, the most common product is paper. The dimension of the fiber is in the micron stage," Cai says. "But what if we could break it down further to the nano scale? At that scale you can make this material, very strong and transparent CNF paper."


Working with Shaoqin "Sarah" Gong, a UW-Madison professor of biomedical engineering, Cai's group addressed two key barriers to using wood-derived materials in an electronics setting: surface smoothness and thermal expansion.


"You don't want it to expand or shrink too much. Wood is a natural hygroscopic material and could attract moisture from the air and expand," Cai says. "With an epoxy coating on the surface of the CNF, we solved both the surface smoothness and the moisture barrier."

Gong and her students also have been studying bio-based polymers for more than a decade. CNF offers many benefits over current chip substrates, she says.


"The advantage of CNF over other polymers is that it's a bio-based material and most other polymers are petroleum-based polymers. Bio-based materials are sustainable, bio-compatible and biodegradable," Gong says. "And, compared to other polymers, CNF actually has a relatively low thermal expansion coefficient."


Beyond Moore's law: Even after Moore’s law ends, chip costs could still halve every few years


There is a popular misconception about Moore’s law (that the number of transistors on a chip doubles every two years) which has led many to conclude that the 50-year-old prognostication is due to end shortly. This doubling of processing power, for the same cost, has continued apace since Gordon Moore, one of Intel's founders, observed the phenomenon in 1965. At the time, a few hundred transistors could be crammed on a sliver of silicon. Today’s chips can carry billions.
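The jump from hundreds of transistors to billions is simple compounding, and a quick sanity check confirms the claim; the starting count of 500 in 1965 is an assumed round number, not a precise historical figure.

```python
# Back-of-the-envelope check: doubling every two years from a few hundred
# transistors in 1965 lands in the tens of billions by 2015, consistent
# with modern chips. Starting count is an illustrative assumption.

def transistors(start_count: int, start_year: int, year: int,
                doubling_years: float = 2.0) -> float:
    """Project transistor count under a fixed doubling period."""
    return start_count * 2 ** ((year - start_year) / doubling_years)

projected = transistors(start_count=500, start_year=1965, year=2015)
print(f"projected 2015 transistor count: {projected:.2e}")  # ~1.7e10
```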


Whether Moore’s law is coming to an end is moot. As far as physical barriers to further shrinkage are concerned, there is no question that, having been made smaller and smaller over the decades, crucial features within transistors are approaching the size of atoms. Indeed, quantum and thermodynamic effects that occur at such microscopic dimensions have loomed large for several years.


Until now, integrated circuits have used a two-dimensional (planar) structure, with a metal gate mounted across a flat, conductive channel of silicon. The gate controls the current flowing from a source electrode at one end of the channel to a drain electrode at the other end. A small voltage applied to the gate lets current flow through the transistor. When there is no voltage on the gate, the transistor is switched off. These two binary states (on and off) are the ones and zeros that define the language of digital devices.


However, when transistors are shrunk beyond a certain point, electrons flowing from the source can tunnel their way through the insulator protecting the gate, instead of flowing direct to the drain. This leakage current wastes power, raises the temperature and, if excessive, can cause the device to fail. Leakage becomes a serious problem when insulating barriers within transistors approach thicknesses of 3 nanometres (nm) or so. Below that, leakage increases exponentially, rendering the device pretty near useless.
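The exponential growth of leakage below ~3 nm can be made concrete with a toy tunnelling model; the decay constant below is an illustrative assumption chosen only to show the trend, not a measured device parameter.

```python
import math

# Toy gate-leakage model: tunnelling current grows roughly exponentially
# as the insulating barrier thins, I(t) ∝ exp(-k * t). Normalized to the
# ~3 nm barrier the text cites; k is an illustrative assumption.

def relative_leakage(thickness_nm: float, k_per_nm: float = 2.5) -> float:
    """Leakage relative to a 3 nm barrier, under I ∝ exp(-k * t)."""
    return math.exp(-k_per_nm * (thickness_nm - 3.0))

for t in (3.0, 2.5, 2.0, 1.0):
    print(f"{t:.1f} nm barrier -> {relative_leakage(t):8.1f}x the 3 nm leakage")
```

Each half-nanometer shaved off the barrier multiplies leakage by the same factor, which is why shrinking past this point "renders the device pretty near useless" rather than merely making it a bit worse.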


Intel, which sets the pace for the semiconductor industry, started preparing for the leakage problem several “nodes” (changes in feature size) ago. At the time, it was still making 32nm chips. The solution adopted was to turn a transistor’s flat conducting channel into a vertical fence (or fin) that stood proud of the substrate. Instead of just one small contact patch, this gave the gate straddling the fence three contact areas (a large one on either side of the fence and a smaller one across the top). With more control over the current flowing through the channel, leakage is reduced substantially. Intel reckons “Tri-Gate” processors switch 37% faster and use 50% less juice than conventional ones.


Having introduced the Tri-Gate transistor design (now known generically as FinFET) with its 22nm node, Intel is using the same three-dimensional architecture in its current 14nm chips, and expects to do likewise with its 10nm ones, due out later this year and in mainstream production by the middle of 2016. Beyond that, Intel says it has some ideas about how to make 7nm devices, but has yet to reveal details. The company’s road map shows question marks next to future 7nm and 5nm nodes, and peters out shortly thereafter.


At a recent event celebrating the 50th anniversary of Moore’s law, Intel’s 86-year-old chairman emeritus said his law would eventually collapse, but that “good engineering” might keep it afloat for another five to ten years. Mr Moore was presumably referring to further refinements in Tri-Gate architecture. No doubt he was also alluding to advanced fabrication processes, such as “extreme ultra-violet lithography” and “multiple patterning”, which seemingly achieve the impossible by being able to print transistor features smaller than the optical resolution of the printing system itself.


The bullet that can change direction mid-air: US military develops self-guided 'smart bullet'


You know the phrase "dodging a bullet"? Forget about it. Probably not going to happen anymore. The U.S. military said this week it has made great progress in its effort to develop a self-steering bullet. In February, the "smart bullets" -- .50-caliber projectiles equipped with optical sensors -- passed their most successful round of live-fire tests to date, according to the Defense Advanced Research Projects Agency, or DARPA. In the tests, an experienced marksman "repeatedly hit moving and evading targets," a DARPA statement said. "Additionally," the statement said, "a novice shooter using the system for the first time hit a moving target." In other words, now you don't even have to be a good shot to hit the mark.


The system has been developed by DARPA's Extreme Accuracy Tasked Ordnance program, known as EXACTO. "True to DARPA's mission, EXACTO has demonstrated what was once thought impossible: the continuous guidance of a small-caliber bullet to target," said Jerome Dunn, DARPA program manager.


"This live-fire demonstration from a standard rifle showed that EXACTO is able to hit moving and evading targets with extreme accuracy at sniper ranges unachievable with traditional rounds. Fitting EXACTO's guidance capabilities into a small .50-caliber size is a major breakthrough and opens the door to what could be possible in future guided projectiles across all calibers," Dunn said.


Videos supplied by DARPA show the bullets making sharp turns in midair as they pursue their targets. It all conjures up images of a cartoon character frantically fleeing a bullet that follows him wherever he goes. Only, these bullets are traveling at hundreds of miles per hour. And even the Road Runner can't run that fast. DARPA says the smart bullets will also help shooters who are trying, for example, to hit targets in high winds. The goals of the EXACTO program are giving shooters accuracy at greater distances, engaging targets sooner and enhancing the safety of American troops, DARPA said.


Researchers Create Lens to Turn Smartphone into Microscope - for 3 Cents


Researchers at the University of Houston have created an optical lens that can be placed on an inexpensive smartphone to magnify images 120 times, all for just 3 cents a lens. Wei-Chuan Shih, assistant professor of electrical and computer engineering at UH, said the lens can work as a microscope, and the cost and ease of using it – it attaches directly to a smartphone camera lens, without the use of any additional device – make it ideal for use with younger students in the classroom. It also could have clinical applications, allowing small or isolated clinics to share images with specialists located elsewhere, he said.


In a paper published in the Journal of Biomedical Optics, Shih and three graduate students describe how they produced the lenses and examine the image quality. Yu-Lung Sung, a doctoral candidate, served as first author; others involved in the study include Jenn Jeang, who will start graduate school at Liberty University in Virginia this fall, and Chia-Hsiung Lee, a former graduate student at UH now working in the technology industry in Taiwan.


The lens is made of polydimethylsiloxane (PDMS), a polymer with the consistency of honey, dropped precisely on a preheated surface to cure. Lens curvature – and therefore, magnification – depends on how long and at what temperature the PDMS is heated, Sung said.
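Thin-lens optics gives a feel for why curvature sets magnification: a plano-convex droplet of index n has focal length f = R / (n - 1), and a magnifier's power is conventionally quoted against the 250 mm near point. The refractive index and radii below are illustrative values, not measurements from the paper.

```python
# Rough optics sketch of why droplet curvature controls magnification.
# Plano-convex thin lens: f = R / (n - 1); magnifier power M = 250 mm / f.
# The index and radii are illustrative assumptions, not paper values.

N_PDMS = 1.43            # approximate refractive index of cured PDMS

def focal_length_mm(radius_mm: float, n: float = N_PDMS) -> float:
    """Thin plano-convex lens focal length, f = R / (n - 1)."""
    return radius_mm / (n - 1)

def magnification(radius_mm: float) -> float:
    """Magnifier power relative to the 250 mm near point."""
    return 250.0 / focal_length_mm(radius_mm)

# A more tightly curved droplet (smaller R) magnifies more:
for r in (2.0, 1.0, 0.9):
    print(f"R = {r:.1f} mm -> {magnification(r):.0f}x")
```

Under these assumptions, a droplet with roughly a 0.9 mm radius of curvature lands near the 120X figure reported, which is why controlling cure time and temperature (and hence curvature) controls the magnification.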

The resulting lenses are flexible, similar to a soft contact lens, although they are thicker and slightly smaller.


“Our lens can transform a smartphone camera into a microscope by simply attaching the lens without any supporting attachments or mechanism,” the researchers wrote. “The strong, yet non-permanent adhesion between PDMS and glass allows the lens to be easily detached after use. An imaging resolution of 1 micrometer with an optical magnification of 120X has been achieved.”


Conventional lenses are produced by mechanical polishing or injection molding of materials such as glass or plastics. Liquid lenses are available, too, but those that aren’t cured require special housing to remain stable. Other types of liquid lenses require an additional device to adhere to the smartphone.


Weather could be controlled using lasers

Scientists are attempting to control the weather by using lasers to create clouds, induce rain and even trigger lightning.

Professor Jean-Pierre Wolf and Dr Jerome Kasparian, both biophotonics experts at the University of Geneva, have now organised a conference at the WMO next month in an attempt to find ways of speeding up research on the topic. They said: “Ultra-short lasers launched into the atmosphere have emerged as a promising prospective tool for weather modulation and climate studies.

“Such prospects include lightning control and laser-assisted condensation.”


There is a long history of attempts by scientists to control the weather, including using techniques such as cloud seeding.

This involves spraying small particles and chemicals into the air to induce water vapour to condense into clouds.


In the 1960s the United States experimented with using silver iodide in an attempt to weaken hurricanes before they made landfall. The USSR also reportedly flew cloud seeding missions in an attempt to create rain clouds to protect Moscow from radioactive fallout from the Chernobyl nuclear disaster.


More recently the Russian Air force has also been reported to have used bags of cement to seed clouds.


Before the 2008 Olympic Games in Beijing, the Chinese authorities used aircraft and rockets to release chemicals into the atmosphere.

Other countries have been reported to be experimenting with cloud seeding to prevent flooding or smog.


However, Professor Wolf, Dr Kasparian and their colleagues believe that lasers could provide an easier and more controllable method of changing the weather. They began studying lasers for their use as a way of monitoring changes in the air and detecting aerosols high in the atmosphere.


Experiments using varying pulses of near infra-red laser light and ultraviolet lasers have, however, shown that they cause water to condense. They have subsequently found the lasers induce tiny ice crystals to form, which are a crucial step in the formation of clouds and eventual rainfall.


In new research published in the Proceedings of the National Academy of Sciences, Professor Wolf said the laser beams create plasma channels in the air that cause ice to form.


Audi Has Made Diesel From Water And Carbon Dioxide


It’s the holy grail in energy production: produce a fuel that is both carbon neutral and can be poured directly into our current cars without the need to retrofit. There are scores of companies out there trying to do just that using vegetable oil, algae, and even the microbes found in panda poop to turn bamboo into fuel.


This week, German car manufacturer Audi declared that it has been able to create an "e-diesel", a synthetic diesel made using renewable electricity, from nothing more than water and carbon dioxide. After a commissioning phase of just four months, the plant in Dresden operated by clean-tech company Sunfire has managed to produce its first batch of what they’re calling “blue crude.” The product liquid is composed of long-chain hydrocarbon compounds, similar to fossil fuels, but free from sulfur and aromatics, and therefore burns soot-free.


The first step in the process involves harnessing renewable energy from solar, wind or hydropower. This energy is then used to heat water to temperatures in excess of 800˚C (1472˚F). The steam is then broken down into oxygen and hydrogen through high-temperature electrolysis, a process in which an electric current passed through the steam splits it into its constituent elements.


The hydrogen is then removed and mixed with carbon monoxide under high heat and pressure, creating a hydrocarbon product they’re calling "blue crude." Sunfire claim that the synthetic fuel is not only more environmentally friendly than fossil fuel, but that the efficiency of the overall process—from renewable power to liquid hydrocarbon—is very high at around 70%. The e-diesel can then be either mixed with regular diesel, or used as a fuel in its own right.
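As a rough sanity check on those numbers, here is a back-of-envelope sketch of the renewable electricity needed per liter of e-diesel at the quoted 70% power-to-liquid efficiency. The diesel energy density is a generic literature value, not a Sunfire or Audi figure.

```python
# Back-of-envelope energy balance for a power-to-liquid plant.
# Only the 70% efficiency comes from the article; the energy
# density is an approximate literature value for diesel.

DIESEL_ENERGY_MJ_PER_L = 35.8   # approximate lower heating value of diesel
PLANT_EFFICIENCY = 0.70         # power-to-liquid efficiency quoted above

def electricity_per_liter_kwh(efficiency=PLANT_EFFICIENCY):
    """Renewable electricity needed to synthesize one liter of e-diesel."""
    energy_in_mj = DIESEL_ENERGY_MJ_PER_L / efficiency
    return energy_in_mj / 3.6   # 1 kWh = 3.6 MJ

print(round(electricity_per_liter_kwh(), 1))  # prints 14.2 (kWh per liter)
```

At roughly 14 kWh of electricity per liter, the economics hinge on cheap surplus renewable power, which is why the storage angle mentioned below matters.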


But all may not be as it seems. The process used by Audi is actually the Fischer-Tropsch process, which has been known to scientists since the 1920s. It was even used by the Germans to turn coal into diesel during the Second World War when fuel supplies ran short. The process is currently used by many different companies all around the world, especially in countries where reserves of oil are low but reserves of other fossil fuels, such as gas and coal, are high.


And it would seem that Audi aren’t the first to think of producing carbon-neutral fuels this way, either. Another German company, Choren, has already made an attempt at producing biofuel from synthesis gas using the Fischer-Tropsch process. Backed by Shell and Volkswagen, the company had all the support and funding it needed, but in 2011 it filed for bankruptcy due to impracticalities in the process.


Audi readily admits that none of the processes they use are new, but claims it’s how they’re going about it that is. They say that increasing the temperature at which the water is split increases the efficiency of the process, and that the waste heat can then be recovered. Whilst their announcement might not herald a new fossil fuel-free era, the technology of turning green power into synthetic fuel could have applications as a battery to store excess energy produced by renewables.

Daniel Lindahl's curator insight, May 25, 1:47 PM

Audi has successfully made a clean, carbon neutral form of diesel fuel known as "e-diesel". This will drastically change cars and fuel research in the future. Developments like these show the growth and change of industry as a whole. 


Technology Trends - Singularity Blog: Most Anticipated New Technologies for 2015/2016

Future timeline, a timeline of humanity's future, based on current trends, long-term environmental changes, advances in technology such as Moore's Law, the latest medical advances, and the evolving geopolitical landscape.


Ursula Sola de Hinestrosa's curator insight, April 24, 4:50 PM

New technologies

AugusII's curator insight, April 25, 6:15 PM

Being up to date is a must; learning about trends is useful.


New instrument for imaging the magnetosensitivity of photochemical reactions on a submicron scale


Researchers at the University of Tokyo have succeeded in developing a new microscope capable of observing the magnetic sensitivity of photochemical reactions believed to be responsible for the ability of some animals to navigate in Earth's magnetic field, on a scale small enough to follow these reactions taking place inside sub-cellular structures.


Several species of insects, fish, birds and mammals are believed to be able to detect magnetic fields, an ability known as magnetoreception. For example, birds are able to sense Earth's magnetic field and use it to help navigate when migrating. Recent research suggests that a group of proteins called cryptochromes, and particularly the molecule flavin adenine dinucleotide (FAD) that forms part of the cryptochrome, are implicated in magnetoreception. When cryptochromes absorb blue light, they can form what are known as radical pairs. The magnetic field around the cryptochromes influences the spin states of these radical pairs, altering their reactivity. However, to date there has been no way to measure the effect of magnetic fields on radical pairs in living cells.
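To see why such weak-field effects are surprising, a quick calculation using standard physical constants shows the electron spin precession frequency in Earth's field and how far the Zeeman splitting sits below thermal energy. The radical-pair mechanism has to work through coherent spin dynamics, because equilibrium energetics would wash the effect out entirely.

```python
# Electron spin in Earth's magnetic field, using CODATA constants.
# The tiny Zeeman/kT ratio shows why equilibrium thermodynamics
# cannot explain magnetoreception.

H = 6.626e-34     # Planck constant, J*s
MU_B = 9.274e-24  # Bohr magneton, J/T
K_B = 1.381e-23   # Boltzmann constant, J/K
G_E = 2.0023      # free-electron g-factor

def larmor_frequency_hz(b_tesla):
    """Electron spin precession frequency in a field of b_tesla."""
    return G_E * MU_B * b_tesla / H

def zeeman_vs_thermal(b_tesla, temp_k=300.0):
    """Ratio of the Zeeman splitting to thermal energy kT."""
    return G_E * MU_B * b_tesla / (K_B * temp_k)

earth = 50e-6  # typical geomagnetic field strength, tesla
print(f"precession: {larmor_frequency_hz(earth) / 1e6:.2f} MHz")
print(f"Zeeman/kT:  {zeeman_vs_thermal(earth):.1e}")
```

The spins precess at about 1.4 MHz, while the Zeeman energy is some seven orders of magnitude below kT, which is exactly the regime where only spin-coherent chemistry can register the field.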


The research group of Associate Professor Jonathan Woodward at the Graduate School of Arts and Sciences specializes in radical pair chemistry and the magnetic sensitivity of biological systems. In this latest research, PhD student Lewis Antill made measurements using a special microscope to detect radical pairs formed from FAD, and the influence of very weak magnetic fields on their reactivity, in volumes of less than 4 millionths of a billionth of a liter (4 femtoliters). This was possible using a technique the group developed called TOAD (transient optical absorption detection) imaging, employing a microscope built by postdoctoral research associate Dr. Joshua Beardmore based on a design by Beardmore and Woodward.


"In the future, using another mode of the new microscope called MIM (magnetic intensity modulation), also introduced in this work, it may be possible to directly image only the magnetically sensitive regions of living cells," says Woodward. "The new imaging microscope developed in this research will enable the study of the magnetic sensitivity of photochemical reactions in a variety of important biological and other contexts, and hopefully help to unlock the secrets of animals' miraculous magnetic sense."


Biochemists devise new technique for blueprinting cell membrane proteins

Biochemists from Trinity College Dublin have devised a new technique that will make the difficult but critical job of blueprinting certain proteins considerably faster, easier and cheaper.


The breakthrough will make a big splash in the field of drug discovery and development, where precise protein structure blueprints can help researchers understand how individual proteins work. Critically, these blueprints can show weaknesses that allow drug developers to draw up specific battle plans in the fight against diseases and infections.

Professor of Membrane Structural and Functional Biology at Trinity, Martin Caffrey, is the senior author of the research, which has just been published in the international peer-reviewed journal Acta Crystallographica D.


He said: "This is a truly exciting development. We have demonstrated the method on a variety of cell membrane proteins, some of which act as transporters. It will work with existing equipment at a host of facilities worldwide, and it is very simple to implement."


Over 50% of drugs on the market target cell membrane proteins, which are vital for the everyday functioning of complex cellular processes. They act as transporters to ensure that specific molecules enter and leave our cells, as signal interpreters important in decoding messages and initiating responses, and as agents that speed up appropriate responses.


The major challenge facing researchers is the production of large membrane protein crystals, which are used to determine the precise 3-D structural blueprints. That challenge has now been lessened thanks to the Trinity biochemists' invention: the in meso in situ serial crystallography (IMISX) method.


Previously, researchers needed to harvest protein crystals and cool them to inhospitably low temperatures in a complex series of steps that was damaging, inefficient and prone to error. The IMISX method allows researchers to determine structural blueprints as and where the crystals grow.


Professor Caffrey added: "The best part of this is that these proteins are as close to being 'live' and yet packaged in the crystals we need to determine their structure as they could ever be. As a result, this breakthrough is likely to supplant existing protocols and will make the early stages of drug development considerably more efficient."


DNA methylation test makes it easier to pinpoint identical twin responsible for a crime

You can run, but you can't hide: A new DNA test makes it easier to differentiate between identical twins in forensic cases.


Although short tandem repeat profiling is extremely powerful in identifying individuals from crime scene stains, it cannot differentiate between monozygotic (MZ) twins. Efforts to address this include mutation analysis through whole genome sequencing and DNA methylation studies. Methylation of DNA is affected by environmental factors, so as MZ twins age, their DNA methylation patterns diverge. These patterns can be characterized by bisulfite treatment followed by pyrosequencing, but that approach is time-consuming and expensive and therefore unlikely to be widely used by investigators.


High-resolution melt curve analysis offers a faster alternative: if two sequences differ after bisulfite conversion, then in theory their melting temperatures should differ too. The aim of a recent study was therefore to assess whether high-resolution melt curve analysis can differentiate between MZ twins. Five sets of MZ twins provided buccal swabs that underwent extraction, quantification, bisulfite treatment, polymerase chain reaction amplification and high-resolution melting curve analysis targeting two markers, Alu-E2F3 and Alu-SP. Significant differences were observed between all MZ twins targeting Alu-E2F3 and in four of five sets targeting Alu-SP (P < 0.05). The study thus demonstrated that bisulfite treatment followed by high-resolution melting curve analysis can differentiate between MZ twins.
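The logic behind the melt-curve readout can be sketched numerically: bisulfite conversion turns unmethylated cytosines into thymines, lowering GC content and therefore the duplex melting temperature. The sequence and the Marmur-style Tm formula below are generic illustrations, not the study's actual markers or calibrated HRM chemistry.

```python
# Sketch: why methylation differences show up as melting-temperature
# differences after bisulfite conversion. Illustrative only.

def bisulfite_convert(seq, methylated_positions=()):
    """Unmethylated C reads out as T; methylated C is protected."""
    return "".join(
        base if base != "C" or i in methylated_positions else "T"
        for i, base in enumerate(seq)
    )

def melting_temp(seq):
    """Basic Tm estimate for a DNA duplex (Marmur-style formula)."""
    gc = sum(seq.count(b) for b in "GC")
    return 64.9 + 41.0 * (gc - 16.4) / len(seq)

twin_a = "ATCGCGATCGGCCGCGTACGCGATCGCGTACG"  # hypothetical marker region
fully_methylated = bisulfite_convert(twin_a, range(len(twin_a)))
unmethylated = bisulfite_convert(twin_a)
delta = melting_temp(fully_methylated) - melting_temp(unmethylated)
print(f"Tm shift from methylation: {delta:.1f} °C")
```

Real twin-to-twin methylation differences are far subtler than this all-or-nothing example, which is why high-resolution melt instrumentation is needed.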


Industry 4.0: A 'fourth industrial revolution' is about to begin (in Germany)


Factories are about to get smarter. The machines that make everything from our phones to our sandwiches rely on creaking technology -- but not for long. "We will have a fourth industrial revolution," says professor Detlef Zühlke, a lead researcher in the factories of the future. And that fourth revolution is all about making factories less stupid.


Zühlke and his team have spent the past decade developing a new standard for factories, a sort of internet of things for manufacturing. "There will be hundreds of thousands of computers everywhere," Zühlke tells WIRED.co.uk. "Some of these technologies will be disruptive".


In Germany this impending revolution is known as Industry 4.0, with the government shovelling close to €500m (£357m) into developing the technology. In China, Japan, South Korea and the USA big steps are also being made to create global standards and systems that will make factories smarter. The rest of the world, Zühlke claims, is "quite inactive". Zühlke is head of one of the largest research centers for smart factory technology in the world. The facility, located at the German Research Center for Artificial Intelligence (DFKI) in the south-western city of Kaiserslautern, houses a row of boxes packed with wires and circuitry.


At first it looks like any factory, but then you notice all the machines are on wheels. This, Zühlke explains, is the factory of the future. His vision is based on cyber physical systems, combining mechanical systems with electronics to connect everything together. And the wheels? One day different modules in the factory could potentially drive themselves around to allow factories to alter the production line. For now, moving the modules is done by humans.


The demo factory is currently producing business card holders. Each module performs a different task and they can be rearranged into any order, with the modules able to understand when it is their turn to carry out a task. A storage module feeds into an engraver, a robot arm, a laser marker, a quality control module and so forth. New modules can be added at any time, a process Zühlke compares to playing with Lego.


The idea owes a lot to how we've all been using home computers for years. For more than a decade it has been easy to plug in a new printer or other USB device and have it instantly recognized. On a computer this is known as "plug and play", in a factory Zühlke describes it as "plug and produce". A key breakthrough has been the development of a USB port on an industrial scale, Zühlke explains. This cable, which looks more like a giant hose, sends data and pressurized air to modules in a smart factory, with a control centre receiving information back.
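The "plug and produce" idea can be sketched in a few lines; the Module and ProductionLine classes below are purely illustrative, not part of any Industry 4.0 standard.

```python
# Toy "plug and produce" line: each module announces its task when
# connected, and the line routes a workpiece through whatever modules
# are currently plugged in, in order. Illustrative names only.

class Module:
    def __init__(self, name, task):
        self.name, self.task = name, task

    def process(self, workpiece):
        workpiece.append(self.task)  # record this module's operation
        return workpiece

class ProductionLine:
    def __init__(self):
        self.modules = []

    def plug_in(self, module):
        """Hot-plug a new module; no reconfiguration needed."""
        self.modules.append(module)

    def run(self, workpiece):
        for m in self.modules:
            workpiece = m.process(workpiece)
        return workpiece

line = ProductionLine()
for name, task in [("storage", "feed"), ("engraver", "engrave"),
                   ("robot-arm", "assemble"), ("laser", "mark"),
                   ("qc", "inspect")]:
    line.plug_in(Module(name, task))

print(line.run([]))  # prints ['feed', 'engrave', 'assemble', 'mark', 'inspect']
```

Rearranging the demo factory's modules corresponds to reordering the `plug_in` calls, which is exactly the Lego-like flexibility Zühlke describes.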


In two years Zühlke expects the first wave of factories using smart technology to be fully operational, with widespread adoption in factories around the world in the next decade. For now, smart factories remain a research project.


Chemists discover key reaction mechanism behind the highly touted sodium-oxygen battery


Chemists at the University of Waterloo have discovered the key reaction that takes place in sodium-air batteries, which could pave the way for development of the so-called holy grail of electrochemical energy storage. The key lies in the Nazar group's discovery of the so-called proton phase transfer catalyst. By isolating its role in the battery's discharge and recharge reactions, Nazar and colleagues were not only able to boost the battery's capacity, they achieved a near-perfect recharge of the cell. When the researchers eliminated the catalyst from the system, they found the battery no longer worked. Unlike the traditional solid-state battery design, a metal-oxygen battery uses a gas cathode that takes oxygen and combines it with a metal such as sodium or lithium to form a metal oxide, storing electrons in the process. Applying an electric current reverses the reaction and reverts the metal to its original form.


Understanding how sodium-oxygen batteries work has implications for developing the more powerful lithium-oxygen battery, which has been seen as the holy grail of electrochemical energy storage. The results appear in the journal Nature Chemistry.


"Our new understanding brings together a lot of different, disconnected bits of a puzzle that have allowed us to assemble the full picture," says Nazar, a Chemistry professor in the Faculty of Science.



Medical ‘millirobots’ could replace invasive surgery


Using a “Gauss gun” principle, an MRI machine drives a “millirobot” through a hypodermic needle into your spinal cord and guides it into your brain to release life-threatening fluid buildup.


University of Houston researchers have developed a concept for MRI-powered millimeter-size “millirobots” that could one day perform unprecedented minimally invasive medical treatments. This technology could be used to treat hydrocephalus, for example. Current treatments require drilling through the skull to implant pressure-relieving shunts, said Aaron T. Becker, assistant professor of electrical and computer engineering at the University of Houston. But MRI scanners alone don’t produce enough force to pierce tissues (or insert needles). So the researchers drew upon the principle of the “Gauss gun.”


Here’s how a Gauss gun works: a single steel ball rolls down a chamber, setting off a chain reaction when it smashes into the next ball, and so on, until the last ball flies forward, moving much more quickly than the initial ball. Based on that concept, the researchers imagine a medical robot with a barrel self-assembled from three small high-impact 3D-printed plastic components, with slender titanium rod spacers separating two steel balls.
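The energy bookkeeping of an idealized Gauss-gun chain is easy to sketch: each stage adds the kinetic energy the incoming ball picks up being pulled toward the next stage (in the MRI version, the scanner's field supplies this energy), and an ideal stage passes all of it to the ejected ball. The energy-per-stage and ball mass below are made-up illustrative numbers, not from the Houston design.

```python
# Idealized Gauss-gun chain: total kinetic energy grows by a fixed
# amount per stage, so the exit speed follows from energy conservation.
# All parameter values are illustrative.

import math

def exit_speed(v0, stages, energy_per_stage_j, ball_mass_kg):
    """Speed of the final ball after an ideal n-stage chain."""
    total_energy = 0.5 * ball_mass_kg * v0**2 + stages * energy_per_stage_j
    return math.sqrt(2 * total_energy / ball_mass_kg)

v = exit_speed(v0=0.1, stages=3, energy_per_stage_j=5e-4, ball_mass_kg=1e-3)
print(f"exit speed: {v:.2f} m/s from a 0.10 m/s start")
```

Even a slow initial ball exits an order of magnitude faster, which is the point: the MRI field alone cannot push hard enough to pierce tissue, but the staged chain concentrates that energy into one impact.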




Becker was first author of a paper presented at ICRA, the conference of the IEEE Robotics and Automation Society, nominated for best conference paper and best medical robotics paper. “Hydrocephalus, among other conditions, is a candidate for correction by our millirobots because the ventricles are fluid-filled and connect to the spinal canal,” Becker said. “Our noninvasive approach would eventually require simply a hypodermic needle or lumbar puncture to introduce the components into the spinal canal, and the components could be steered out of the body afterwards.”


Future work will focus on exploring clinical context, miniaturizing the device, and optimizing material selection.


New Device Delivers Medicine One Molecule at a Time


It's smaller than your index finger, and it might be the future of implantable devices to treat a fractured spine, pinched nerve, or neurological disorder like epilepsy.

As they report in the journal Science, a team of engineers and medical researchers in Sweden has just designed a pinpoint-accurate implantable drug pump. It delivers medicine with such precision that it requires only 1 percent of the drug doctors would otherwise need to deploy. As demonstrated in tests on seven rats, the tiny pump can attach directly to the spine (at the root of a nerve) and inject its medicine molecule by molecule.


"In theory, we could tell you exactly how many molecules our device is delivering," says Amanda Jonsson, the bioelectronics engineer at Sweden's Linköping University who led the team. "These very small dosages could help avoid drug side effects, or be useful for medicines that we simply can't use at larger doses."


The technology is based on a compact but complicated piece of laboratory equipment called an ion pump. To put it simply, as electric current enters the ion pump one electron at a time, medicine is flung out the other end one molecule at a time. One caveat: Because of this setup, only medicines that can be electrically charged can be used with the pump. But that includes more pain medicines than you might think, including morphine and other opiates.
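Because delivery is tied directly to charge, the ideal dose is simple arithmetic: current multiplied by time, divided by the elementary charge. The current and duration below are illustrative values, not figures from the paper.

```python
# Ideal ion-pump dose count: one singly charged drug molecule moved
# per elementary charge of current. Illustrative operating point.

E_CHARGE = 1.602e-19  # elementary charge, coulombs

def molecules_delivered(current_a, seconds):
    """Ideal count of singly charged molecules moved by the ion pump."""
    return current_a * seconds / E_CHARGE

n = molecules_delivered(current_a=1e-9, seconds=1.0)  # 1 nA for 1 s
print(f"{n:.2e} molecules")
```

Even a nanoamp for one second corresponds to billions of molecules, so "counting" doses electrically still reaches pharmacologically meaningful quantities while staying far below conventional dosing.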


Internet-of-Things Radio Chip Consumes Very Little Power to Save a Lot


At this year’s Consumer Electronics Show in Las Vegas, the big theme was the “Internet of things” — the idea that everything in the human environment, from kitchen appliances to industrial equipment, could be equipped with sensors and processors that can exchange data, helping with maintenance and the coordination of tasks.

Realizing that vision, however, requires transmitters that are powerful enough to broadcast to devices dozens of yards away but energy-efficient enough to last for months — or even to harvest energy from heat or mechanical vibrations.


“A key challenge is designing these circuits with extremely low standby power, because most of these devices are just sitting idling, waiting for some event to trigger a communication,” explains Anantha Chandrakasan, the Joseph F. and Nancy P. Keithley Professor in Electrical Engineering at MIT. “When it’s on, you want to be as efficient as possible, and when it’s off, you want to really cut off the off-state power, the leakage power.”


This week, at the Institute of Electrical and Electronics Engineers’ International Solid-State Circuits Conference, Chandrakasan’s group will present a new transmitter design that reduces off-state leakage 100-fold. At the same time, it provides adequate power for Bluetooth transmission, or for the even longer-range 802.15.4 wireless-communication protocol.


“The trick is that we borrow techniques that we use to reduce the leakage power in digital circuits,” Chandrakasan explains. The basic element of a digital circuit is a transistor, in which two electrical leads are connected by a semiconducting material, such as silicon. In their native states, semiconductors are not particularly good conductors. But in a transistor, the semiconductor has a second wire sitting on top of it, which runs perpendicularly to the electrical leads. Sending a positive charge through this wire — known as the gate — draws electrons toward it. The concentration of electrons creates a bridge that current can cross between the leads.


To cut the leakage, the researchers apply a negative charge to the gate when the transmitter is idle, driving electrons away from the leads. To generate the negative charge efficiently, the MIT researchers use a circuit known as a charge pump, which is a small network of capacitors — electronic components that can store charge — and switches. When the charge pump is exposed to the voltage that drives the chip, charge builds up in one of the capacitors. Throwing one of the switches connects the positive end of the capacitor to the ground, causing a current to flow out the other end. This process is repeated over and over. The only real power drain comes from throwing the switch, which happens about 15 times a second.


To make the transmitter more efficient when it’s active, the researchers adopted techniques that have long been a feature of work in Chandrakasan’s group. Ordinarily, the frequency at which a transmitter can broadcast is a function of its voltage. But the MIT researchers decomposed the problem of generating an electromagnetic signal into discrete steps, only some of which require higher voltages. For those steps, the circuit uses capacitors and inductors to increase voltage locally. That keeps the overall voltage of the circuit down, while still enabling high-frequency transmissions.


What those efficiencies mean for battery life depends on how frequently the transmitter is operational. But if it can get away with broadcasting only every hour or so, the researchers’ circuit can reduce power consumption 100-fold.
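The duty-cycle arithmetic behind that 100-fold figure can be sketched directly: when a radio idles almost all the time, leakage dominates the average power, so cutting leakage cuts consumption nearly in proportion. The active power, leakage figures and burst length below are illustrative, not the MIT group's measured values.

```python
# Average-power model for a heavily duty-cycled radio.
# Illustrative numbers: 1 mW active, a 1 ms burst once per hour,
# and leakage cut from 1 uW to 10 nW.

def average_power_w(active_w, leak_w, duty_cycle):
    """Time-averaged power: active while transmitting, leaking otherwise."""
    return active_w * duty_cycle + leak_w * (1 - duty_cycle)

duty = 1e-3 / 3600.0  # one 1 ms transmission per hour
before = average_power_w(active_w=1e-3, leak_w=1e-6, duty_cycle=duty)
after = average_power_w(active_w=1e-3, leak_w=1e-8, duty_cycle=duty)
print(f"improvement: {before / after:.0f}x")  # prints: improvement: 97x
```

At this duty cycle the active burst contributes almost nothing to the average, which is why a 100-fold leakage reduction translates into nearly a 100-fold battery-life gain.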


Scientists tune X-rays with tiny mirrors


The secret of X-ray science – like so much else – is in the timing. Scientists at the U.S. Department of Energy’s (DOE) Argonne National Laboratory have created a new way of manipulating high-intensity X-rays, which will allow researchers to select extremely brief but precise X-ray bursts for their experiments.


The new technology, developed by a team of scientists from Argonne’s Center for Nanoscale Materials (CNM) and the Advanced Photon Source (APS), involves a small microelectromechanical system (MEMS) mirror only as wide as a few hairs.


MEMS are microscale devices fabricated using silicon wafers in facilities that make integrated circuits. The MEMS device acts as an ultrafast mirror reflecting X-rays at precise times and specific angles.


“Extremely compact devices such as this promise a revolution in our ability to manipulate photons coming from synchrotron light sources, not only providing an on-off switch enabling ultrahigh time-resolution studies, but ultimately promising new ways to steer, filter, and shape X-ray pulses as well,” said Stephen Streiffer, Associate Laboratory Director for Photon Sciences and Director of the Advanced Photon Source. “This is a premier example of the innovation that results from collaboration between nanoscientists and X-ray scientists.”


The device that the Argonne researchers developed essentially consists of a tiny diffracting mirror that oscillates at high speeds. As the mirror tilts rapidly back and forth, it creates an optical filter that selects only the X-ray pulses desired for the experiment. Only the light that is diffracted from the mirror goes on to hit the sample, and by adjusting the speed at which the MEMS mirror oscillates, researchers can control the timing of the X-ray pulses.


According to Argonne nanoscientist Daniel Lopez, one of the lead authors on the paper, the device works because of the relationship between the frequency of the mirror’s oscillation and the timing of the positioning of the perfect angle for the incoming X-ray. “If you sit on a Ferris wheel holding a mirror, you will see flashes of light every time the wheel is at the perfect spot for sunlight to hit it. The speed of the Ferris wheel determines the frequency of the flashes you see,” he said.
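The Ferris-wheel picture translates directly into a timing estimate: the mirror's angle sweeps sinusoidally, and X-rays are diffracted only while it lies within the narrow angular acceptance around the diffraction condition. All parameters below are illustrative, not the actual device values.

```python
# How long the "mirror" is at the right angle per sweep: for
# theta(t) = amp * sin(2*pi*f*t), the angular speed at the working
# angle sets the open window. Illustrative parameters only.

import math

def open_window_s(freq_hz, amp_rad, work_rad, accept_rad):
    """Duration per crossing for which |theta - work_rad| < accept_rad."""
    omega = 2 * math.pi * freq_hz
    # angular speed of the mirror as it sweeps through the working angle
    speed = omega * math.sqrt(max(amp_rad**2 - work_rad**2, 0.0))
    return 2 * accept_rad / speed

w = open_window_s(freq_hz=75e3, amp_rad=5e-3, work_rad=1e-3,
                  accept_rad=20e-6)
print(f"{w * 1e9:.0f} ns window per sweep")
```

With microradian-scale acceptance and kilohertz oscillation, the window is tens of nanoseconds, short enough to pick individual pulses out of a synchrotron's pulse train, and tunable simply by changing the oscillation speed.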


“The Argonne team’s work is incredibly exciting because it creates a new class of devices for controlling X-rays,” added Paul Evans, a professor of materials science at the University of Wisconsin-Madison. “They have found a way to significantly shrink the optics, which is great because smaller means faster, cheaper to make, and much more versatile.”


Researchers probe chemistry, topography and mechanics with one single instrument


The probe of an atomic force microscope (AFM) scans a surface to reveal details at a resolution 1,000 times greater than that of an optical microscope. That makes AFM the premier tool for analyzing physical features, but it cannot tell scientists anything about chemistry. For that they turn to the mass spectrometer (MS).


Now, scientists at the Department of Energy's Oak Ridge National Laboratory have combined these cornerstone capabilities into one instrument that can probe a sample in three dimensions and overlay information about the topography of its surface, the atomic-scale mechanical behavior near the surface, and the chemistry at and under the surface. This multimodal imaging will allow scientists to explore thin films of phase-separated polymers important for energy conversion and storage. Their results are published in ACS Nano, a journal of the American Chemical Society.


"Combining the two capabilities marries the best of both worlds," said project leader Olga Ovchinnikova, who co-led the study with Gary Van Berkel, head of ORNL's Organic and Biological Mass Spectrometry Group. "For the same location, you get not only precise location and physical characterization, but also precise chemical information."


Added Van Berkel, "This is the first time that we've shown that you can use multiple methods through the atomic force microscope. We demonstrated for the first time that you could collect diverse data sets together without changing probes and without changing the sample."


The new technique for functional imaging allows probing of regions on the order of billionths of meters, or nanometers, to characterize a sample's surface hills and valleys, its elasticity (or "bounciness") throughout deeper layers, and its chemical composition. Previously, AFM tips could penetrate only 20 nanometers to explore a substance's ability to expand and contract. Adding a thermal desorption probe to the mix allowed scientists to probe deeper, as the technique cooks matter off the surface and removes it as deep down as 140 nanometers. The MS's precise chemical analysis of compounds gave the new technique unprecedented ability to characterize samples.


No more science fiction: 3D holographic images are here to stay


Research efforts in nanotechnology have significantly advanced the development of display devices. Graphene, an atomic layer of carbon that won scientists Andre Geim and Konstantin Novoselov the 2010 Nobel Prize in Physics, has emerged as a key component for flexible and wearable display devices. Owing to its fascinating electronic and optical properties and high mechanical strength, graphene has mainly been used in touch screens for wearable devices such as mobiles. This technical advance has enabled devices such as smart watches, fitness bands and smart headsets to transition from science fiction into reality, even though the display is still a flat 2D one.


But wearable display devices, in particular devices with a floating display, will remain one of the most significant trends in an industry that is projected to double every two years and exceed US$12 billion in 2018.


In a paper published in Nature Communications, we show how our technology realizes a wide-viewing-angle, full-color floating 3D display in graphene-based materials. Ultimately this will help to transform wearable display devices into floating 3D displays.

A graphene enabled floating display is based on the principle of holography invented by Dennis Gabor, who was awarded the Nobel Prize in Physics in 1971. The idea of optical holography provides a revolutionary method for recording and displaying both 3D amplitude and phase of an optical wave that comes from an object of interest.


The physical realization of high-definition, wide-viewing-angle holographic 3D displays relies on the generation of a digital holographic screen composed of many small pixels. These pixels are used to bend the light carrying the information for display. The angle of bending is determined by the refractive index of the screen material, according to the holographic correlation.


The smaller the refractive-index pixels, the larger the bending angle once the beam passes through the hologram, so nanometer-sized pixels are of great significance if the reconstructed 3D object is to be vividly viewed over a wide angle. The process is complex, but the key physical step is to control the photoreduction of graphene oxides, derivatives of graphene with analogous physical structures but additional oxygen groups. Through a photoreduction process, without any temperature increase, graphene oxides can be reduced toward graphene by absorbing a single femtosecond pulsed laser beam.


During the photoreduction, a change in the refractive index can be created. Through such photoreduction we are able to create holographically correlated refractive-index pixels at the nanometer scale. This technique enables the reconstructed floating 3D object to be vividly and naturally viewed over a wide angle of up to 52 degrees.


This result corresponds to an improvement in viewing angle of one order of magnitude over currently available 3D holographic displays based on liquid-crystal phase modulators, which are limited to a few degrees. In addition, the constant refractive-index change across the visible spectrum in reduced graphene oxides enables full-color 3D display.
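The order-of-magnitude claim can be sanity-checked with the same grating relation, run in the other direction. The liquid-crystal pixel pitch of 8 µm below is a typical spatial-light-modulator value assumed for illustration, not a figure from the article:

```python
import math

def full_viewing_angle_deg(wavelength_nm: float, pitch_nm: float) -> float:
    """Full viewing angle 2 * asin(wavelength / (2 * pitch)), in degrees,
    from the standard grating equation."""
    return 2 * math.degrees(math.asin(wavelength_nm / (2 * pitch_nm)))

# A typical liquid-crystal modulator pitch (~8 micrometers, assumed here)
# versus the 52 degrees reported for the graphene-oxide hologram:
lcd = full_viewing_angle_deg(550, 8000)
print(f"LC modulator: {lcd:.1f} deg, ratio: {52.0 / lcd:.0f}x")
```

With these assumed numbers the liquid-crystal display manages only about 4 degrees, roughly 13 times less than the graphene result, consistent with the "one order of magnitude" figure in the text.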


The first hijacking of a medical telerobot raises important questions over the security of remote surgery


A crucial bottleneck that prevents life-saving surgery from being performed in many parts of the world is the lack of trained surgeons. One way to get around this is to make better use of the surgeons that are available. Sending them over great distances to perform operations is clearly inefficient because of the time spent travelling. An increasingly important alternative is telesurgery, with an expert in one place controlling a robot in another that physically performs the necessary cutting and dicing. Indeed, sales of medical robots are increasing at a rate of 20 percent per year.


But while the advantages are clear, the disadvantages have been less well explored. Telesurgery relies on cutting edge technologies in fields as diverse as computing, robotics, communications, ergonomics, and so on. And anybody familiar with these areas will tell you that they are far from failsafe.


Today, Tamara Bonaci and colleagues at the University of Washington in Seattle examine the special pitfalls associated with the communications technology involved in telesurgery. In particular, they show how a malicious attacker can disrupt the behavior of a telerobot during surgery and even take over such a robot entirely, the first time a medical robot has been hacked in this way.


The first telesurgery took place in 2001 with a surgeon in New York successfully removing the gall bladder of a patient in Strasbourg in France, more than 6,000 kilometers away. The communications ran over a dedicated fiber provided by a telecommunications company specifically for the operation. That’s an expensive option since dedicated fibers can cost tens of thousands of dollars.


Since then, surgeons have carried out numerous remote operations and begun to experiment with ordinary communications links over the Internet, which are significantly cheaper. Although there are no recorded incidents in which the communications infrastructure has caused problems during a telesurgery operation, there are still questions over security and privacy that have never been fully answered.
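One reason such takeovers are possible is that control packets sent over an open network can be forged or replayed unless each one is cryptographically authenticated. The sketch below is a generic illustration of that idea, not the protocol or countermeasure studied by Bonaci and colleagues: it tags each command with an HMAC over a sequence number and payload, using Python's standard hmac module. The key, packet layout, and function names are all hypothetical:

```python
import hmac
import hashlib
import struct

SECRET_KEY = b"pre-shared surgeon/robot key"  # hypothetical pre-shared key

def sign_command(seq: int, payload: bytes) -> bytes:
    """Prefix a 64-bit sequence number and append an HMAC-SHA256 tag.
    The sequence number lets the receiver reject replayed packets."""
    header = struct.pack(">Q", seq)
    tag = hmac.new(SECRET_KEY, header + payload, hashlib.sha256).digest()
    return header + payload + tag

def verify_command(packet: bytes, payload_len: int) -> bool:
    """Recompute the tag and compare in constant time."""
    header, payload = packet[:8], packet[8:8 + payload_len]
    tag = packet[8 + payload_len:]
    expected = hmac.new(SECRET_KEY, header + payload, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

pkt = sign_command(42, b"move arm +1mm x")
print(verify_command(pkt, 15))                          # → True
tampered = pkt[:-1] + bytes([pkt[-1] ^ 1])              # flip one tag bit
print(verify_command(tampered, 15))                     # → False
```

Authentication alone does not solve availability attacks such as jamming or packet delay, which is part of why the security questions raised by the Washington team remain open.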
