Amazing Science
Amazing science facts - 3D_printing • aging • AI • anthropology • art • astronomy • bigdata • bioinformatics • biology • biotech • chemistry • computers • cosmology • education • environment • evolution • future • genetics • genomics • geosciences • green_energy • history • language • map • material_science • math • med • medicine • microscopy • nanotech • neuroscience • paleontology • photography • photonics • physics • postings • robotics • science • technology • video
Scooped by Dr. Stefan Gruenwald!

Microplasma transistors for extreme environments, like nuclear reactors


University of Utah electrical engineers fabricated the smallest plasma transistors that can withstand the high temperatures and ionizing radiation found in a nuclear reactor.

Such transistors someday might enable smartphones that take and collect medical X-rays on a battlefield, and devices to measure air quality in real time.

“These plasma-based electronics can be used to control and guide robots to conduct tasks inside the nuclear reactor,” says Massood Tabib-Azar, a professor of electrical and computer engineering.

“Microplasma transistors in a circuit can also control nuclear reactors if something goes wrong, and also could work in the event of nuclear attack.”

The most commonly used type of transistor is called a metal oxide semiconductor field effect transistor, or MOSFET.

Plasma-based transistors, which use charged gases or plasma to conduct electricity at extremely high temperatures, are employed currently in light sources, medical instruments and certain displays under direct sunlight (but not plasma TVs, which are different). These microscale devices are about 500 microns long, or roughly the width of five human hairs. They operate at more than 300 volts, requiring special high-voltage sources. Standard electrical outlets in the United States operate at 110 volts.

The new devices designed by the University of Utah engineers are the smallest such microscale plasma transistors to date. They measure 1 micron to 6 microns in length, or as much as 500 times smaller than current state-of-the-art microplasma devices, and operate at one-sixth the voltage. They also can operate at temperatures up to 1,450 degrees Fahrenheit. Since nuclear radiation ionizes gases into plasma, this extreme environment makes it easier for plasma devices to operate.

“Plasmas are great for extreme environments because they are based on gases such as helium, argon and neon that can withstand high temperatures,” says Tabib-Azar. “This transistor has the potential to start a new class of electronic devices that are happy to work in a nuclear environment.”

Rescooped by Dr. Stefan Gruenwald from Fragments of Science!

Fish appear to be absent from the ocean's greatest depths due to biochemistry

The ability of deep sea fish to plumb new depths may be constrained by biochemistry, new research by an international team has found.

Fish appear to be absent from the ocean's greatest depths, the trenches from 8,400–11,000 m. The reason is unknown, but hydrostatic pressure is suspected. We propose that the answer is the need for high levels of trimethylamine oxide (TMAO, common in many marine animals), a potent stabilizer capable of counteracting the destabilization of proteins by pressure. TMAO is known to increase with depth in bony fishes (teleosts) down to 4,900 m. By capturing the world's second-deepest known fish, the hadal snailfish Notoliparis kermadecensis from 7,000 m, we find that they have the highest recorded TMAO contents, which, moreover, yield an extrapolated maximum for fish at about 8,200 m. This is previously unidentified evidence that biochemistry may constrain depth for a large taxonomic group.
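The abstract's reasoning can be sketched numerically: fit a line to TMAO content versus depth, then solve for the depth at which TMAO would hit a physiological ceiling. The data points and the ceiling value below are hypothetical illustrations, not the paper's measurements.

```python
# Least-squares line fit, then extrapolation to a TMAO ceiling.
# All numeric values are hypothetical placeholders for illustration.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Hypothetical TMAO contents (mmol/kg) for teleosts at increasing depths (m).
depths = [0, 1000, 2500, 4900, 7000]
tmao = [40, 90, 150, 250, 386]

a, b = fit_line(depths, tmao)

# TMAO is an osmolyte, so its useful concentration is bounded; solving
# a + b*depth = ceiling gives an extrapolated maximum depth for fish.
ceiling = 440  # hypothetical bound, mmol/kg
max_depth = (ceiling - a) / b
print(f"extrapolated maximum depth ~ {max_depth:.0f} m")
```

With the paper's real measurements, the same extrapolation lands at about 8,200 m, neatly matching the depth below which fish are absent.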

Via Mariaschnee
Scooped by Dr. Stefan Gruenwald!

"Design of a Superconducting Quantum Computer" - Talk by John Martinis

Superconducting quantum computing is now at an important crossroad, where "proof of concept" experiments involving small numbers of qubits can be transitioned to more challenging and systematic approaches that could actually lead to building a quantum computer. Our optimism is based on two recent developments: a new hardware architecture for error detection based on "surface codes" [1], and recent improvements in the coherence of superconducting qubits [2]. I will explain how the surface code is a major advance for quantum computing, as it allows one to use qubits with realistic fidelities, and has a connection architecture that is compatible with integrated circuit technology. Additionally, the surface code allows quantum error detection to be understood using simple principles. I will also discuss how the hardware characteristics of superconducting qubits map into this architecture, and review recent results that suggest gate errors can be reduced to below that needed for the error detection threshold. 


[1] Austin G. Fowler, Matteo Mariantoni, John M. Martinis and Andrew N. Cleland, PRA 86, 032324 (2012). 
[2] R. Barends, J. Kelly, A. Megrant, D. Sank, E. Jeffrey, Y. Chen, Y. Yin, B. Chiaro, J. Mutus, C. Neill, P. O'Malley, P. Roushan, J. Wenner, T. C. White, A. N. Cleland and John M. Martinis, arXiv:1304.2322.

Scooped by Dr. Stefan Gruenwald!

Craig Venter Wants to Build the World’s Biggest Database for Genome Information

Craig Venter’s new company wants to improve human longevity by creating the world’s largest, most comprehensive database of genetic and physiological information.

Human Longevity, based in San Diego, says it will sequence some 40,000 human genomes per year to start, using Illumina’s new high-throughput sequencing machines (Illumina Has the First $1,000 Genome).

Eventually, it plans to work its way up to 100,000 genomes per year. The company will also sequence the genomes of the body’s multitudes of microbial inhabitants, called the microbiome, and analyze the thousands of metabolites that can be found in blood and other patient samples.

By combining these disparate types of data, the new company hopes to make inroads into the enigmatic process of aging and the many diseases, including cancer and heart disease, that are strongly associated with it. “Aging is exerting a force on humans that is exposing us to diseases, and the diseases are idiosyncratic, partly based on genetics, partly on environment,” says Leonard Guarente, who researches aging at MIT and is not involved in the company. “The hope for many of us who study aging is that by having interventions that hit key pathways in aging, we can affect disease.”

To that end, Human Longevity will collaborate with Metabolon, a company based in Durham, North Carolina, to profile the metabolites circulating in the bloodstreams of study participants. Metabolon was an early pioneer in the field of metabolomics, which catalogues the amino acids, fats, and other small molecules in a blood or other sample to develop more accurate diagnostic tests for diseases.

Metabolon uses mass spectrometry to identify small molecules in a sample. In a human blood sample, there are around 1,200 different types; Metabolon’s process can also determine the amount of each one present. While genome sequencing can provide information about inherited risk of disease and some hints of the likelihood that a person will have a long life, metabolic data provides information on how environment, diet, and other features of an individual’s life affect health.

Scooped by Dr. Stefan Gruenwald!

Recreation of Species: How far back can we go?


The lesson of the Jurassic Park tragedy was clear — man and dinosaur were not meant to coexist. It’s lucky then that dinosaur fossils are far too old to contain any genetic material that could be used for cloning. DNA breaks down over time, even when kept in ideal conditions, and a study of extinct moa bones has revealed an estimate of the half-life for our genes.

It might be odd to think of DNA having a half-life, as it’s usually associated with radioactive material — but as it measures the time taken for half of something to decay, it makes sense to talk about old samples of DNA in the same way. For example, uranium-235, the fissile material that can be used in nuclear power plants (and nuclear weapons), has a half-life of 703.8 million years. DNA, by comparison, doesn’t fare so well — according to a study of 158 samples of moa bones between 500 and 6,000 years old, DNA appears to have a half-life of around 521 years.
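With a measured half-life, the surviving fraction of DNA bonds follows simple exponential decay. A minimal sketch using the 521-year figure:

```python
# Fraction of DNA bonds surviving after t years, given a half-life.
def fraction_remaining(t_years, half_life=521.0):
    return 0.5 ** (t_years / half_life)

# The oldest moa samples in the study were about 6,000 years old:
print(fraction_remaining(6000))   # well under 0.1% of bonds intact

# Dinosaur fossils (~65 million years old) are hopeless by comparison:
print(fraction_remaining(65e6))
```

At roughly 125,000 half-lives, the dinosaur case underflows to zero, which is the article's point: fossils that old cannot retain usable genetic material.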

A study in the Proceedings of the Royal Society B saw palaeogeneticists from the universities of Perth and Copenhagen drilling into the bones of 158 different moa, the largest of the flightless birds that used to dominate New Zealand's odd and unique ecosystem before the arrival of humans. The bones had all been collected from within a five-kilometre radius, and they were estimated to have been buried at an average temperature of 13 degrees Celsius since the birds died. Their similar preservation conditions were key to ensuring that a reliable figure for DNA decomposition could be found.

Averaging out the results from the different bones gave the average half-life of 521 years. That result is caveated, of course, as there are many factors that can also affect the rate of decay — soil acidity, bone health, extreme temperature, humidity, and so on. However, it does provide a baseline against which to assess the viability of obtaining DNA samples from future finds.

If there is a lot of DNA, preserved in absolutely ideal conditions, then it might hang around for several thousand years. Samples of Neanderthal DNA have been found in teeth up to 100,000 years old, and New Scientist reports that tiny fragments of DNA from insects and plants hundreds of thousands of years old have also been found in ice cores, but these are too decayed to be used for cloning.

The moa could theoretically be cloned, if a good enough DNA sample is found. The moa is generally thought to have been hunted to extinction by the Maori residents of New Zealand before the arrival of European settlers in the 1700s, which isn't very long ago by DNA standards. Alternatively, a better candidate might be the woolly mammoth: intact specimens have been found frozen in the permafrost (including very recently by a boy out walking his dog), and it is thought that it will eventually be possible to implant a mammoth embryo into an elephant's uterus, where it would grow into a full-on baby mammoth. We may even be able to reintroduce them into the wild, which is really the least we could do after driving them to extinction in the first place.

Scooped by Dr. Stefan Gruenwald!

NASA-funded study finds: Our civilization faces threats of collapse similar to those the Mayans experienced


Natural and social scientists develop new model of how 'perfect storm' of crises could unravel global system.

A new study sponsored by NASA's Goddard Space Flight Center has highlighted the prospect that global industrial civilization could collapse in the coming decades due to unsustainable resource exploitation and increasingly unequal wealth distribution.

Noting that warnings of 'collapse' are often seen to be fringe or controversial, the study attempts to make sense of compelling historical data showing that "the process of rise-and-collapse is actually a recurrent cycle found throughout history." Cases of severe civilizational disruption due to "precipitous collapse - often lasting centuries - have been quite common."

The research project is based on a new cross-disciplinary 'Human And Nature DYnamical' (HANDY) model, led by applied mathematician Safa Motesharrei of the US National Science Foundation-supported National Socio-Environmental Synthesis Center, in association with a team of natural and social scientists. The study based on the HANDY model has been accepted for publication in the peer-reviewed Elsevier journal, Ecological Economics.

It finds that according to the historical record even advanced, complex civilizations are susceptible to collapse, raising questions about the sustainability of modern civilization: "The fall of the Roman Empire, and the equally (if not more) advanced Han, Mauryan, and Gupta Empires, as well as so many advanced Mesopotamian Empires, are all testimony to the fact that advanced, sophisticated, complex, and creative civilizations can be both fragile and impermanent."

By investigating the human-nature dynamics of these past cases of collapse, the project identifies the most salient interrelated factors which explain civilizational decline, and which may help determine the risk of collapse today: namely, Population, Climate, Water, Agriculture, and Energy.

These factors can lead to collapse when they converge to generate two crucial social features: "the stretching of resources due to the strain placed on the ecological carrying capacity" and "the economic stratification of society into Elites [rich] and Masses (or "Commoners") [poor]." These social phenomena have played "a central role in the character or in the process of the collapse" in all such cases over "the last five thousand years."

Scooped by Dr. Stefan Gruenwald!

Plastic shopping bags can be converted to fine diesel fuel


Plastic shopping bags, an abundant source of litter on land and at sea, can be converted into diesel, natural gas and other useful petroleum products, researchers report. The conversion produces significantly more energy than it requires and results in transportation fuels -- diesel, for example -- that can be blended with existing ultra-low-sulfur diesels and biodiesels.

Other products, such as natural gas, naphtha (a solvent), gasoline, waxes and lubricating oils such as engine oil and hydraulic oil, also can be obtained from shopping bags.

A report of the new study appears in the journal Fuel Processing Technology. There are other advantages to the approach, which involves heating the bags in an oxygen-free chamber, a process called pyrolysis, said Brajendra Kumar Sharma, a senior research scientist at the Illinois Sustainable Technology Center who led the research. The ISTC is a division of the Prairie Research Institute at the University of Illinois.

"You can get only 50 to 55 percent fuel from the distillation of petroleum crude oil," Sharma said. "But since this plastic is made from petroleum in the first place, we can recover almost 80 percent fuel from it through distillation."

Americans throw away about 100 billion plastic shopping bags each year, according to the Worldwatch Institute. The U.S. Environmental Protection Agency reports that only about 13 percent are recycled. The rest of the bags end up in landfills or escape to the wild, blowing across the landscape and entering waterways.

Plastic bags make up a sizeable portion of the plastic debris in giant ocean garbage patches that are killing wildlife and littering beaches. Plastic bags "have been detected as far north and south as the poles," the researchers wrote.

Scooped by Dr. Stefan Gruenwald!

Face-To-Face: Crude Mugshots built from DNA data alone

Computer program crudely predicts a facial structure from genetic variations.

Researchers have developed a computer program that can create a crude three-dimensional (3D) model of a face from a DNA sample, showing how 24 gene variants can be used to construct rough models of facial structure. Thus, leaving a hair at a crime scene could one day be as damning as leaving a photograph of your face.

Using genes to predict eye and hair color is relatively easy. But the complex structure of the face makes it more valuable as a forensic tool — and more difficult to connect to genetic variation, says anthropologist Mark Shriver of Pennsylvania State University in University Park, who led the work, published today in PLOS Genetics.

Shriver and his colleagues took high-resolution images of the faces of 592 people of mixed European and West African ancestry living in the United States, Brazil and Cape Verde. They used these images to create 3D models, laying a grid of more than 7,000 data points on the surface of the digital face and determining by how much particular points on a given face varied from the average: whether the nose was flatter, for instance, or the cheekbones wider. They had volunteers rate the faces on a scale of masculinity and femininity, as well as on perceived ethnicity.

Next, the authors compared the volunteers’ genomes to identify points at which the DNA differed by a single base, called a single nucleotide polymorphism (SNP). To narrow down the search, they focused on genes thought to be involved in facial development, such as those that shape the head in early embryonic development, and those that are mutated in disorders associated with features such as cleft palate. Then, taking into account the person’s sex and ancestry, they calculated the statistical likelihood that a given SNP was involved in determining a particular facial feature.

This pinpointed 24 SNPs across 20 genes that were significantly associated with facial shape. A computer program the team developed using the data can turn a DNA sequence from an unknown individual into a predictive 3D facial model (see 'Face to face'). Shriver says that the group is now trying to integrate more people and genes, and look at additional traits, such as hair texture and sex-specific differences.
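The pipeline described above — per-landmark effects for sex, ancestry, and each SNP, combined additively — can be sketched as follows. The coefficient values here are random placeholders standing in for the study's fitted model, not its actual data.

```python
# Hedged sketch of an additive SNP-to-face model: each landmark's 3D
# displacement from the average face is predicted from sex, genomic
# ancestry, and 24 SNP genotypes (0/1/2 copies of the minor allele).
# All coefficients below are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)

N_LANDMARKS = 7000  # grid points on the digital face surface
N_SNPS = 24         # face-shape-associated SNPs reported by the study

# Hypothetical learned effects: predictor -> per-landmark xyz displacement.
beta_sex = rng.normal(0, 0.1, (N_LANDMARKS, 3))
beta_ancestry = rng.normal(0, 0.1, (N_LANDMARKS, 3))
beta_snps = rng.normal(0, 0.02, (N_SNPS, N_LANDMARKS, 3))

def predict_face(sex, ancestry, genotypes):
    """Additive prediction of landmark displacements from the average face.

    sex: 0/1; ancestry: fraction in [0, 1]; genotypes: length-24 array of 0/1/2.
    """
    disp = sex * beta_sex + ancestry * beta_ancestry
    disp = disp + np.tensordot(genotypes, beta_snps, axes=1)  # sum_k g_k * beta_k
    return disp  # (N_LANDMARKS, 3) offsets to add to the mean face mesh

face = predict_face(sex=1, ancestry=0.5, genotypes=rng.integers(0, 3, N_SNPS))
print(face.shape)  # (7000, 3)
```

The predicted offsets would be added to an average face mesh to render the crude 3D mugshot the article describes.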

Scooped by Dr. Stefan Gruenwald!

By screening over 1,000 different compounds, scientists have identified one that "vacuolizes" glioblastoma cells


Currently, only 5 percent of patients with glioblastoma multiforme (GBM) survive longer than three years, and the average life expectancy of a patient is 15 months. Even when aggressive therapies are implemented, "GBM is essentially incurable," the researchers wrote in the study. So identifying vulnerabilities in this cancer's cells is an essential step in the development of new drug therapies.

The compound that eventually caught researchers' attention is called "Vacquinol-1," and although it certainly did kill cancer cells, it did so in a way that was unlike anything else they'd seen.

The molecule works by shutting off the cells' ability to control what moves in and out across their membranes. This causes bag-like vesicles filled with water and other materials, called vacuoles, to accumulate in the cells. Under these conditions, the cells eventually reach capacity and explode. But what's truly remarkable is that the noncancerous cell types that surround the cancer cells remain intact, so the treatment is actually GBM-specific.

Treatments that work in a petri dish, however, don't always work in the living. So the scientists set up a second test involving mice, and fed the compound to a group of mice with brain tumors for five days. The treatment did not cause any severe side effects. Instead, the tumors stopped growing and eventually disappeared. Moreover, six of the eight mice who received the treatment survived for 80 days following the feeding period — about 50 days longer than the mice who weren't given the drug.

Ravi Bellamkonda, a neuroscientist at the Georgia Institute of Technology who did not participate in the study, wrote in an email to The Verge that he was intrigued by the idea of being able to induce death in glioma cells specifically. "They have shown very exciting results."

But Bellamkonda also expressed some reservations about the approach, because the study's researchers had to administer what he calls "relatively high levels of the drug" to make it work. Furthermore, it's unclear if the concentrations of the drug need to be higher elsewhere in the body to reach the cancer cells — which might induce side-effects that would have been difficult to identify in the study. "This said, enhancing survival by several fold in aggressive tumor models is encouraging," he wrote. "I'd love to see more studies."

Scooped by Dr. Stefan Gruenwald!

Nanoscale graphene origami cages set world record for densest hydrogen storage


The U.S. Department of Energy is searching for ways to make storing energy with hydrogen a practical possibility, and it has set goals for onboard automotive hydrogen storage systems with a driving range of 300 miles or more: the Department had hoped that by 2017, a research team could pack in 5.5 percent hydrogen by weight, and that by 2020, this could be stretched to 7.5 percent. A University of Maryland team led by Teng Li has already crossed that threshold, with a hydrogen storage density of 9.5 percent hydrogen by weight. The team has also demonstrated the potential to reach an even higher density, a future research goal.
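The DOE targets and the 9.5 percent result both refer to gravimetric capacity: hydrogen mass as a share of the total storage-system mass. A minimal sketch of that definition (the 0.105 g figure is back-calculated for illustration, not reported data):

```python
# Gravimetric hydrogen storage density, as the DOE targets define it.
def hydrogen_wt_percent(m_h2, m_medium):
    """Hydrogen mass as a percentage of total (hydrogen + medium) mass."""
    return 100.0 * m_h2 / (m_h2 + m_medium)

# Roughly 0.105 g of H2 per 1 g of graphene cage corresponds to ~9.5 wt%,
# since 0.105 / 1.105 is about 0.095.
print(hydrogen_wt_percent(0.105, 1.0))
```

Note that the denominator includes the storage medium itself, which is why packing more hydrogen per gram of cage material matters so much.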

“Just like paper origami, which can make complicated 3-D structures from 2-D paper, graphene origami allows us to design and fabricate carbon nanostructures that are not naturally existing but have desirable properties,” said Li, an Associate Professor of Mechanical Engineering, a member of the Maryland NanoCenter and the University of Maryland Energy Research Center (UMERC), and a Keystone professor in the A. James Clark School of Engineering.

Figure caption: Forming a graphene nanocage: (a) patterned graphene is suitably hydrogenated (by bonding hydrogen atoms to the carbon atoms of planar graphene, thus warping it) and then folded (b-f) into a nanocage via electric-field assistance. (Credit: Shuze Zhu and Teng Li/ACS Nano)

“In this paper, we show that graphene nanocages can be used for hydrogen storage with extraordinary capacity, holding the promise to exceed the year 2020 goal of the U.S. Department of Energy on hydrogen storage,” Li explained to KurzweilAI in an email interview.

“Paper origami has existed for more than a millennium. Such a concept has been explored in recent years to enable the formation of complicated 3D structures from 2D building blocks, such as micro-robots and actuators. In these developments, the building-block materials are still bulk materials, and the resulting 3D structures are on the order of millimeters in size.

“The graphene origami we demonstrate in this paper uses the thinnest yet strongest material ever made (one atom thick), leading to a nanocage on the order of several nanometers. Another unique feature of hydrogenation-assisted graphene origami (HAGO) that does not exist in conventional origami is that programmable opening and closing of HAGO-enabled nanostructures can be controlled via an external electric field.”

Scooped by Dr. Stefan Gruenwald!

Anesthesia may have lingering side effects on the brain, even years after an operation


Two and a half years ago Susan Baker spent three hours under general anesthesia as surgeons fused several vertebrae in her spine. Everything went smoothly, and for the first six hours after her operation, Baker, then an 81-year-old professor at the Johns Hopkins Bloomberg School of Public Health, was recovering well. That night, however, she hallucinated a fire raging through the hospital toward her room. Petrified, she repeatedly buzzed the nurses' station, pleading for help. The next day she was back to her usual self. “It was the most terrifying experience I have ever had,” she says.

Baker's waking nightmare was a symptom of postoperative delirium, a state of serious confusion and memory loss that sometimes follows anesthesia.

Anesthesia comes in three main types. Local anesthesia, the mildest form, merely numbs a very small area, such as a single tooth. Regional anesthesia desensitizes a large section of someone's body by injecting drugs into the spine that block nerve signals to the brain. Often a patient getting regional anesthesia also takes a relatively small dose of a powerful sedative drug, such as propofol—not enough to put them under but enough to alter brain activity in a way that makes the person less aware and responsive.

General anesthesia relies on a cocktail of drugs that renders patients completely unconscious, prevents them from moving and blocks any memories of the surgery. Although anesthetic drugs have been around since 1846, many questions remain as to how exactly they work. To date, the strongest evidence suggests that the drugs are effective in part because they bind to and incapacitate several different proteins on the surface of neurons that are essential for regulating sleep, attention, learning and memory. In addition, it seems that interrupting the usual activity of neurons may disrupt communication between far-flung regions of the brain, which somehow triggers unconsciousness.

When postoperative delirium was first recognized, researchers wondered whether certain anesthetic drugs—but not others—deserved the blame. Yet studies comparing specific drugs and rates of delirium in patients after surgery have always been scant and inconclusive. “No particular anesthetic has been exonerated in patients,” says Roderic G. Eckenhoff, a professor of anesthesiology at the University of Pennsylvania. But “we can't say yet that there is an anesthetic that patients should not get.”

Scooped by Dr. Stefan Gruenwald!

CODE_n: Data Visualizations in Grande Scale Shown at CeBit 2014


I guess that CODE_n, developed by design agency Kram/Weisshaar, is best appreciated when perceived in the flesh, that is, at the Hannover Fairgrounds during CeBit 2014 in Hannover, Germany.

CODE_n consists of more than 3,000 square meters (approx. 33,000 sq ft) of ink-jet-printed textile membranes, stretching more than 260 meters of floor-to-ceiling terapixel graphics. The 12.5-terapixel, 90-meter-long wall-like canopy titled "Retrospective Trending" shows over 400 lexical-frequency timelines ranging from the years 1800 to 2008, each generated using Google's Ngram tool. The hundreds of search terms relate to ethnographic themes of politics, economics, engineering, science, technology, mathematics, and philosophy, resulting in historical trajectories of word usage over time.
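Each timeline on the wall is, at bottom, a relative-frequency series: a term's yearly count divided by the corpus's total token count for that year. A toy sketch of that computation (all counts below are hypothetical, not Ngram data):

```python
# Relative lexical frequency per year, Ngram-style. Hypothetical numbers.
term_counts = {1800: 12, 1900: 480, 2008: 9100}          # occurrences of one term
corpus_totals = {1800: 1.0e6, 1900: 4.0e7, 2008: 9.0e8}  # total tokens per year

timeline = {year: term_counts[year] / corpus_totals[year]
            for year in term_counts}
for year, freq in sorted(timeline.items()):
    print(year, f"{freq:.2e}")
```

Normalizing by corpus size is what makes timelines from 1800 and 2008 comparable despite the vastly larger modern corpus.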

The 6.2 terapixel "Hydrosphere Hyperwall" is a visualization of the global ocean as dynamic pathways, polychrome swathes of sea climate, data-collecting swarms of mini robots and sea animals, as well as plumes of narrow current systems. NASA's ECCO2 maps were interwoven with directional arrows that specify wind direction and data vectors that represent buoys, cargo floats, research ships, wave gliders, sea creatures and research stations.

Finally, the 6.6 terapixel "Human Connectome" is a morphological map of the human brain. Consisting of several million multi-coloured fibre bundles and white matter tracts that were captured by diffusion-MRIs, the structural descriptions of the human mind were generated at 40 times the scale of the human body. The 3D map of human neural connections visualizes brain dynamics on an ultra-macro scale as well as the infinitesimal cell-scale.

Scooped by Dr. Stefan Gruenwald!

NASA Study Finds That Amazon Rainforest Inhales More Carbon than It Emits


A new NASA-led study seven years in the making has confirmed that natural forests in the Amazon remove more carbon dioxide from the atmosphere than they emit, therefore reducing global warming. This finding resolves a long-standing debate about a key component of the overall carbon balance of the Amazon basin.

The Amazon's carbon balance is a matter of life and death: living trees take carbon dioxide out of the air as they grow, and dead trees put the greenhouse gas back into the air as they decompose. The new study, published in Nature Communications on March 18, is the first to measure tree deaths caused by natural processes throughout the Amazon forest, even in remote areas where no data have been collected at ground level.

Fernando Espírito-Santo of NASA’s Jet Propulsion Laboratory, Pasadena, Calif., lead author of the study, created new techniques to analyze satellite and other data. He found that each year, dead Amazonian trees emit an estimated 1.9 billion tons (1.7 billion metric tons) of carbon to the atmosphere. To compare this with Amazon carbon absorption, the researchers used censuses of forest growth and different modeling scenarios that accounted for uncertainties. In every scenario, carbon absorption by living trees outweighed emissions from the dead ones, indicating that the prevailing effect in natural forests of the Amazon is absorption.
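The study's bottom line is a bookkeeping comparison: net uptake equals absorption by living trees minus emission from dead ones. The 1.7-gigaton emission figure is from the article; the absorption scenarios below are hypothetical stand-ins for the modelled range.

```python
# Net carbon uptake in metric gigatons of carbon per year.
EMISSION_GT = 1.7  # emission from dead trees (article's figure)

def net_uptake(absorption_gt):
    """Positive means the forest is a net carbon sink."""
    return absorption_gt - EMISSION_GT

# Hypothetical absorption scenarios; in the study, absorption exceeded
# emission in every modelled scenario, making the Amazon a net sink.
for absorption in (1.9, 2.2, 2.5):
    print(absorption, round(net_uptake(absorption), 2))
```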

Until now, scientists had only been able to estimate the Amazon's carbon balance from limited observations in small forest areas called plots. On these plots, the forest removes more carbon than it emits, but the scientific community has been vigorously debating how well the plots represent all the natural processes in the huge Amazon region. That debate began with the discovery in the 1990s that large areas of the forest can be killed off by intense storms in events called blowdowns.

Espírito-Santo said that the idea for the study arose from a 2006 workshop where scientists from several nations came together to identify NASA satellite instruments that might help them better understand the carbon cycle of the Amazon. In the years since then, he worked with 21 coauthors in five nations to measure the carbon impacts of tree deaths in the Amazon from all natural causes -- from large-area blowdowns to single trees that died of old age. He used airborne lidar data, satellite images, and a 10-year set of plot measurements collected by the University of Leeds, England, under the leadership of Emanuel Gloor and Oliver Phillips. He estimates that he himself spent a year-and-a-half doing fieldwork in the Amazon.

Scooped by Dr. Stefan Gruenwald!

Graphene light detector first to span infrared spectrum making night vision contact lenses possible


The first room-temperature light detector that can sense the full infrared spectrum has the potential to put heat vision technology into a contact lens. Unlike comparable mid- and far-infrared detectors currently on the market, the detector developed by University of Michigan engineering researchers doesn't need bulky cooling equipment to work.

"We can make the entire design super-thin," said Zhaohui Zhong, assistant professor of electrical and computer engineering. "It can be stacked on a contact lens or integrated with a cell phone."

Infrared light starts at wavelengths just longer than those of visible red light and stretches to wavelengths up to a millimeter long. Infrared vision may be best known for spotting people and animals in the dark and heat leaks in houses, but it can also help doctors monitor blood flow, identify chemicals in the environment and allow art historians to see Paul Gauguin's sketches under layers of paint.

Unlike the visible spectrum, which conventional cameras capture with a single chip, infrared imaging requires a combination of technologies to see near-, mid- and far-infrared radiation all at once. Still more challenging, the mid-infrared and far-infrared sensors typically need to be at very cold temperatures.

Graphene, a single layer of carbon atoms, could sense the whole infrared spectrum—plus visible and ultraviolet light. But until now, it hasn't been viable for infrared detection because it can't capture enough light to generate a detectable electrical signal. With one-atom thickness, it only absorbs about 2.3 percent of the light that hits it. If the light can't produce an electrical signal, graphene can't be used as a sensor.
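The 2.3 percent figure is graphene's well-known single-layer absorption (π times the fine-structure constant). As a quick illustrative calculation, assuming each stacked layer absorbs independently (an idealization, not a claim from the article), one can see why a one-atom sheet captures so little light:

```python
import math

# Single-layer graphene absorbs ~2.3% of incident light.
# Treating N stacked layers as independent absorbers, the
# transmitted fraction is (1 - 0.023) ** N.
ABSORPTION_PER_LAYER = 0.023

def transmitted(n_layers: int) -> float:
    """Fraction of light passing through n_layers of graphene."""
    return (1 - ABSORPTION_PER_LAYER) ** n_layers

# Layers needed to absorb half the incident light:
layers_for_50pct = math.log(0.5) / math.log(1 - ABSORPTION_PER_LAYER)

print(f"1 layer transmits  {transmitted(1):.1%}")
print(f"10 layers transmit {transmitted(10):.1%}")
print(f"~{layers_for_50pct:.0f} layers needed to absorb 50% of the light")
```

Roughly thirty layers would be needed to absorb half the light, which is why the Michigan team amplified the signal electrically rather than trying to absorb more photons.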

"The challenge for the current generation of graphene-based detectors is that their sensitivity is typically very poor," Zhong said. "It's a hundred to a thousand times lower than what a commercial device would require."

To overcome that hurdle, Zhong and Ted Norris, the Gerard A. Mourou Professor of Electrical Engineering and Computer Science, worked with graduate students to design a new way of generating the electrical signal. Rather than trying to directly measure the electrons that are freed when light hits the graphene, they amplified the signal by looking instead at how the light-induced electrical charges in the graphene affect a nearby current.

"Our work pioneered a new way to detect light," Zhong said. "We envision that people will be able to adopt this same mechanism in other material and device platforms."

The Science & Education team's curator insight, March 23, 2014 6:31 AM

Graphene is the subject of a huge amount of research, including at Waurn Ponds, and is turning out to be really useful. Take the idea of contact lenses with a large drop of saline.

Scooped by Dr. Stefan Gruenwald!

Diagnosis by Light: How to Shrink Chemical Labs Onto Optical Fibers

Diagnosis by Light: How to Shrink Chemical Labs Onto Optical Fibers | Amazing Science |

Lab-on-fiber sensors could monitor the environment and hunt for disease inside your body.

Imagine an entire laboratory that fits inside a case the size of a tablet computer. The lab would include an instrument for reading out results and an array of attachable microsize probes for detecting molecules in a fluid sample, such as blood or saliva. Each probe could be used to diagnose one of many different diseases and health conditions and could be replaced for just a few cents.

This scenario is by no means a pipe dream. The key to achieving it will be optical glass fibers—more or less the same as the ones that already span the globe, ferrying voluminous streams of data and voice traffic at unmatchable speeds. Their tiny diameter, dirt-cheap cost, and huge information-carrying capacity make these fibers ideal platforms for inexpensive, high-quality chemical sensors.

We call this technology a lab on fiber. Beyond being an affordable alternative to a traditional laboratory, it could take on tasks not possible now. For instance, it could be snaked inside industrial machines to ensure product quality and test for leaks. It could monitor waterways and waste systems, survey the oceans, or warn against chemical warfare. One day, maybe as soon as a decade from now, it could be injected into humans to look for disease or study the metabolism of drugs inside the body.

It will probably be at least five years before lab-on-fiber instruments are ready for commercial use. For example, a remaining major challenge is figuring out how to toughen the surface coating on the probes so that they can be stored for several months without becoming unstable and thereby losing their ability to bind with target molecules.

Nevertheless, lab-on-fiber technology is tantalizingly close to being able to compete in cost and performance with today’s diagnostic tools for many applications. One of the first might very well be a blood test: Imagine turning on your home lab kit, pricking your finger, and blotting the blood on an array of fiber probes. In just a few minutes, the machine would automatically e-mail the results to your doctor, who could get back to you within hours if there was a problem. Meanwhile, you could get on with the rest of your day.

No comment yet.
Scooped by Dr. Stefan Gruenwald!

Perfect memory, enhanced vision, an expert golf swing: The future of brain implants

Perfect memory, enhanced vision, an expert golf swing: The future of brain implants | Amazing Science |
How soon can we expect to see brain implants for perfect memory, enhanced vision, hypernormal focus or an expert golf swing? We're closer than you might think.

What would you give for a retinal chip that let you see in the dark or for a next-generation cochlear implant that let you hear any conversation in a noisy restaurant, no matter how loud? Or for a memory chip, wired directly into your brain's hippocampus, that gave you perfect recall of everything you read? Or for an implanted interface with the Internet that automatically translated a clearly articulated silent thought ("the French sun king") into an online search that digested the relevant Wikipedia page and projected a summary directly into your brain?

Science fiction? Perhaps not for very much longer. Brain implants today are where laser eye surgery was several decades ago. They are not risk-free and make sense only for a narrowly defined set of patients—but they are a sign of things to come.

Unlike pacemakers, dental crowns or implantable insulin pumps, neuroprosthetics—devices that restore or supplement the mind's capacities with electronics inserted directly into the nervous system—change how we perceive the world and move through it. For better or worse, these devices become part of who we are.

Neuroprosthetics aren't new. They have been around commercially for three decades, in the form of the cochlear implants used in the ears (the outer reaches of the nervous system) of more than 300,000 hearing-impaired people around the world. Last year, the Food and Drug Administration approved the first retinal implant, made by the company Second Sight.

Both technologies exploit the same principle: An external device, either a microphone or a video camera, captures sounds or images and processes them, using the results to drive a set of electrodes that stimulate either the auditory or the optic nerve, approximating the naturally occurring output from the ear or the eye.

Laura E. Mirian, PhD's curator insight, March 22, 2014 11:00 AM

Is this really necessary when we live only 100 years or less?

Rescooped by Dr. Stefan Gruenwald from Fragments of Science!

New explanation for zebra-stripe pattern discovered in Earth's inner Van Allen radiation belt

New explanation for zebra-stripe pattern discovered in Earth's inner Van Allen radiation belt | Amazing Science |

A weird zebra-stripe pattern discovered in Earth's inner Van Allen radiation belt is generated by the planet's rotation, according to new research.

The study, reported in the journal Nature, changes science's understanding of Earth's radiation belts, and may provide new insights into the complicated dynamics of similar belts around other planets. It also overturns the previously held view that the stripes were caused by geomagnetic storms from the Sun.

"These radiation belts were discovered more than 50 years ago, so it's very rewarding and exciting to find something completely unexpected," says the study's lead author Dr Sasha Ukhorskiy of Johns Hopkins University in Maryland.

"These were some of the first scientific discoveries ever made in space, and we're still finding out new things."

Earth's Van Allen radiation belts are composed of electrons and ions held in place by the planet's magnetic field. Early data from NASA's twin Van Allen Probe spacecraft detected the unusual zebra-stripe pattern in the energy spectra of electrons in the inner radiation belt. The pattern is caused by different electron energy densities at different altitudes.

"A cross-section of these stripes shows a recurring regular up and down pattern of electron energy intensity and flux distribution," says Ukhorskiy.

"This happens because the inner belt is very stable and not as dynamically variable as the outer radiation belt, or the recently discovered third belt."

The phenomenon was thought to be generated by interactions with space weather events from the Sun, such as Coronal Mass Ejections, and the strength of the solar wind flux.

Via Mariaschnee
No comment yet.
Scooped by Dr. Stefan Gruenwald!

Assembling a Colossus: Loblolly pine genome is largest ever sequenced - 7 times bigger than the human genome

Assembling a Colossus: Loblolly pine genome is largest ever sequenced - 7 times bigger than the human genome | Amazing Science |

The loblolly pine genome is big. Bloated with retrotransposons and other repetitive sequences, it is seven times larger than the human genome and easily big enough to overwhelm standard genome assembly methods.

This forced the loblolly pine genome sequencing team, led by David Neale at the University of California, Davis, to look for ways to reduce the enormous complexity of their task. The draft genome sequence, described in the latest issue of GENETICS and the journal Genome Biology, was pieced together from over 16 billion sequence reads. Spanning around 23 billion base pairs, it only just beats out the Norway spruce as the largest genome ever sequenced, but it is substantially more complete. For example, the N50 scaffold size of the current loblolly assembly is 66.9 Kbp, compared to 0.72 Kbp in the Norway spruce.

So how did they do it? One strategy was to generate most of the sequence from part of a single pine nut. This tiny source material was the mega-gametophyte, which is the haploid tissue that provides nutrients to the developing diploid embryo. Despite the limited amount of DNA that can be extracted from this source, the reduced complexity of a haploid genome makes it easier to assemble. To link up all the sequence fragments from the haploid genome, the team also created DNA libraries from diploid needles of the parent genotype.

But this still left the assembly team, led by Steven Salzberg at Johns Hopkins University and James Yorke at the University of Maryland, with more data than their computational methods could handle.

The solution was a method of pre-processing the data into “super reads”, or larger chunks of contiguous haploid sequence that condensed many individual reads. In essence, they were dealing with the unambiguous parts of the problem first, and getting rid of a huge amount of overlapping and redundant data in the process.
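As an illustration of the idea only, and not the team's actual assembly pipeline, here is a toy sketch of condensing short reads into longer "super reads" by following suffix/prefix overlaps whenever the extension is unambiguous (all sequences and parameters below are made up):

```python
def merge_unambiguous(reads, min_overlap=4):
    """Greedily merge reads that share a unique suffix/prefix overlap
    of at least `min_overlap` bases, a toy version of condensing many
    raw reads into fewer, longer 'super reads'."""
    reads = list(dict.fromkeys(reads))  # dedupe, keep order
    merged = True
    while merged:
        merged = False
        for i, a in enumerate(reads):
            # Find reads whose prefix matches a's suffix.
            candidates = []
            for j, b in enumerate(reads):
                if i == j:
                    continue
                # Try the longest overlap first for each pair.
                for olen in range(min(len(a), len(b)) - 1, min_overlap - 1, -1):
                    if a.endswith(b[:olen]):
                        candidates.append((j, olen))
                        break
            if len(candidates) == 1:  # extension is unambiguous: merge
                j, olen = candidates[0]
                reads[i] = a + reads[j][olen:]
                del reads[j]
                merged = True
                break
    return reads

raw = ["ACGTACGG", "TACGGATC", "GGATCCTA"]
print(merge_unambiguous(raw))  # -> ['ACGTACGGATCCTA']
```

Ambiguous branch points (where two or more reads could extend a sequence) are deliberately left unmerged, mirroring the "deal with the unambiguous parts first" strategy described above.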

No comment yet.
Scooped by Dr. Stefan Gruenwald!

Magnetic behavior discovery could advance nuclear fusion

Magnetic behavior discovery could advance nuclear fusion | Amazing Science |

Inspired by the space physics behind solar flares and the aurora, a team of researchers from the University of Michigan and Princeton has uncovered a new kind of magnetic behavior that could help make nuclear fusion reactions easier to start.

Fusion is widely considered the ultimate goal of nuclear energy. While fission leaves behind radioactive waste that must be stored safely, fusion generates helium, a harmless element that is becoming scarce. Just 250 kilograms of fusion fuel can match the energy production of 2.7 million tons of coal.
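The coal comparison can be sanity-checked with rough, assumed energy densities: deuterium-tritium fusion releases about 3.4e14 J per kilogram of fuel and coal about 29 MJ per kilogram. Both are textbook ballpark figures, not values from the article:

```python
# Rough check of "250 kg of fusion fuel ~ 2.7 million tons of coal".
# Energy densities below are assumed textbook values.
FUSION_J_PER_KG = 3.4e14   # D-T fusion, ~17.6 MeV per reaction
COAL_J_PER_KG = 2.9e7      # bituminous coal

fusion_energy = 250 * FUSION_J_PER_KG        # joules from 250 kg of fuel
coal_energy = 2.7e6 * 1000 * COAL_J_PER_KG   # joules from 2.7e6 metric tons

print(f"fusion: {fusion_energy:.2e} J")
print(f"coal:   {coal_energy:.2e} J")
print(f"ratio:  {fusion_energy / coal_energy:.2f}")  # close to 1
```

The two totals come out within about ten percent of each other, so the article's comparison is consistent with standard figures.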

Unfortunately, it is very difficult to get a fusion reaction going.

"We have to compress the fuel to a temperature and density similar to the core of a star," said Alexander Thomas, assistant professor of nuclear engineering and radiological sciences.

Once those conditions are reached, the hydrogen fuel begins to fuse into helium. This is how young stars burn, compressed by their own gravity.

On Earth, it takes so much power to push the fuel atoms together that researchers end up putting in more energy than they get out. But by understanding a newly discovered magnetic phenomenon, the team suggests that the ignition of nuclear fusion could be made more efficient.

No comment yet.
Scooped by Dr. Stefan Gruenwald!

Nineteen new praying mantis species discovered in Central and South America

Nineteen new praying mantis species discovered in Central and South America | Amazing Science |

Nineteen new species of a tree-living praying mantis family have been discovered, tripling the group’s diversity at a stroke.

The bark mantises (Liturgusa Saussure) from Central and South America were found in tropical forests and among specimens kept in museums.

Many of the newly described species are known only from a few specimens collected before 1950 from locations now heavily impacted by agriculture or urban development.

“Based on this study, we can predict that mantis groups with similar habitat specialization in Africa, Asia and Australia will also be far more diverse than what is currently known,” said Dr Gavin Svenson, curator of invertebrate zoology at the Cleveland Museum of Natural History in the US.

“Many of these groups have never been studied other than by the scientists that originally described some of the species, which in some cases is more than 100 years ago. This is exciting because enormous potential exists for advancing our understanding of praying mantis diversity just by looking within our existing museum collections and conducting a few field expeditions.”

Michael Mazo's curator insight, December 13, 2014 8:34 PM

Species are being discovered every day, from bacteria to strains of infectious viruses. Many of these newly found species originate in suitable living conditions like the ones we find in Central America. This article gave in-depth context on the new praying mantis species found in Central America. I thought finding a couple of new species would be remarkable, but to find 19 different species of praying mantis is beyond just an achievement. The habitat in this area allows a creature like the praying mantis to thrive in Central America. I'm sure this won't be the last species found in such a diverse location.

Scooped by Dr. Stefan Gruenwald!

Biologists discover fish with a previously unknown type of eye

Biologists discover fish with a previously unknown type of eye | Amazing Science |

The University of Tübingen's Institute of Anatomy has discovered a fish with a previously unknown type of eye. The aptly-named glasshead barreleye lives at depths of 800 to 1000 meters. It has a cylindrical eye pointing upwards to see prey, predators or potential mates silhouetted against the gloomy light above. But the eye also has a mirror-like second retina which can detect bioluminescent flashes created by deep-sea denizens to the sides and below, reports Professor Hans-Joachim Wagner in the latest Proceedings of the Royal Society B.

Professor Wagner examined an 18cm long glasshead barreleye, Rhynchohyalus natalensis, caught in the Tasman Sea between Australia and New Zealand, as part of an international research project. The results were unexpected – reflector eyes are usually only found in invertebrates, such as mollusks and crustaceans, although one other vertebrate, the deep-sea brownsnout spookfish or Dolichopteryx longipes, also uses a combination of reflective and refractive lenses in its eyes. The light coming from below is focused onto a second retina by a curved mirror composed of many layers of small reflective plates made of guanine crystals, giving the fish a much bigger field of vision.

The glasshead barreleye is therefore one of only two vertebrates known to have reflector eyes; but significantly, although Rhynchohyalus natalensis and Dolichopteryx longipes belong to the same family, their reflective lenses have a different structure and appear to have developed from different kinds of tissue. That indicates that two related but different genera took different paths to arrive at a similar solution – the reflective optics and a second retina to supplement the limited vision of the conventional refractive cylindrical eye.

The prisms in the brownsnout spookfish eye grew out of a layer of pigment on the retina and the angle of the reflective crystals varies depending on their position within the mirror, but in the glasshead barreleye, the crystals are flatter and images are formed depending on the roundness of the reflective surface. "The mirror here is formed from the silvery skin of the eye, and the crystals are arrayed almost parallel to the surface of the mirror," says Wagner. Models of the reflector showed that it is capable of throwing a bright, sharp image onto the retina below. "Obviously, a broad field of vision is an advantage even at great depths if similar structures develop independently to ensure it."

No comment yet.
Scooped by Dr. Stefan Gruenwald!

Humans can distinguish at least one trillion different odors, study shows

Humans can distinguish at least one trillion different odors, study shows | Amazing Science |

In a world perfumed by freshly popped popcorn and exhaust fumes, where sea breezes can mingle with the scents of sweet flowers or wet paint, new research has found that humans are capable of discriminating at least one trillion different odors. Howard Hughes Medical Institute (HHMI) scientists determined that our sense of smell is prepared to recognize this vast olfactory palette after testing individuals' ability to recognize differences between complex odors mixed in the laboratory.

It has been said for decades that humans are capable of discriminating between 10,000 different odors. The number is cited in scientific literature and appears in popular magazines. "It's the generally accepted number," says HHMI investigator Leslie Vosshall, who studies olfaction at the Rockefeller University. "Our analysis shows that the human capacity for discriminating smells is much larger than anyone anticipated."

Vosshall and her colleagues published their findings March 21, 2014, in the journal Science. "I hope our paper will overturn this terrible reputation that humans have for not being good smellers," she says.

Vosshall had long been bothered by the idea that humans were limited to smelling 10,000 odors – an estimate that was made in the 1920s, and not backed by any data. "Objectively, everybody should have known that that 10,000 number had to be wrong," she says. For one thing, it didn't make sense that humans should sense far fewer smells than colors. In the human eye, Vosshall explains, three light receptors work together to see up to 10 million colors. In contrast, the typical person's nose has 400 olfactory receptors.

But no one had tested humans' olfactory capacity. "We know exactly the range of sound frequencies that people can hear, not because someone made it up, but because it was tested. We didn't just make up the fact that humans can't see infrared or ultraviolet light. Somebody took the time to test it," Vosshall says. "For smell, nobody ever took the time to test."

Vosshall and Andreas Keller, a senior scientist in her lab at Rockefeller University, knew they couldn't test people's reactions to 10,000 or more odors, but they knew they could come up with a better estimate. They devised a strategy to present their research subjects with complex mixtures of different odors, and then ask whether their subjects could tell them apart.

They used 128 different odorant molecules to concoct their mixtures. The collection included diverse molecules that individually might evoke grass, or citrus, or various chemicals. But when combined into random mixtures of 10, 20, or 30, Vosshall says, they became largely unfamiliar. "We didn't want them to be explicitly recognizable, so most of our mixtures were pretty nasty and weird," she says. "We wanted people to pay attention to 'here's this really complex thing – can I pick another complex thing as being different?'"

The scientists presented their volunteers with three vials of scents at a time: two matched, and one different. Volunteers were asked to identify the one scent that was different from the others. Each volunteer made 264 such comparisons. Vosshall and her colleagues tallied how often their 26 subjects correctly identified the outlier. From there, they extrapolated how many different scents the average person would be able to discriminate if they were presented with all the possible mixtures that could be made from their 128 odorants. "It's like the way the census works: to count the number of people who live in the United States, you don't knock on every single door, you sample and then extrapolate," she explains. "That's how I like to think of this study. We knocked on a few doors." In this way, they estimated that the average person can discriminate between at least one trillion different odors.
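To see why exhaustive testing was off the table and sampling was necessary, one can count the possible mixtures directly. The palette of 128 odorants and the mixture sizes come from the article; the rest is plain combinatorics:

```python
from math import comb

# Number of distinct mixtures of k odorants drawn from 128 molecules.
# Exhaustive testing of every pair of mixtures is hopeless, which is
# why the authors sampled comparisons and extrapolated, census-style.
for k in (10, 20, 30):
    print(f"C(128, {k}) = {comb(128, k):.3e}")

# What was actually measured: 26 subjects x 264 three-vial comparisons.
total_comparisons = 26 * 264
print(f"comparisons actually performed: {total_comparisons}")
```

Even for 10-component mixtures there are over a hundred trillion possibilities, against fewer than seven thousand comparisons actually performed, which makes the census analogy apt.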

mirikelam's curator insight, February 18, 2015 4:02 AM

Ready for new olfactory discoveries!

Scooped by Dr. Stefan Gruenwald!

Nanoscale optical switch breaks miniaturization barrier

Nanoscale optical switch breaks miniaturization barrier | Amazing Science |

A new ultra-fast, ultra-small optical switch could advance the day when photons replace electrons in consumer products ranging from cell phones to automobiles. It was developed by a team of scientists from Vanderbilt University, University of Alabama-Birmingham, and Los Alamos National Laboratory.

Described in the March 12 issue of the journal Nano Letters, the new optical device can turn on and off trillions of times per second. It consists of individual switches that are only 200 nanometers in diameter — much smaller than the current generation of optical switches. It overcomes one of the major technical barriers to the spread of electronic devices that detect and control light: miniaturizing the size of ultrafast optical switches.

The ultrafast switch is made out of a metamaterial (artificial material) engineered to have properties that are not found in nature. The metamaterial consists of nanoscale particles of vanadium dioxide (VO2) — a crystalline solid that can rapidly switch back and forth between an opaque, metallic phase and a transparent, semiconducting phase — which are deposited on a glass substrate and coated with a “nanomesh” of tiny gold nanoparticles.

The scientists report that bathing these gold nanoparticles with brief pulses from an ultrafast laser generates hot electrons in the gold particles that jump into the vanadium dioxide and cause it to undergo its phase change in a few trillionths of a second.
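An order-of-magnitude reading of "a few trillionths of a second": the one-picosecond transition time below is an assumption for illustration, not a measured value from the paper.

```python
# A phase transition completing in ~1 picosecond (assumed) caps the
# on/off cycle time near one terahertz, roughly a thousand times
# faster than gigahertz-class optoelectronic chips.
switch_time_s = 1e-12            # assumed: ~1 ps per transition
max_rate_hz = 1.0 / switch_time_s

print(f"max toggle rate: {max_rate_hz:.1e} Hz")
print(f"speedup over a 1 GHz chip: {max_rate_hz / 1e9:.0f}x")
```

This matches the factor-of-a-thousand comparison with current gigahertz-speed optical chips mentioned below.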

“We had previously triggered this transition in vanadium dioxide nanoparticles directly with lasers and we wanted to see if we could do it with electrons as well,” said Richard Haglund, Stevenson Professor of Physics at Vanderbilt, who led the study. “Not only does it work, but the injection of hot electrons from the gold nanoparticles also triggers the transformation with one fifth to one tenth as much energy input required by shining the laser directly on the bare VO2.”

Both industry and government are investing heavily in efforts to integrate optics and electronics, because it is generally considered to be the next step in the evolution of information and communications technology. Intel, Hewlett-Packard and IBM have been building chips with increasing optical functionality for the last five years that operate at gigahertz speeds, one thousandth that of the VO2 switch.

Scooped by Dr. Stefan Gruenwald!

Scientists are making paint that never fades, by mimicking iridescent bird feathers

Scientists are making paint that never fades, by mimicking iridescent bird feathers | Amazing Science |

Among the taxidermal specimens in Harvard’s Museum of Comparative Zoology, past centuries-old fur coats, arises a flicker of brilliant blue. This is the spangled cotinga. Surprisingly, the cotinga is about as old as everything in the room, but its color is still as dazzling as the day it was brought to the museum. The cotinga—or rather its feathers—achieve this effect through structural color.

Unlike the colors we usually think of, which arise from paints and dyes absorbing certain wavelengths of light and reflecting the remainder, structural color is created when an object’s very nanostructure amplifies a specific wavelength. Cells in the cotinga’s feathers have a series of tiny pores spaced just right so that blues (and not much of anything else) are reflected back to our eyes. Because of this, if the feathers were thoroughly pulverized, the formation of pores and therefore the color would be lost. It also means that the same color could be produced from an entirely different material, if one could recreate the same pattern made by the feathers' pores.

Researchers led by Vinothan N. Manoharan at the Harvard School of Engineering and Applied Sciences want to recreate this effect, giving man-made materials structural color. Producing structural color is not easy, though; it often requires a material’s molecules to be in a very specific crystalline pattern, like the natural structure of an opal, which reflects a wide array of colors. But the pores on the cotinga’s feathers lack a regular order and are therefore a prime target for imitation.

Manoharan's lab has devised a system where microcapsules are filled with a disordered solution of even smaller particles suspended in water. When the microcapsule is partly dried out, it shrinks, bringing the particles closer and closer together. Eventually the average distance between all the particles will give rise to a specific reflected color from the capsule. Shrink the capsule a bit more, and they become another color, and then another.
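A toy model of the effect: for a quasi-ordered structure, the peak reflected wavelength can be estimated with a first-order Bragg-like relation, lambda ~ 2 * n * d. The spacings and effective refractive index below are illustrative guesses, not values from the study:

```python
def reflected_wavelength_nm(spacing_nm: float, n_eff: float = 1.33) -> float:
    """First-order Bragg-like estimate of the peak reflected wavelength
    for a structure with characteristic particle spacing d:
    lambda ~= 2 * n_eff * d. Both parameters are illustrative."""
    return 2 * n_eff * spacing_nm

# Drying the microcapsule shrinks the particle spacing,
# blue-shifting the reflected color:
for d in (250, 210, 180):  # nm, illustrative spacings
    print(f"spacing {d} nm -> peak ~{reflected_wavelength_nm(d):.0f} nm")
```

Under these assumed numbers, shrinking the spacing from 250 nm to 180 nm walks the reflected peak from red (~665 nm) down to blue (~479 nm), which is the qualitative behavior the paragraph describes.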

“Most color you get in paints, coatings or cosmetics, even, comes from the selective absorption and reflection of light. What that means is that the material is absorbing some energy, and that means that over time, the material will fade,” says Manoharan.

The sun’s energy pummels the molecules in conventional pigments. Eventually, the molecules simply deteriorate and no longer absorb the colors they used to, leading to sun bleaching. Manoharan’s group is currently testing their innovation to see if it can create an effectively ageless color.

Electronic display technology—for example, e-readers—might also benefit from this advance. The microcapsules could be used in displays that create pixels with colored particles rather than LEDs, liquid crystals, or black-and-white “electronic ink.”

“We think it could be possible to create a full-color display that won’t fade over time,” says Manoharan. “The dream is that you could have a piece of flexible plastic that you can put graphics on in full color and read in bright sunlight.”

No comment yet.
Scooped by Dr. Stefan Gruenwald!

Three-Fourths of Flu-Infected People Won’t Know It: Study Shows

Three-Fourths of Flu-Infected People Won’t Know It: Study Shows | Amazing Science |
Nearly three fourths of all influenza infections each season are asymptomatic, according to a recent study, indicating officials know less about the influenza virus than once thought.

Influenza causes roughly 250,000–500,000 deaths worldwide each year.[1] In the 20th century there were three influenza pandemics for which there are varying mortality estimates: 1918 A/H1N1, at least 20–40 million excess deaths; 1957 A/H2N2, about 4 million excess deaths; and 1968 A/H3N2, about 2 million excess deaths.[2–4] In 2009 a new pandemic virus,[5] influenza A(H1N1)pdm09, emerged in Mexico[6] and spread globally over 2009–10, causing an estimated 200,000 respiratory deaths and 83,000 cardiovascular deaths during the first 12 months of circulation.[7] WHO declared an end to the pandemic on Aug 10, 2010.[8] However, a further pandemic wave occurred in some European and other countries outside North America[9] in 2010–11, with reports of excess deaths in, for example, England.[10]

Internationally, influenza activity surveillance provides real-time information to inform prevention and control policy.[11] Surveillance focuses on cases seeking medical attention: the so-called tip of the iceberg of infection. Underestimation of the number of community cases leads to overestimates of severity.[12, 13] Heightened concern during a pandemic can change patient consultation thresholds and clinician recording and investigation behavior, thus distorting surveillance information.[14] Information on the community burden of influenza is key to informing control,[15] but is not routinely collected. For example, influenza transmission models, which are widely used to consider the efficacy and cost-effectiveness of vaccines, antivirals, and non-pharmaceutical countermeasures, depend on valid epidemiological estimates of the community occurrence of disease.

Testing the blood of the participants at the end of each season allowed researchers to conclude that while a great many participants proved to have been infected each year, approximately 77 percent of them never displayed any symptoms of infection, proving to be asymptomatic. Whether or not the flu infections cause unusual adverse symptoms in any of these infected participants remained unclear.

What does this mean? The study concluded that the human body might be more capable of fighting off, or at least quelling, the influenza virus than is commonly thought. Health organizations in North America and Europe both push for vaccination against the seasonal influenza virus in order to help stifle its spread, but past studies by these same organizations have shown that the influenza vaccine only proves effective at preventing infection 50 to 60 percent of the time.
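The surveillance implication of the 77 percent figure can be illustrated with hypothetical numbers (the community total below is invented for the example): if surveillance counts only symptomatic cases, any severity ratio computed from detected cases is inflated by the undercount factor.

```python
# Illustrative arithmetic: with ~77% of infections asymptomatic,
# surveillance that counts only symptomatic cases sees a fraction of
# the true total, and severity estimates (e.g. deaths per detected
# case) are inflated by the same factor.
asymptomatic_fraction = 0.77
true_infections = 100_000          # hypothetical community total
observed = true_infections * (1 - asymptomatic_fraction)

undercount_factor = true_infections / observed
print(f"observed cases: {observed:.0f}")
print(f"severity overestimated by ~{undercount_factor:.1f}x")
```

This is the "tip of the iceberg" problem described above: dividing by a count that misses more than three quarters of infections roughly quadruples the apparent severity.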

The study was funded by the Medical Research Council and Wellcome Trust and published by The Lancet on March 17, 2014.

No comment yet.