Amazing Science
Scooped by Dr. Stefan Gruenwald onto Amazing Science!

Fracking and Shale Oil Won’t Lead to U.S. Energy Independence


The United States could see a surge in oil production that could make it the world’s leading oil producer within a decade, according to a new report from the International Energy Agency. But that lead will likely be temporary, and it still won’t allow the United States to stop importing oil. Barring technological breakthroughs in oil production and major reductions in consumption, the United States will need to rely on oil from outside its borders for the foreseeable future.


This week’s IEA report predicts that a relatively new technology for extracting oil from shale rock could make the United States the world’s leading oil producer within a decade, beating the current leader, Saudi Arabia. The idea that the U.S. could overtake Saudi Arabia, even temporarily, is a stunning development after years of seemingly inexorable declines in domestic oil production. U.S. production had fallen from 10 million barrels per day in the 1980s to 6.9 million barrels per day in 2008, even as consumption increased from 15.7 million barrels per day in 1985 to 19.5 million barrels per day in 2008. The IEA estimates that production could reach 11.1 million barrels per day by 2020, almost entirely because of increases in the production of shale oil, which is extracted using the same horizontal drilling and fracking techniques that have flooded the U.S. with cheap natural gas.


As of the end of 2011, production had already increased to 8.1 million barrels per day, almost entirely because of shale oil. Production from the two major U.S. shale resources, the Bakken formation in North Dakota and Montana and the Eagle Ford shale in Texas, now totals about 900,000 barrels per day. In comparison, Saudi Arabia is expected to produce 10.6 million barrels per day in 2020. The shale oil resource, however, is limited. The IEA expects production to start gradually declining by the mid-2020s, at which time Saudi Arabia will reclaim the top spot.
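A quick sanity check on the figures above: even at the projected 2020 peak, production falls well short of consumption, which is why imports remain necessary. A minimal sketch in Python, assuming (for illustration only) that consumption stays near its 2008 level of 19.5 million barrels per day:

```python
# Rough arithmetic from the figures quoted above, in million barrels per day.
# Illustrative assumption: consumption stays near its 2008 level.
production_2008 = 6.9
projected_production_2020 = 11.1   # IEA projection cited above
consumption = 19.5                 # 2008 U.S. consumption

def import_share(production, consumption):
    """Fraction of consumption that must be covered by imports."""
    return max(consumption - production, 0.0) / consumption

print(f"2008 import share: {import_share(production_2008, consumption):.0%}")
print(f"2020 import share (consumption unchanged): "
      f"{import_share(projected_production_2020, consumption):.0%}")
```

On these assumptions the import share drops from roughly two-thirds to a bit over two-fifths, but not to zero, consistent with the article's thesis.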


Shale oil is creating a surge in U.S. oil production in part because it’s easy to find, says David Houseknecht, a scientist at the U.S. Geological Survey. The oil is spread over large areas, compared to the relatively small pockets of more conventional oil deposits in the United States. So whereas wildcatters drilling for conventional oil might come up empty two-thirds of the time or more, over 95 percent of shale oil wells strike oil.


Just how much shale oil can be produced—and how fast—depends heavily on two factors: the price of oil, and how easy it is to overcome possible local objections to fracking, says Richard Sears, a former executive at Royal Dutch Shell and a visiting scientist at MIT. Shale oil costs significantly more to produce than oil in Saudi Arabia and many other parts of the world, so for oil companies to go after this resource, oil prices need to stay relatively high. It’s hard to put a firm number on it, but Sears estimates that $50 to $60 a barrel would be enough, compared to the $85-per-barrel price of oil now. Houseknecht puts the cost of production closer to $70 a barrel. Although costs for producing conventional oil in the Middle East also vary, they typically don’t change by more than $10 per barrel.

James Krall's curator insight, October 7, 2013 12:42 AM

I really think the U.S. should stop importing so much oil from the Middle East, because you never know when they could cut us off over political differences. At least if we were to produce some and import some, the price at the pump would go lower for us, which would boost the economy. But if we are to become a leading producer of oil in the world, I think we should make ourselves independent in producing energy for ourselves. I think it would lead to fewer problems and make things easier for everybody.

Scooped by Dr. Stefan Gruenwald!

20,000+ FREE Online Science and Technology Lectures from Top Universities


NOTE: To subscribe to the RSS feed of Amazing Science, copy into the URL field of your browser and click "subscribe".


This newsletter is aggregated from over 1450 news sources:


All my Tweets and Scoop.It! posts sorted and searchable:



You can search through all the articles semantically on my archived twitter feed.


NOTE: All articles in the Amazing Science newsletter can also be sorted by topic. To do so, click the FIND button (symbolized by the funnel at the top right of the screen) to display all the relevant postings sorted by topics.


You can also type your own query, e.g., if you are looking for articles involving "dna" as a keyword. Or click on the little funnel symbol at the top right of the screen.


MOST_READ • 3D_printing • aging • AI • anthropology • art • astronomy • bigdata • bioinformatics • biology • biotech • chemistry • computers • cosmology • education • environment • evolution • future • genetics • genomics • geosciences • green_energy • history • language • map • material_science • math • med • medicine • microscopy • nanotech • neuroscience • paleontology • photography • photonics • physics • postings • robotics • science • technology • video 

Siegfried Holle's curator insight, July 4, 2014 8:45 AM

Your knowledge is your strength and power 

Saberes Sin Fronteras Ong's curator insight, November 30, 2014 5:33 PM

Free access to documents from the world's best universities

♥ princess leia ♥'s curator insight, December 28, 2014 11:58 AM

WoW  .. Expand  your mind!! It has room to grow!!! 

Scooped by Dr. Stefan Gruenwald!

Scientists watch live taste cells in action


Scientists have for the first time captured live images of the process of taste sensation on the tongue. The international team imaged single cells on the tongue of a mouse with a specially designed microscope system. "We've watched live taste cells capture and process molecules with different tastes," said biomedical engineer Dr Steve Lee, from the ANU Research School of Engineering.

There are more than 2,000 taste buds on the human tongue, which can distinguish at least five tastes: salty, sweet, sour, bitter and umami. However, the relationship between the many taste cells within a taste bud and our perception of taste has been a long-standing mystery, said Professor Seok-Hyun Yun from Harvard Medical School. "With this new imaging tool we have shown that each taste bud contains taste cells for different tastes," said Professor Yun.

The team also discovered that taste cells responded not only to molecules contacting the surface of the tongue, but also to molecules in the blood circulation. "We were surprised by the close association between taste cells and the blood vessels around them," said Assistant Professor Myunghwan (Mark) Choi, from Sungkyunkwan University in South Korea. "We think that tasting might be more complex than we expected, and involve an interaction between the food taken orally and blood composition," he said.

The team imaged the tongue by shining a bright infrared laser on to the mouse's tongue, which caused different parts of the tongue and the flavor molecules to fluoresce. The scientists captured the fluorescence from the tongue with a technique known as intravital multiphoton microscopy. They were able to pick out the individual taste cells within each taste bud, as well as blood vessels up to 240 microns below the surface of the tongue. The breakthrough complements recent studies by other research groups that identified the areas in the brain associated with taste.

The team now hopes to develop an experiment to monitor the brain while imaging the tongue to track the full process of taste sensation. However to fully understand the complex interactions that form our basic sense of taste could take years, Dr Lee said. "Until we can simultaneously capture both the neurological and physiological events, we can't fully unravel the logic behind taste," he said.

The research has been published in the latest edition of Nature Publishing Group's Scientific Reports.

Scooped by Dr. Stefan Gruenwald!

Complex genetic ancestry of Americans uncovered


By comparing the genes of current-day North and South Americans with African and European populations, an Oxford University study has found the genetic fingerprints of the slave trade and colonization that shaped migrations to the Americas hundreds of years ago.

The study published in Nature Communications found that:

  • While Spaniards provide the majority of European ancestry in continental American Hispanic/Latino populations, the most common European genetic source in African-Americans and Barbadians comes from Great Britain.
  • The Basques, a distinct ethnic group spread across current-day Spain and France, provided a small but distinct genetic contribution to current-day Continental South American populations, including the Maya in Mexico.
  • The Caribbean Islands of Puerto Rico and the Dominican Republic are genetically similar to each other and distinct from the other populations, probably reflecting a different migration pattern between the Caribbean and mainland America.
  • Compared to South Americans, people from Caribbean countries (such as Barbados) had a larger genetic contribution from Africa.
  • The ancestors of current-day Yoruba people from West Africa (one of the largest African ethnic groups) provided the largest contribution of genes from Africa to all current-day American populations.
  • The proportion of African ancestry varied across the continent, from virtually zero (in the Maya people from Mexico) to 87% in current-day Barbados.
  • South Italy and Sicily also provided a significant European genetic contribution to Colombia and Puerto Rico, in line with the known history of Italian emigrants to the Americas in the late 19th and early 20th century.
  • One of the African-American groups from the USA had French ancestry, in agreement with historical French immigration into the colonial Southern United States.
  • The proportion of genes from European versus African sources varied greatly from individual to individual within recipient populations.

The team, which also included researchers from UCL (University College London) and the Universita' del Sacro Cuore of Rome, analyzed more than 4,000 previously collected DNA samples from 64 different populations, covering multiple locations in Europe, Africa and the Americas. Since migration has generally flowed from Africa and Europe to the Americas over the last few hundred years, the team compared the 'donor' African and European populations with 'recipient' American populations to track where the ancestors of current-day North and South Americans came from.

'We found that the genetic profile of Americans is much more complex than previously thought,' said study leader Professor Cristian Capelli from the Department of Zoology. The research team analyzed DNA samples collected from people in Barbados, Colombia, the Dominican Republic, Ecuador, Mexico, Puerto Rico and African-Americans in the USA. They used a technique called haplotype-based analysis to compare the pattern of genes in these 'recipient populations' to 'donor populations' in areas where migrants to America came from.

Scooped by Dr. Stefan Gruenwald!

New algorithm for 3D structures from 2D images will speed up protein structure discovery 100,000 fold


One of the great challenges in molecular biology is to determine the three-dimensional structure of large biomolecules such as proteins. But this is a famously difficult and time-consuming task. The standard technique is x-ray crystallography, which involves analyzing the x-ray diffraction pattern from a crystal of the molecule under investigation. That works well for molecules that form crystals easily.

But many proteins, perhaps most, do not form crystals easily. And even when they do, they often take on unnatural configurations that do not resemble their natural shape. So finding another reliable way of determining the 3-D structure of large biomolecules would be a huge breakthrough. Today, Marcus Brubaker and a couple of pals at the University of Toronto in Canada say they have found a way to dramatically improve a 3-D imaging technique that has never quite matched the utility of x-ray crystallography.

The new technique is based on an imaging process called electron cryomicroscopy. This begins with a purified solution of the target molecule that is frozen into a thin film just a single molecule thick. This film is then photographed using a process known as transmission electron microscopy—it is bombarded with electrons and those that pass through are recorded. Essentially, this produces two-dimensional “shadowgrams” of the molecules in the film. Researchers then pick out each shadowgram and use them to work out the three-dimensional structure of the target molecule.

This process is hard for a number of reasons. First, there is a huge amount of noise in each image so even the two-dimensional shadow is hard to make out. Second, there is no way of knowing the orientation of the molecule when the shadow was taken so determining the 3-D shape is a huge undertaking.

The standard approach to solving this problem is little more than guesswork. Dream up a potential 3-D structure for the molecule and then rotate it to see if it can generate all of the shadowgrams in the dataset. If not, change the structure, test it, and so on.

Obviously, this is a time-consuming process. The current state-of-the-art algorithm running on 300 cores takes two weeks to find the 3-D structure of a single molecule from a dataset of 200,000 images.
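To put the headline's claimed 100,000-fold speedup in perspective, here is a back-of-envelope calculation from the figures above (300 cores for two weeks per structure). The tidy round numbers are illustrative, not taken from the paper:

```python
# Back-of-envelope: what a 100,000-fold speedup would mean for the
# workload quoted above (one structure from a dataset of ~200,000 images).
cores = 300
weeks = 2
core_hours = cores * weeks * 7 * 24   # total compute for one structure
speedup = 100_000                     # factor claimed in the headline

new_core_minutes = core_hours / speedup * 60
print(f"baseline: {core_hours:,} core-hours")
print(f"after speedup: about {new_core_minutes:.0f} core-minutes")
```

In other words, a task that ties up a 300-core cluster for a fortnight would shrink to roughly an hour on a single core.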

Scooped by Dr. Stefan Gruenwald!

Chinese scientists admitted to tweaking the genes of human embryos for the first time in history


A group of Chinese scientists just reported that they modified the genome of human embryos, something that has never been done in the history of the world, according to a report in Nature News.

A recent biotech discovery - one that has been called the biggest biotech discovery of the century - showed how scientists might be able to modify a human genome when that genome was still just in an embryo.

This could change not only the genetic material of a person, but could also change the DNA they pass on, removing "bad" genetic codes (and potentially adding "good" ones) and taking an active hand in evolution.

Concerned scientists published an argument that no one should edit the human genome in this way until the consequences are better understood, after a report uncovered rumours that Chinese scientists were already working on using this technology.

But this new paper, published April 18 in the journal Protein and Cell by a Chinese group led by gene-function researcher Junjiu Huang of Sun Yat-sen University, shows that work has already been done, and Nature News spoke to a Chinese source that said at least four different groups are "pursuing gene editing in human embryos."

Specifically, the team tried to modify a gene in a non-viable embryo that would have been responsible for a deadly blood disorder. But they noted in the study that they encountered serious challenges, suggesting there are still significant hurdles before clinical use becomes a reality.

CRISPR, the technology that makes all this possible, can find bad sections of DNA and cut them and even replace them with DNA that doesn't code for deadly diseases, but it can also make unwanted substitutions. Its level of accuracy is still very low.

Huang's group successfully introduced the DNA they wanted in only "a fraction" of the 28 embryos that had been "successfully spliced" (they tried 86 embryos at the start and tested 54 of the 71 that survived the procedure). They also found a "surprising number of ‘off-target’ mutations," according to Nature News.
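The attrition reported above can be expressed as rates. A small illustrative calculation from the counts in the article; the final figure is only an upper bound, since the desired edit appeared in just "a fraction" of the spliced embryos:

```python
# Attrition in the reported experiment, expressed as rates.
attempted = 86   # embryos at the start
survived = 71    # survived the procedure
tested = 54      # of the survivors, how many were genetically tested
spliced = 28     # reported as "successfully spliced"

print(f"survival rate: {survived / attempted:.0%}")
print(f"splice rate (of those tested): {spliced / tested:.0%}")
# The desired edit appeared in only "a fraction" of the 28 spliced embryos,
# so the end-to-end success rate is strictly below 28/86.
print(f"upper bound on overall success: {spliced / attempted:.0%}")
```

Even the most generous reading leaves the overall rate far from the near-100-percent success Huang says medical use would require.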

Huang told Nature News that they stopped then because they knew that if they were to do this work medically, that success rate would need to be closer to 100 percent. Our understanding of CRISPR needs to develop significantly before we get there, but this is a new technology that's changing rapidly.

Even though the Chinese team worked with non-viable embryos, embryos that cannot result in a live birth, editing the human genome and changing the DNA of an embryo is considered ethically questionable, because it could lead to more uses of this technology in humans. Changing the DNA of viable embryos could have unpredictable results for future generations, and some researchers want us to understand this better before putting it into practice.

Still, many researchers think this technology (most don't think it's ready to be used yet) could be invaluable. It could eliminate genetic diseases like sickle cell anemia, Huntington's disease, and cystic fibrosis, all devastating illnesses caused by genes that could theoretically be removed. Others fear that once we can do this accurately, it will inevitably be used to create designer humans with specific desired traits. After all, even though this research is considered questionable now, it is still actively being experimented with.

Huang told Nature News that both Nature and Science journals rejected his paper on embryo editing, "in part because of ethical objections." Neither journal commented to Nature News on that statement. Huang plans on trying to improve the accuracy of CRISPR in animal models for now. But CRISPR is reportedly quite easy to use, according to scientists who previously argued against doing this research in embryos now, meaning that it's incredibly likely these experiments will continue.

RegentsCareServices's curator insight, April 25, 10:42 AM

Chinese scientists admitted to tweaking the genes of human embryos for the first time in history

Scooped by Dr. Stefan Gruenwald!

New insight into how brain makes memories


Every time you make a memory, somewhere in your brain a tiny filament reaches out from one neuron and forms an electrochemical connection to a neighboring neuron. A team of biologists at Vanderbilt University, headed by Associate Professor of Biological Sciences Donna Webb, studies how these connections are formed at the molecular and cellular level.

The filaments that make these new connections are called dendritic spines and, in a series of experiments described in the April 17 issue of the Journal of Biological Chemistry, the researchers report that a specific signaling protein, Asef2, a member of a family of proteins that regulate cell migration and adhesion, plays a critical role in spine formation. This is significant because Asef2 has been linked to autism and the co-occurrence of alcohol dependency and depression.

"Alterations in dendritic spines are associated with many neurological and developmental disorders, such as autism, Alzheimer's disease and Down Syndrome," said Webb. "However, the formation and maintenance of spines is a very complex process that we are just beginning to understand."

Neuron cell bodies produce two kinds of long fibers that weave through the brain: dendrites and axons. Axons transmit electrochemical signals from the cell body of one neuron to the dendrites of another neuron. Dendrites receive the incoming signals and carry them to the cell body. This is the way that neurons communicate with each other.

As they wait for incoming signals, dendrites continually produce tiny flexible filaments called filopodia. These poke out from the surface of the dendrite and wave about in the region between the cells searching for axons. At the same time, biologists think that the axons secrete chemicals of an unknown nature that attract the filopodia. When one of the dendritic filaments makes contact with one of the axons, it begins to adhere and to develop into a spine. The axon and spine form the two halves of a synaptic junction. New connections like this form the basis for memory formation and storage.

The formation of spines is driven by actin, a protein that produces microfilaments and is part of the cytoskeleton. Webb and her colleagues showed that Asef2 promotes spine and synapse formation by activating another protein called Rac, which is known to regulate actin activity. They also discovered that yet another protein, spinophilin, recruits Asef2 and guides it to specific spines. "Once we figure out the mechanisms involved, then we may be able to find drugs that can restore spine formation in people who have lost it, which could give them back their ability to remember," said Webb.

Scooped by Dr. Stefan Gruenwald!

Scientists discover asthma's potential root cause and a novel treatment


Cardiff scientists have for the first time identified the potential root cause of asthma and an existing drug that offers a new treatment.

In a study published today in the journal Science Translational Medicine, University researchers working in collaboration with scientists at King's College London and the Mayo Clinic (USA) describe the previously unproven role of the calcium sensing receptor (CaSR) in causing asthma, a disease which affects 300 million people worldwide.

The team used mouse models of asthma and human airway tissue from asthmatic and non-asthmatic people to reach their findings.

Crucially, the paper highlights the effectiveness of a class of drugs known as calcilytics in manipulating CaSR to reverse all symptoms associated with the condition. These symptoms include airway narrowing, airway twitchiness and inflammation - all of which contribute to increased breathing difficulty.

"Our findings are incredibly exciting," said the principal investigator, Professor Daniela Riccardi, from the School of Biosciences. "For the first time we have found a link between airways inflammation, which can be caused by environmental triggers such as allergens, cigarette smoke and car fumes, and airways twitchiness in allergic asthma.

"Our paper shows how these triggers release chemicals that activate CaSR in airway tissue and drive asthma symptoms like airway twitchiness, inflammation, and narrowing. Using calcilytics, nebulized directly into the lungs, we show that it is possible to deactivate CaSR and prevent all of these symptoms."

Dr Samantha Walker, Director of Research and Policy at Asthma UK, who helped fund the research, said: 

"This hugely exciting discovery enables us, for the first time, to tackle the underlying causes of asthma symptoms. Five per cent of people with asthma don't respond to current treatments so research breakthroughs could be life changing for hundreds of thousands of people. 

Scooped by Dr. Stefan Gruenwald!

Scientists link unexplained childhood paralysis to enterovirus D68


A research team led by UC San Francisco scientists has found the genetic signature of enterovirus D68 (EV-D68) in half of California and Colorado children diagnosed with acute flaccid myelitis -- sudden, unexplained muscle weakness and paralysis -- between 2012 and 2014, with most cases occurring during a nationwide outbreak of severe respiratory illness from EV-D68 last fall. The finding strengthens the association between EV-D68 infection and acute flaccid myelitis, which developed in only a small fraction of those who got sick. The scientists could not find any other pathogen capable of causing these symptoms, even after checking patient cerebrospinal fluid for every known infectious agent.

Researchers analyzed the genetic sequences of EV-D68 in children with acute flaccid myelitis and discovered that they all corresponded to a new strain of the virus, designated strain B1, which emerged about four years ago and had mutations similar to those found in poliovirus and another closely related nerve-damaging virus, EV-D70. The B1 strain was the predominant circulating strain detected during the 2014 EV-D68 respiratory outbreak, and the researchers found it both in respiratory secretions and -- for the first time -- in a blood sample from one child as his acute paralytic illness was worsening.

The study also included a pair of siblings, both of whom were infected with genetically identical EV-D68 virus, yet only one of whom developed acute flaccid myelitis. 

"This suggests that it's not only the virus, but also patients' individual biology that determines what disease they may present with," said Charles Chiu, MD, PhD, an associate professor of Laboratory Medicine and director of UCSF-Abbott Viral Diagnostics and Discovery Center. "Given that none of the children have fully recovered, we urgently need to continue investigating this new strain of EV-D68 and its potential to cause acute flaccid myelitis."

Among the 25 patients with acute flaccid myelitis in the study, 16 were from California and nine were from Colorado. Eleven were part of geographic clusters of children in Los Angeles and in Aurora, Colorado, who became symptomatic at the same time, and EV-D68 was detected in seven of these patients.

Scooped by Dr. Stefan Gruenwald!

Fast and Accurate 3D Imaging Technique to Track Optically-Trapped Particles


Optical tweezers have been used as an invaluable tool for exerting micro-scale force on microscopic particles and manipulating three-dimensional (3-D) positions of particles. Optical tweezers employ a tightly-focused laser whose beam diameter is smaller than one micrometer (1/100 of hair thickness), which generates attractive force on neighboring microscopic particles moving toward the beam focus. Controlling the positions of the beam focus enabled researchers to hold the particles and move them freely to other locations so they coined the name “optical tweezers.”


To locate the optically-trapped particles by a laser beam, optical microscopes have usually been employed. Optical microscopes measure light signals scattered by the optically-trapped microscopic particles and the positions of the particles in two dimensions. However, it was difficult to quantify the particles’ precise positions along the optic axis, the direction of the beam, from a single image, which is analogous to the difficulty of determining the front and rear positions of objects when closing an eye due to a lack of depth perception. Furthermore, it became more difficult to measure precisely 3-D positions of particles when scattered light signals were distorted by optically-trapped particles having complicated shapes or other particles occlude the target object along the optic axis.


Professor YongKeun Park and his research team in the Department of Physics at the Korea Advanced Institute of Science and Technology (KAIST) employed an optical diffraction tomography (ODT) technique to measure 3-D positions of optically-trapped particles in high speed. The principle of ODT is similar to X-ray CT imaging commonly used in hospitals for visualizing the internal organs of patients. Like X-ray CT imaging, which takes several images from various illumination angles, ODT measures 3-D images of optically-trapped particles by illuminating them with a laser beam in various incidence angles.


The KAIST team used optical tweezers to trap a glass bead with a diameter of 2 micrometers, and moved the bead toward a white blood cell having complicated internal structures. The team measured the 3-D dynamics of the white blood cell as it responded to an approaching glass bead via ODT in the high acquisition rate of 60 images per second. Since the white blood cell screens the glass bead along an optic axis, a conventionally-used optical microscope could not determine the 3-D positions of the glass bead. In contrast, the present method employing ODT localized the 3-D positions of the bead precisely as well as measured the composition of the internal materials of the bead and the white blood cell simultaneously.

Scooped by Dr. Stefan Gruenwald!

How the Hubble Space Telescope has changed our view of the universe

Hubble has made more than 1.2 million observations and generated 100 terabytes of data, all while whirling around the Earth at 17,000 mph.

Without Hubble, astronomy "would be an awful lot poorer a field,” said Mike Garcia, a program scientist for Hubble at NASA headquarters in Washington. “The Hubble images capture the beauty of the heavens in a way that nothing else has done. The pictures are works of art, and nothing else has done that.”

Garcia, who has worked on NASA projects for over 30 years, has used the satellite’s images to study black holes in the Andromeda galaxy.

“Hubble found out that there's a supermassive black hole in the center of every galaxy. And that was a surprise,” Garcia said. “The black holes and the galaxies know about each other. The size of the black hole is in lockstep with the size of the galaxy."

Hubble doesn’t just stare into deep space; the telescope is just as good at observing objects closer to home. Hubble has provided scientists with images of Pluto’s four moons and photographic evidence that Jupiter’s Great Red Spot has been shrinking, as well as treating viewers to the fragments of a comet crashing into the gas planet.

Garcia’s favorite Hubble image captures the Andromeda galaxy’s nucleus, 2 million light-years away, which is actually pretty close. "It's a double nucleus, which is really rare. And it surrounds a supermassive black hole," Garcia said. "Only an astronomer would love it. It’s not an image the public would go wild over."

But there are plenty of images the public has gone wild over. Hubble images are embedded in our culture – seen in frames on walls, on computer screen savers and postage stamps. “An image captures people’s imaginations right away,” said John Trauger, a senior scientist at the Jet Propulsion Laboratory in La Cañada-Flintridge. “Hubble has really helped the idea of communicating science.”

Trauger helped process one of Hubble’s most recognizable images, featuring a dying star throwing dust back into space. “MyCn18,” an hourglass-shaped nebula with a green eye-like center, was photographed in 1996 and made the cover of both National Geographic and the Pearl Jam album “Binaural.”

Trauger was also the principal investigator of JPL’s mission to repair Hubble after it was launched with a flaw that rendered all its instruments “unfocusable.” In 1993, Space Shuttle Endeavour installed a new camera, called Wide Field Planetary Camera 2, giving us a view into the deepest regions of space. The camera was replaced again with a third version in 2009. With Hubble, scientists can see the same amount of detail in objects 10 times farther than they would be able to get from a land-based observatory. That allows humans to view a region of space 1,000 times larger than what we can see from the ground, Trauger said.
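The 10-times-farther, 1,000-times-larger relationship quoted above follows from simple geometry: the volume of space surveyed grows as the cube of the limiting distance. A one-line check:

```python
# Seeing the same detail at 10x the distance surveys a spherical volume
# (4/3)*pi*(10R)^3 versus (4/3)*pi*R^3, i.e. 10^3 times larger.
distance_factor = 10
volume_factor = distance_factor ** 3
print(volume_factor)  # 1000
```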

Hubble’s quest to capture the universe in images has benefited Earthlings in other ways, too. As NASA and the military push for more advanced digital camera technologies, those improvements eventually find their way into our pocket-sized devices.

Twenty-five years after its launch and six years after its last servicing mission, Hubble is at the peak of its scientific productivity. NASA expects the telescope to keep working well into the 2020s. By 2037, the agency estimates, atmospheric drag will start to take its toll. Then it will decide whether to boost Hubble to a higher orbit or bring it back down.

Scooped by Dr. Stefan Gruenwald!

Autism and prodigy share a common genetic link

Researchers have uncovered the first evidence of a genetic link between prodigy and autism. The scientists found that child prodigies in their sample share some of the same genetic variations with people who have autism. These shared genetic markers occur on a single chromosome, according to the researchers from The Ohio State University and Nationwide Children’s Hospital in Columbus.

The findings confirm a hypothesis made by Joanne Ruthsatz, co-author of the study and assistant professor of psychology at Ohio State’s Mansfield campus. In a previous study, Ruthsatz and a colleague had found that half of the prodigies in their sample had a first- or second-degree relative with an autism diagnosis.

“Based on my earlier work, I believed there had to be a genetic connection between prodigy and autism and this new research provides the first evidence to confirm that,” Ruthsatz said.

The new study appears online in the journal Human Heredity.

These findings are the first step toward answering the big question, Ruthsatz said. “We now know what connects prodigy with autism. What we want to know is what distinguishes them. We have a strong suspicion that there’s a genetic component to that, as well, and that’s the focus of our future work,” she said.

The Human Heredity study involved five child prodigies and their families that Ruthsatz has been studying, some for many years. Each of the prodigies had received national or international recognition for a specific skill, such as math or music. All took tests to confirm their exceptional skills.

The researchers took saliva samples from the prodigies, and from between four and 14 of each prodigy’s family members. Each prodigy had between one and five family members in the study who had received a diagnosis on the autism spectrum.


A new wearable device, NailO, turns the user’s thumbnail into a miniature wireless track pad

Researchers at the MIT Media Laboratory are developing a new wearable device that turns the user’s thumbnail into a miniature wireless track pad. They envision that the technology could let users control wireless devices when their hands are full — answering the phone while cooking, for instance. It could also augment other interfaces, allowing someone texting on a cellphone, say, to toggle between symbol sets without interrupting his or her typing. Finally, it could enable subtle communication in circumstances that require it, such as sending a quick text to a child while attending an important meeting.

The researchers describe a prototype of the device, called NailO, in a paper they’re presenting next week at the Association for Computing Machinery’s Computer-Human Interaction conference in Seoul, South Korea.

According to Cindy Hsin-Liu Kao, an MIT graduate student in media arts and sciences and one of the new paper’s lead authors, the device was inspired by the colorful stickers that some women apply to their nails. “It’s a cosmetic product, popular in Asian countries,” says Kao, who is Taiwanese. “When I came here, I was looking for them, but I couldn’t find them, so I’d have my family mail them to me.”

Indeed, the researchers envision that a commercial version of their device would have a detachable membrane on its surface, so that users could coordinate surface patterns with their outfits. To that end, they used capacitive sensing — the same kind of sensing the iPhone’s touch screen relies on — to register touch, since it can tolerate a thin, nonactive layer between the user’s finger and the underlying sensors.


A layered fabric 3D printer for soft interactive objects

A team from Disney Research, Carnegie Mellon University and Cornell University has devised a 3-D printer that layers together laser-cut sheets of fabric to form soft, squeezable objects such as phone cases and toys. These objects can have complex geometries and incorporate circuitry that makes them interactive.

“Today’s 3-D printers can easily create custom metal, plastic, and rubber objects,” said Jim McCann, associate research scientist at Disney Research Pittsburgh. “But soft fabric objects, like plush toys, are still fabricated by hand. Layered fabric printing is one possible method to automate the production of this class of objects.”

The fabric printer is similar in principle to laminated object manufacturing, which takes sheets of paper or metal that have each been cut into a 2-D shape and then bonds them together to form a 3-D object. Fabric presents particular cutting and handling challenges, however, which the Disney team has addressed in the design of its printer.

The latest soft printing apparatus includes two fabrication surfaces: an upper cutting platform and a lower bonding platform. Fabric is fed from a roll into the device, where a vacuum holds the fabric up against the upper cutting platform while a laser cutting head moves below. The laser cuts a rectangular piece out of the fabric roll, then cuts the layer’s desired 2-D shape or shapes within that rectangle. This second set of cuts is left purposefully incomplete so that the shapes receive support from the surrounding fabric during the fabrication process.

Once the cutting is complete, the bonding platform is raised up to the fabric and the vacuum is shut off to release the fabric. The platform is lowered and a heated bonding head is deployed, heating and pressing the fabric against previous layers. The fabric is coated with a heat-sensitive adhesive, so the bonding process is similar to a person using a hand iron to apply non-stitched fabric ornamentation onto a costume or banner.

Once the process is complete, the surrounding support fabric is torn away by hand to reveal the 3-D object. The researchers demonstrated this technique by using 32 layers of 2-millimeter-thick felt to create a 2 ½-inch bunny. The process took about 2 ½ hours.
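The cut-then-bond cycle described above amounts to a simple per-layer control loop. The sketch below is a hypothetical simulation of that loop (all function names and the stubbed hardware actions are illustrative, not part of Disney’s actual control software); it reproduces the article’s felt-bunny numbers of 32 layers at 2 mm each:

```python
# Illustrative per-layer loop for layered fabric printing: each layer is
# cut while held by vacuum, then heat-bonded onto the growing stack.

# Stub hardware actions so the sketch runs standalone.
def hold_with_vacuum(): pass
def cut_rectangle_from_roll(): pass
def cut_shape(shape, partial): pass
def release_vacuum(): pass
def bond_layer(): pass

def print_layered_object(layer_shapes, layer_thickness_mm=2.0):
    """Simulate the cut/bond cycle; returns the finished stack height in mm."""
    stack_height_mm = 0.0
    for shape in layer_shapes:
        hold_with_vacuum()               # fabric pinned to the cutting platform
        cut_rectangle_from_roll()        # free a working rectangle from the roll
        cut_shape(shape, partial=True)   # incomplete cuts leave supporting tabs
        release_vacuum()
        bond_layer()                     # heated head presses layer onto the stack
        stack_height_mm += layer_thickness_mm
    return stack_height_mm

# The felt bunny from the article: 32 layers of 2-mm felt.
print(print_layered_object(["bunny_slice"] * 32))  # 64.0 mm, about 2.5 inches
```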

Two types of material can be used to create objects by feeding one roll of fabric into the machine from left to right, while a second roll of a different material is fed front to back. If one of the materials is conductive, the equivalent of wiring can be incorporated into the device. The researchers demonstrated the possibilities by building a fabric starfish that serves as a touch sensor, as well as a fabric smartphone case with an antenna that can harvest enough energy from the phone to light an LED.


New VR Technology Lets You Explore Worlds at the Nanoscale

Nanotronics Imaging, an Ohio-based company backed by PayPal founder and early Facebook investor Peter Thiel, makes atomic-scale microscopes that both researchers and industrial manufacturers can use in the production of nanoscale materials. Today at the Tribeca Disruptive Innovation Awards the company announced a new endeavor: the ability to view the microscopes’ output using virtual reality headsets like the Rift.

The new product, nVisible, will enable Nanotronics users to do virtual walkthroughs of nano-structures, which the company says will enable them to better visualize and understand the materials they’re working with. But most importantly, it could help manufacturers create more reliable processes for building nanoscale products—which has historically been a huge hurdle in working with such incredibly small materials.


Crime scene discovery – DNA methylation can tell DNA of identical twins apart

Since its first use in the 1980s – a breakthrough dramatised in the recent ITV series Code of a Killer – DNA profiling has been a vital tool for forensic investigators. Now researchers at the University of Huddersfield have solved one of its few limitations by successfully testing a technique for distinguishing between the DNA – or genetic fingerprint – of identical twins.

The probability of a DNA match between two unrelated individuals is about one in a billion. For two full siblings, the probability rises to about one in 10,000. But identical twins present exactly the same DNA profile as each other, which has created legal conundrums when it was not possible to tell which of the pair was guilty or innocent of a crime. This has led to prosecutions being dropped rather than run the risk of convicting the wrong twin.

Now Dr Graham Williams and his Forensic Genetics Research Group at the University of Huddersfield have developed a solution to the problem and published their findings in the journal Analytical Biochemistry. Previous methods have been proposed for distinguishing the DNA of twins. One is termed “mutation analysis”, where the whole genome of both twins is sequenced to identify mutations that might have occurred in one of them.

“If such a mutation is identified at a particular location in the twin, then that same particular mutation can be specifically searched for in the crime scene sample.  However, this is very expensive and time-consuming and is unlikely to be paid for by cash-strapped police forces,” according to Dr Williams, who has shown that a cheaper, quicker technique is available.

It is based on the concept of DNA methylation, which is effectively the molecular mechanism that turns various genes on and off. As twins get older, the degree of difference between them grows as they are subjected to increasingly different environments.  For example, one might take up smoking, or one might have a job outdoors and the other a desk job.  This will cause changes in the methylation status of the DNA.

In order to carry out speedy, inexpensive analysis of this, Dr Williams and his team propose a technique named “high resolution melt curve analysis” (HRMA). “What HRMA does is to subject the DNA to increasingly high temperatures until the hydrogen bonds break, known as the melting temperature. The more hydrogen bonds that are present in the DNA, the higher the temperature required to melt them,” explains Dr Williams.

“Consequently, if one DNA sequence is more methylated than the other, then the melting temperatures of the two samples will differ – a difference that can be measured, and which will establish the difference between two identical twins.”
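In code, the decision Dr Williams describes reduces to comparing the two samples’ melting temperatures against the instrument’s resolution. The sketch below is a toy illustration (the temperature readings and the 0.2°C resolution are invented for demonstration; they are not figures from the Huddersfield study):

```python
# Toy HRMA comparison: more methylation means more hydrogen bonding and a
# higher melting temperature, so a measurable Tm gap separates the samples.

def distinguishable(tm_a_celsius, tm_b_celsius, resolution_celsius=0.2):
    """True if the melt temperatures differ by more than the instrument noise."""
    return abs(tm_a_celsius - tm_b_celsius) > resolution_celsius

# Hypothetical readings for the two twins' crime-scene samples.
print(distinguishable(81.6, 82.3))   # True: the samples can be told apart
print(distinguishable(81.6, 81.65))  # False: within instrument noise
```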


Audi Has Made Diesel From Water And Carbon Dioxide

It’s the holy grail in energy production: produce a fuel that is both carbon neutral and can be poured directly into our current cars without the need to retrofit. There are scores of companies out there trying to do just that using vegetable oil, algae, and even the microbes found in panda poop to turn bamboo into fuel.

This week, German car manufacturer Audi declared that it has been able to create an “e-diesel” – a synthetic diesel fuel – by using renewable energy to produce a liquid fuel from nothing more than water and carbon dioxide. After a commissioning phase of just four months, the plant in Dresden operated by clean tech company Sunfire has managed to produce its first batch of what they’re calling “blue crude.” The product liquid is composed of long-chain hydrocarbon compounds, similar to fossil fuels, but free from sulfur and aromatics, and therefore burns soot-free.

The first step in the process involves harnessing renewable energy through solar, wind or hydropower. This energy is then used to heat water to temperatures in excess of 800°C (1,472°F). The steam is then broken down into oxygen and hydrogen through high-temperature electrolysis, a process in which an electric current splits the water molecules.

The hydrogen is then removed and mixed with carbon monoxide under high heat and pressure, creating a hydrocarbon product they’re calling "blue crude." Sunfire claim that the synthetic fuel is not only more environmentally friendly than fossil fuel, but that the efficiency of the overall process—from renewable power to liquid hydrocarbon—is very high at around 70%. The e-diesel can then be either mixed with regular diesel, or used as a fuel in its own right.
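The quoted ~70% figure is the product of the individual step efficiencies in the chain, from electricity through electrolysis to hydrocarbon synthesis. A back-of-envelope sketch (the per-step efficiencies here are illustrative assumptions; only the ~70% overall number comes from Sunfire):

```python
# Rough power-to-liquid energy chain for an e-diesel plant.
# Per-step efficiencies are illustrative; Sunfire quotes ~70% overall.

def power_to_liquid_kwh(renewable_kwh, electrolysis_eff=0.90, synthesis_eff=0.78):
    """Chemical energy stored in the fuel per unit of renewable electricity."""
    return renewable_kwh * electrolysis_eff * synthesis_eff

print(round(power_to_liquid_kwh(100.0), 1))  # 70.2 kWh of fuel per 100 kWh in
```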

But all may not be as it seems. The process used by Audi is actually called the Fischer-Tropsch process and has been known to scientists since the 1920s. It was even used by the Germans to turn coal into diesel during the Second World War when fuel supplies ran short. The process is currently used by many different companies all around the world, especially in countries where reserves of oil are low but reserves of other fossil fuels, such as gas and coal, are high.

And it would seem that Audi aren’t the first to think about using biogas facilities to produce carbon-neutral biofuels either. Another German company, Choren, has already made an attempt at producing biofuel using biogas and the Fischer-Tropsch process. Backed by Shell and Volkswagen, the company had all the support and funding it needed, but in 2011 it filed for bankruptcy due to impracticalities in the process.

Audi readily admits that none of the processes they use are new, but claim it’s how they’re going about it that is. They say that increasing the temperature at which the water is split increases the efficiency of the process and that the waste heat can then be recovered. Whilst their announcement might not be heralding a new fossil fuel-free era, the tech of turning green power into synthetic fuel could have applications as a battery to store excess energy produced by renewables.


The first hijacking of a medical telerobot raises important questions over the security of remote surgery

A crucial bottleneck that prevents life-saving surgery being performed in many parts of the world is the lack of trained surgeons. One way to get around this is to make better use of the ones that are available. Sending them over great distances to perform operations is clearly inefficient because of the time that has to be spent travelling. So an increasingly important alternative is the possibility of telesurgery with an expert in one place controlling a robot in another that physically performs the necessary cutting and dicing. Indeed, the sale of medical robots is increasing at a rate of 20 percent per year.

But while the advantages are clear, the disadvantages have been less well explored. Telesurgery relies on cutting edge technologies in fields as diverse as computing, robotics, communications, ergonomics, and so on. And anybody familiar with these areas will tell you that they are far from failsafe.

Today, Tamara Bonaci and pals at the University of Washington in Seattle examine the special pitfalls associated with the communications technology involved in telesurgery. In particular, they show how a malicious attacker can disrupt the behavior of a telerobot during surgery and even take over such a robot, the first time a medical robot has been hacked in this way.

The first telesurgery took place in 2001 with a surgeon in New York successfully removing the gall bladder of a patient in Strasbourg in France, more than 6,000 kilometers away. The communications ran over a dedicated fiber provided by a telecommunications company specifically for the operation. That’s an expensive option since dedicated fibers can cost tens of thousands of dollars.

Since then, surgeons have carried out numerous remote operations and begun to experiment with ordinary communications links over the Internet, which are significantly cheaper. Although there are no recorded incidents in which the communications infrastructure has caused problems during a telesurgery operation, there are still questions over security and privacy that have never been fully answered.


DNA 'cage' holding a payload of drugs set to begin clinical trial soon

Ido Bachelet, who was previously at Harvard’s Wyss Institute in Boston, Massachusetts and Israel’s Bar-Ilan University, intends to treat a patient who has been given six months to live. The patient is set to receive an injection of DNA nanocages designed to interact with and destroy leukemia cells without damaging healthy tissue. Speaking in December, he said: ‘Judging from what we saw in our tests, within a month that person is going to recover.’

DNA nanocages can be programmed to independently recognize target cells and deliver payloads, such as cancer drugs, to these cells. 

George Church, who is involved in the research at the Wyss Institute, explained that the idea of the microscopic robots is to make a ‘cage’ that protects a fragile or toxic payload and ‘only releases it at the right moment.’

These nanostructures are built upon a single strand of DNA which is combined with short synthetic strands of DNA designed by the experts.  When mixed together, they self-assemble into a desired shape, which in this case looks a little like a barrel.

Dr Bachelet said: 'The nanorobot we designed actually looks like an open-ended barrel, or clamshell that has two halves linked together by flexible DNA hinges and the entire structure is held shut by latches that are DNA double helixes.’

A complementary piece of DNA is attached to a payload, which enables it to bind to the inside of the biological barrel. The double helixes stay closed until specific molecules or proteins on the surface of cancer cells act as a 'key' to open the ‘barrel’ so the payload can be deployed.

'The nanorobot is capable of recognizing a small population of target cells within a large healthy population,’ Dr Bachelet continued.

‘While all cells share the same drug target that we want to attack, only those target cells that express the proper set of keys open the nanorobot and therefore only they will be attacked by the nanorobot and by the drug.’
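Logically, the latch mechanism Bachelet describes is an AND gate: the barrel opens only if the cell surface presents every ‘key’ molecule the DNA locks require. A minimal sketch of that gating logic (the marker names are hypothetical placeholders, not the actual molecular targets):

```python
# The nanorobot's DNA latches behave like an AND gate over surface markers:
# every latch must find its key molecule before the barrel opens.

def barrel_opens(cell_surface_markers, required_keys):
    """True only if the cell displays every key the latches require."""
    return required_keys <= cell_surface_markers  # set subset test

required = {"marker_A", "marker_B"}                   # hypothetical latch keys
leukemia_cell = {"marker_A", "marker_B", "marker_C"}
healthy_cell = {"marker_A"}                           # lacks one key

print(barrel_opens(leukemia_cell, required))  # True  -> payload released
print(barrel_opens(healthy_cell, required))   # False -> barrel stays shut
```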

The team has tested its technique in animals as well as cell cultures and said the ‘nanorobot attacked these [targets] with almost zero collateral damage.’ The method has many advantages over invasive surgery and blasts of drugs, which can be ‘as painful and damaging to the body as the disease itself,’ the team added.


Mammoth genome sequence completed

An international team of scientists has sequenced the complete genome of the woolly mammoth. A US team is already attempting to study the animals' characteristics by inserting mammoth genes into elephant stem cells. They want to find out what made the mammoths different from their modern relatives and how their adaptations helped them survive the ice ages.

The new genome study has been published in the journal Current Biology. Dr Love Dalén, at the Swedish Museum of Natural History in Stockholm, told BBC News that the first ever publication of the full DNA sequence of the mammoth could help those trying to bring the creature back to life. “It would be a lot of fun (in principle) to see a living mammoth, to see how it behaves and how it moves,” he said.

But he would rather his research was not used to this end. "It seems to me that trying this out might lead to suffering for female elephants and that would not be ethically justifiable."

Dr Dalén and the international group of researchers he is collaborating with are not attempting to resurrect the mammoth. But the Long Now Foundation, an organisation based in San Francisco, claims that it is. Now, with the publication of the complete mammoth genome, it could be a step closer to achieving its aim.

On its website, the foundation says its ultimate goal is “to produce new mammoths that are capable of repopulating the vast tracts of tundra and boreal forest in Eurasia and North America.” The goal, it adds, is not to make perfect copies of extinct woolly mammoths, but to focus on the mammoth adaptations needed for Asian elephants to live in the cold climate of the tundra.


Scientists use nanoscale building blocks and DNA 'glue' to shape 3-D superlattices

Taking child's play with building blocks to a whole new level – the nanometer scale – scientists at the U.S. Department of Energy's (DOE) Brookhaven National Laboratory have constructed 3D "superlattice" multicomponent nanoparticle arrays where the arrangement of particles is driven by the shape of the tiny building blocks. The method uses linker molecules made of complementary strands of DNA to overcome the blocks' tendency to pack together in a way that would separate differently shaped components. The results, published in Nature Communications, are an important step on the path toward designing predictable composite materials for applications in catalysis, other energy technologies, and medicine.

"If we want to take advantage of the promising properties of nanoparticles, we need to be able to reliably incorporate them into larger-scale composite materials for real-world applications," explained Brookhaven physicist Oleg Gang, who led the research at Brookhaven's Center for Functional Nanomaterials (CFN), a DOE Office of Science User Facility.

"Our work describes a new way to fabricate structured composite materials using directional bindings of shaped particles for predictable assembly," said Fang Lu, the lead author of the publication.

The research builds on the team's experience linking nanoparticles together using strands of synthetic DNA. Like the molecule that carries the genetic code of living things, these synthetic strands have complementary bases known by the genetic code letters G, C, T, and A, which bind to one another in only one way (G to C; T to A). Gang has previously used complementary DNA tethers attached to nanoparticles to guide the assembly of a range of arrays and structures. The new work explores particle shape as a means of controlling the directionality of these interactions to achieve long-range order in large-scale assemblies and clusters.
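The one-way pairing rule (G with C, A with T) is what makes the tethers programmable: a strand hybridizes only with its reverse complement. A minimal sketch of that rule:

```python
# Watson-Crick pairing: G binds only C, A binds only T, so a DNA tether
# hybridizes only with its reverse complement.

PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(strand):
    """The sequence that will hybridize with `strand`."""
    return "".join(PAIR[base] for base in reversed(strand))

print(reverse_complement("GATTC"))  # GAATC
# Complementing twice recovers the original strand.
print(reverse_complement(reverse_complement("GATTC")))  # GATTC
```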

Spherical particles, Gang explained, normally pack together to minimize free volume. DNA linkers – using complementary strands to attract particles, or non-complementary strands to keep particles apart – can alter that packing to some degree to achieve different arrangements. For example, scientists have experimented with placing complementary linker strands in strategic locations on the spheres to get the particles to line up and bind in a particular way. But it's not so easy to make nanospheres with precisely placed linker strands.

"We explored an alternate idea: the introduction of shaped nanoscale 'blocks' decorated with DNA tethers on each facet to control the directional binding of spheres with complementary DNA tethers," Gang said.

Technology Trends - Singularity Blog: Most Anticipated New Technologies for 2015/2016

Future timeline, a timeline of humanity's future, based on current trends, long-term environmental changes, advances in technology such as Moore's Law, the latest medical advances, and the evolving geopolitical landscape.



The Current State of Machine Intelligence

A few years ago, investors and startups were chasing “big data”. Now we’re seeing a similar explosion of companies calling themselves artificial intelligence, machine learning, or collectively “machine intelligence”. The Bloomberg Beta fund, which is focused on the future of work, has been investing in these approaches.

Computers are learning to think, read, and write. They’re also picking up human sensory function, with the ability to see and hear (arguably to touch, taste, and smell, though those have been of a lesser focus).

Machine intelligence technologies cut across a vast array of problem types (from classification and clustering to natural language processing and computer vision) and methods (from support vector machines to deep belief networks). All of these technologies are reflected on this landscape.

What this landscape doesn’t include, however important, is “big data” technologies. Some have used this term interchangeably with machine learning and artificial intelligence, but I want to focus on the intelligence methods rather than data, storage, and computation pieces of the puzzle for this landscape (though of course data technologies enable machine intelligence).

We’ve seen a few great articles recently outlining why machine intelligence is experiencing a resurgence, documenting the enabling factors of this resurgence. Kevin Kelly, for example, chalks it up to cheap parallel computing, large datasets, and better algorithms.

Machine intelligence is enabling applications we already expect, like automated assistants (Siri), adorable robots (Jibo), and identifying people in images (like the highly effective but unfortunately named DeepFace). However, it’s also doing the unexpected: protecting children from sex trafficking, reducing the chemical content in the lettuce we eat, helping us buy shoes online that fit our feet precisely, and destroying ’80s classic video games.

Big companies have a disproportionate advantage, especially those that build consumer products. The giants in search (Google, Baidu), social networks (Facebook, LinkedIn, Pinterest), content (Netflix, Yahoo!), mobile (Apple) and e-commerce (Amazon) are in an incredible position. They have massive datasets and constant consumer interactions that enable tight feedback loops for their algorithms (and these factors combine to create powerful network effects) — and they have the most to gain from the low hanging fruit that machine intelligence bears.

Best-in-class personalization and recommendation algorithms have enabled these companies’ success (it’s both impressive and disconcerting that Facebook recommends you add the person you had a crush on in college and Netflix tees up that perfect guilty pleasure sitcom).

Now they are all competing in a new battlefield: the move to mobile. Winning mobile will require lots of machine intelligence: state-of-the-art natural language interfaces (like Apple’s Siri), visual search (like Amazon’s “Firefly”), and dynamic question-answering technology that tells you the answer instead of providing a menu of links (all of the search companies are wrestling with this).

Large enterprise companies (IBM and Microsoft) have also made incredible strides in the field, though they don’t have the same human-facing requirements, so they are focusing their attention more on knowledge-representation tasks on large industry datasets, like IBM Watson’s application to assist doctors with diagnoses.


Two huge magma chambers sit beneath Yellowstone National Park

Underneath the bubbling geysers and hot springs of Yellowstone National Park in Wyoming sits a volcanic hot spot that has driven some of the largest eruptions on Earth. Geoscientists have now completely imaged the subterranean plumbing system and have found not just one, but two magma chambers underneath the giant volcano.

“The main new thing is we unveil a deeper and bigger magma reservoir in the lower crust,” says study author Hsin-Hua Huang, a seismologist at the University of Utah in Salt Lake City.

Scientists had already known about a plume, which brings molten rock up from deep in the mantle to a region about 60 kilometers below the surface. And they had also imaged a shallow magma chamber about 10 kilometers below the surface, containing about 10,000 cubic kilometers of molten material. But now they have found a deeper one, 4.5 times larger, that sits between 20 and 50 kilometers below the surface. “They found the missing link between the mantle plume and the shallow magma chamber,” says Peter Cervelli, a geophysicist in Anchorage, Alaska, who works at the U.S. Geological Survey’s Yellowstone Volcano Observatory.

The discovery does not, on its own, increase the chance of an eruption, which is driven by an emptying of the shallow chamber. The last major eruption was 640,000 years ago, and today the threat of earthquakes is far more likely. But the deeper chamber does mean that the shallow chamber can be replenished again and again. “Knowing that you have this additional reservoir tells you you could have a much bigger volume erupt over a relatively short time scale,” says co-author Victor Tsai, a geophysicist at the California Institute of Technology in Pasadena. The discovery, reported online today in Science, also confirms a long-suspected model for some volcanoes, in which a deep chamber of melted basalt, a dense iron- and magnesium-rich rock, feeds a shallower chamber containing a melted, lighter silicon-rich rock called a rhyolite.

The researchers used seismometers to measure the noise of earthquakes in order to take a sort of sonogram of Earth’s crust. When earthquakes pass through liquid material, seismic waves slow down. The team interprets these low-velocity regions as magma chambers (although these chambers are still mostly solid rock and contain only a small fraction of liquid melt). Distant earthquakes are useful for imaging deep structures, like the mantle plume, and local earthquakes can help to see the shallow chamber. Huang says his study is the first time that both types of data were combined so that the middle depths, and the deeper chamber, could be seen. His team used 11 seismometers from the EarthScope USArray to listen for the deep earthquakes and 69 seismometers from several local seismic networks to gather data from shallower earthquakes.
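Conceptually, interpreting such a tomographic model comes down to flagging cells where wave speeds fall well below the regional background. A toy version of that thresholding step (the velocities, background value, and 5% anomaly cutoff are all invented for illustration, not values from the study):

```python
# Toy tomography interpretation: flag grid cells whose seismic velocity
# falls well below the regional background -- the signature of partial melt.

def low_velocity_cells(velocities_kms, background_kms=3.9, anomaly_fraction=0.05):
    """Indices of cells slower than background by more than the cutoff."""
    cutoff = background_kms * (1.0 - anomaly_fraction)
    return [i for i, v in enumerate(velocities_kms) if v < cutoff]

# Hypothetical 1-D velocity profile with a slow zone in the middle (km/s).
profile = [3.9, 3.88, 3.5, 3.45, 3.6, 3.91]
print(low_velocity_cells(profile))  # [2, 3, 4]
```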

Lipid-nanoparticle-encapsulated siRNAs cure monkeys infected with West African Ebola strain

The treatment, known as TKM-Ebola-Guinea, targets the Makona strain of the virus, which caused the current deadly outbreak in West Africa. All three monkeys receiving the treatment were healthy when the trial ended after 28 days; three untreated monkeys died within nine days. The scientists cautioned that the drug's efficacy has not yet been proven in humans; at present, no treatments or vaccines for Ebola have been proven to work in humans.

University of Texas scientist Thomas Geisbert, who was the senior author of the study published in the journal Nature, said: "This is the first study to show post-exposure protection... against the new Makona outbreak strain of Ebola-Zaire virus." Results from human trials with the drug are expected in the second half of this year.

The current outbreak of Ebola virus in West Africa is unprecedented, causing more cases and fatalities than all previous outbreaks combined, and has yet to be controlled. Several post-exposure interventions have been employed under compassionate use to treat patients repatriated to Europe and the United States. However, the in vivo efficacy of these interventions against the new outbreak strain of Ebola virus is unknown.

In the current study, the scientists show that lipid-nanoparticle-encapsulated short interfering RNAs (siRNAs), rapidly adapted to target the Makona outbreak strain of Ebola virus, protected 100% of rhesus monkeys against lethal challenge when treatment was initiated three days after exposure, while the animals were viremic and clinically ill.

Although all infected animals showed evidence of advanced disease including abnormal hematology, blood chemistry and coagulopathy, siRNA-treated animals had milder clinical features and fully recovered, while the untreated control animals succumbed to the disease. These results represent the first successful demonstration of therapeutic anti-Ebola virus efficacy against the new outbreak strain in non-human primates and highlight the rapid development of lipid-nanoparticle-delivered siRNA as a countermeasure against this highly lethal human disease.
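For scale, the headline result (all 3 treated animals survived, all 3 untreated controls died) can be checked with a one-sided Fisher's exact test. This is an illustrative calculation from the reported counts, not a statistic quoted in the paper:

```python
from math import comb

def fisher_exact_one_sided(treated_alive, treated_total, control_alive, control_total):
    """One-sided Fisher's exact test: probability of a survival split at least
    this extreme in favor of treatment, with the table margins held fixed."""
    total = treated_total + control_total
    survivors = treated_alive + control_alive
    p = 0.0
    # Sum hypergeometric probabilities over tables with >= the observed
    # number of treated survivors.
    for k in range(treated_alive, min(treated_total, survivors) + 1):
        p += comb(treated_total, k) * comb(control_total, survivors - k) / comb(total, survivors)
    return p

# 3/3 treated survived, 0/3 controls survived
print(fisher_exact_one_sided(3, 3, 0, 3))  # 0.05
```

With only six animals, even a perfect split just reaches p = 0.05, which is one reason such results are described as a first demonstration rather than definitive proof.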

Scooped by Dr. Stefan Gruenwald!

Ultra-high-resolution nondestructive 3D imaging of biological cells with picosecond ultrasound


A team of researchers in Japan and Thailand reports the first known nondestructive 3-D scan of a single biological cell using a revised form of “picosecond ultrasound.” This new technique can achieve micrometer (millionth of a meter) resolution of live single cells, imaging their interiors in slices separated by 150 nanometers (0.15 micrometer), in contrast to the typical 0.5-millimeter (500-micrometer) spatial resolution of a standard medical MRI scan. The work is a proof of principle that could open the door to new ways of studying the physical properties of living cells by imaging them nondestructively in vivo, the researchers say.

The team accomplished the imaging by first placing a cell in solution on a titanium-coated sapphire substrate, then generating a point source of high-frequency sound by scanning a beam of focused ultrashort laser pulses over the titanium film. A second beam of laser pulses, focused on the same point, picked up tiny changes in optical reflectance caused by the sound traveling through the cell.

“By scanning both beams together, we’re able to build up an acoustic image of the cell that represents one slice of it,” explained co-author Professor Oliver B. Wright, who teaches in the Division of Applied Physics, Faculty of Engineering at Hokkaido University. “We can view a selected slice of the cell at a given depth by changing the timing between the two beams of laser pulses.”
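The depth selection Wright describes follows from simple time-of-flight arithmetic: the probe delay picks out sound that has traveled a given distance into the cell. A minimal sketch (the ~1,500 m/s speed of sound is an assumed textbook value for water-like biological material, not a figure from the paper):

```python
# Convert pump-probe delay to imaging depth: depth = sound_speed * delay.

SOUND_SPEED_M_S = 1500.0  # assumed speed of sound in cell-like material

def depth_from_delay(delay_s):
    """Depth (in meters) probed after a given pump-probe delay."""
    return SOUND_SPEED_M_S * delay_s

def delay_step_for_slice(slice_spacing_m):
    """Delay increment needed between successive depth slices."""
    return slice_spacing_m / SOUND_SPEED_M_S

# The reported 150-nanometer slice spacing corresponds to stepping the
# delay line in increments of roughly 100 picoseconds.
print(delay_step_for_slice(150e-9))  # 1e-10 s, i.e. 100 ps
```

This is why the technique is called "picosecond" ultrasound: nanometer-scale depth slices map onto picosecond-scale delays between the pump and probe pulses.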

“The time required for 3-D imaging [with conventional acoustic microscopes] probably remains too long to be practical,” Wright said. “Building up a 3-D acoustic image, in principle, allows you to see the 3-D relative positions of cell organelles without killing the cell.

“By using an ultraviolet-pulsed laser, we could improve the lateral resolution by about a factor of three — and greatly improve the image quality. And, switching to a diamond substrate instead of sapphire would allow better heat conduction away from the probed area, which, in turn, would enable us to increase the laser power and image quality.”

Scooped by Dr. Stefan Gruenwald!

New WiFi system uses LED lights to boost bandwidth tenfold


Researchers at Oregon State University have invented a new technology called WiFO (WiFi Free-space Optic) that can increase the bandwidth of WiFi systems tenfold, using optical transmission via LED lights. The technology could be integrated with existing WiFi systems to reduce bandwidth problems in crowded locations, such as airport terminals or coffee shops, and in homes where several people have multiple WiFi devices.

Experts say that recent advances in LED technology have made it possible to modulate the LED light more rapidly, opening the possibility of using light for wireless transmission in a “free space” optical communication system. “In addition to improving the experience for users, the two big advantages of this system are that it uses inexpensive components, and it integrates with existing WiFi systems,” said Thinh Nguyen, an OSU associate professor of electrical and computer engineering. Nguyen worked with Alan Wang, an assistant professor of electrical and computer engineering, to build the first prototype.

“I believe the WiFO system could be easily transformed into a marketable product, and we are currently looking for a company that is interested in further developing and licensing the technology,” Nguyen said. 

The system can potentially send data at up to 100 megabits per second. Although some current WiFi systems offer similar total bandwidth, that bandwidth must be shared among all connected devices, so each user might receive just 5 to 10 megabits per second, whereas the hybrid system could deliver 50–100 megabits per second to each user. In a home where telephones, tablets, computers, gaming systems, and televisions may all be connected to the Internet, the increased bandwidth would eliminate problems like video streaming that stalls and buffers (think Netflix).
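The sharing arithmetic behind those figures is easy to make concrete. In the sketch below, only the 100 Mbps channel capacity and the 5-10 Mbps shared figure come from the article; the 50 Mbps per-user optical capacity is an assumed illustrative number:

```python
# Per-user throughput: a shared WiFi channel splits its capacity across all
# users, while dedicated optical (LED) downlinks restore per-user rate.

WIFI_CAPACITY_MBPS = 100.0

def shared_wifi_rate(num_users):
    """Each user's share of a single WiFi channel."""
    return WIFI_CAPACITY_MBPS / num_users

def hybrid_rate(num_users, optical_mbps_per_user=50.0):
    """WiFi share plus a dedicated free-space-optical downlink per user
    (the 50 Mbps optical figure is illustrative, not from the article)."""
    return shared_wifi_rate(num_users) + optical_mbps_per_user

print(shared_wifi_rate(10))  # 10.0 Mbps each on WiFi alone
print(hybrid_rate(10))       # 60.0 Mbps each with an LED downlink
```

The key design point is that each LED cell serves only the users beneath it, so its capacity is not divided across the whole room the way a single radio channel is.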

The receivers are small photodiodes that cost less than a dollar each and could be connected through a USB port on current systems, or incorporated into the next generation of laptops, tablets, and smartphones. A provisional patent has been secured on the technology, and a paper was presented at the 17th ACM International Conference on Modeling, Analysis and Simulation of Wireless and Mobile Systems.
