Amazing Science
Scooped by Dr. Stefan Gruenwald

No more science fiction: 3D holographic images are here to stay


Research efforts in nanotechnology have significantly advanced the development of display devices. Graphene, an atomic layer of carbon whose isolation won scientists Andre Geim and Konstantin Novoselov the 2010 Nobel Prize in Physics, has emerged as a key component of flexible and wearable display devices. Owing to its fascinating electronic and optical properties and high mechanical strength, graphene has mainly been used in touch screens for wearable devices such as mobile phones. This technical advance has enabled devices such as smart watches, fitness bands and smart headsets to transition from science fiction into reality, even though their displays are still flat and two-dimensional.


But wearable display devices, in particular devices with a floating display, will remain one of the most significant trends in an industry that is projected to double every two years and exceed US$12 billion by 2018.


In a paper published in Nature Communications, we show how our technology realizes a wide-viewing-angle, full-color floating 3D display in graphene-based materials. Ultimately this will help transform wearable display devices into floating 3D displays.

A graphene enabled floating display is based on the principle of holography invented by Dennis Gabor, who was awarded the Nobel Prize in Physics in 1971. The idea of optical holography provides a revolutionary method for recording and displaying both 3D amplitude and phase of an optical wave that comes from an object of interest.


The physical realization of high-definition, wide-viewing-angle holographic 3D displays relies on generating a digital holographic screen composed of many small pixels. These pixels are used to bend the light that carries the information for display. The bending angle is determined by the refractive index of the screen material, according to the holographic correlation.


The smaller the refractive-index pixels, the larger the bending angle once the beam passes through the hologram. Nanometer-sized pixels are therefore essential if the reconstructed 3D object is to be viewed vividly over a wide angle. The process is complex, but the key physical step is to control the photoreduction of graphene oxide, a derivative of graphene with an analogous physical structure but additional oxygen groups. Through this photoreduction process, which involves no temperature increase, graphene oxide can be reduced toward graphene by absorbing a single femtosecond pulsed laser beam.


During photoreduction, a change in the refractive index is created. Through this process we are able to create holographically correlated refractive-index pixels at the nanometer scale. This technique enables the reconstructed floating 3D object to be viewed vividly and naturally over a wide angle of up to 52 degrees.
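The link between pixel size and viewing angle follows from the first-order grating equation. As a rough illustration (the 52-degree figure comes from the article; the 550 nm wavelength and the pitch values below are illustrative assumptions, not numbers from the paper), a minimal sketch:

```python
import math

def viewing_angle_deg(wavelength_nm, pixel_pitch_nm):
    """Full viewing angle 2*arcsin(lambda / (2*pitch)) of a pixelated
    hologram, from the first-order grating equation."""
    s = wavelength_nm / (2.0 * pixel_pitch_nm)
    if s > 1.0:
        raise ValueError("pitch too small for this wavelength")
    return 2.0 * math.degrees(math.asin(s))

# Smaller pixels bend light more sharply, widening the viewing angle:
for pitch_nm in (2000, 1000, 630):
    print(f"{pitch_nm} nm pitch -> {viewing_angle_deg(550, pitch_nm):.1f} deg")
```

With an assumed 550 nm (green) wavelength, a sub-micron pitch of roughly 630 nm already gives a full viewing angle near 52 degrees, which is why sub-wavelength refractive-index pixels matter so much.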


This result corresponds to a one-order-of-magnitude improvement in viewing angle over currently available 3D holographic displays based on liquid-crystal phase modulators, which are limited to a few degrees. In addition, the constant refractive-index change across the visible spectrum in reduced graphene oxide enables full-color 3D display.


Deep Learning Machine Solves the Cocktail Party Problem

Separating a singer’s voice from background music has always been a uniquely human ability. Not anymore.


The cocktail party effect is the ability to focus on a specific human voice while filtering out other voices or background noise. The ease with which humans perform this trick belies the challenge that scientists and engineers have faced in reproducing it synthetically. By and large, humans easily outperform the best automated methods for singling out voices. A particularly challenging cocktail party problem is in the field of music, where humans can easily concentrate on a singing voice superimposed on a musical background that includes a wide range of instruments. By comparison, machines are poor at this task.


Today, that looks to be changing thanks to the work of Andrew Simpson and pals at the University of Surrey in the U.K. These guys have used some of the most recent advances associated with deep neural networks to separate human voices from the background in a wide range of songs. Their approach showcases the huge advances that have been made in recent years in machine learning and neural networks. And it paves the way for a more general solution to the famous cocktail party problem which should allow, among other things, the vocals to be easily separated from the music they accompany.


The method these guys use is relatively straightforward. They start with a database of 63 songs that are available as a set of individual tracks that each contain a different instrument or voice, as well as the fully mixed version of the song. Simpson and co divide each track into 20-second segments and create a spectrogram for each that shows how the frequencies in the sound vary over time. The result is a kind of unique fingerprint that identifies the instrument or voice.


They also create a spectrogram of the fully mixed version of the song. This is essentially all of the component spectrograms added together.
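This additivity is a consequence of the linearity of the short-time Fourier transform: the complex spectrogram of the mix is exactly the sum of the components' complex spectrograms (magnitudes add only approximately). A minimal numpy sketch, with random signals standing in for the individual tracks:

```python
import numpy as np

def stft(x, win=256, hop=128):
    """Complex spectrogram: windowed FFT of overlapping frames."""
    w = np.hanning(win)
    frames = [np.fft.rfft(w * x[i:i + win])
              for i in range(0, len(x) - win + 1, hop)]
    return np.array(frames).T  # freq bins x time frames

rng = np.random.default_rng(0)
vocals = rng.normal(size=8000)       # stand-in for the vocal track
instruments = rng.normal(size=8000)  # stand-in for the backing tracks
mix = vocals + instruments

# Linearity: the mix's complex spectrogram equals the sum of the parts'.
assert np.allclose(stft(mix), stft(vocals) + stft(instruments))
```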

The task of picking out a voice from this mixture is essentially the task of separating the voice’s unique spectrogram from the other spectrograms that are present. Simpson and co trained their deep convolutional neural network to do exactly that. They used 50 of these songs to train the network while keeping the remaining 13 to test it on. In total that generated more than 20,000 spectrograms for training purposes.


The task for the neural network was simple. As an input, they gave it the fully mixed spectrogram and expected it to produce, essentially, the vocal spectrogram as the output. The task in this kind of machine learning is one of parameter optimization. Their deep neural network has a billion parameters that need to be tuned in a way that produces the desired output. This process of optimization—or learning—occurs by iteration. So the network begins with these parameters set randomly and then gradually improves the settings each time it scans through the database, which it did over a hundred iterations.
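The actual system is a deep convolutional network with a billion parameters; as a toy stand-in for the same optimization loop (synthetic spectrograms and a single learned per-frequency mask instead of a deep net, so every detail here is an illustrative assumption), the input-to-target training setup can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic magnitude spectrograms (freq bins x time frames) standing in
# for the vocal and instrumental components of a song segment.
freq, frames = 64, 200
vocals = np.abs(rng.normal(size=(freq, frames))) * np.linspace(1, 0, freq)[:, None]
backing = np.abs(rng.normal(size=(freq, frames))) * np.linspace(0, 1, freq)[:, None]
mixture = vocals + backing

# Learn a per-frequency mask m by gradient descent so that m * mixture
# approximates the vocal spectrogram -- the same mixed-input ->
# vocal-output objective the network is trained on, radically simplified.
mask = np.full(freq, 0.5)
for _ in range(500):
    pred = mask[:, None] * mixture
    grad = 2.0 * ((pred - vocals) * mixture).mean(axis=1)  # d(MSE)/d(mask)
    mask -= 0.05 * grad

mse_before = ((0.5 * mixture - vocals) ** 2).mean()
mse_after = ((mask[:, None] * mixture - vocals) ** 2).mean()
```

Iterating over the data repeatedly improves the parameters, exactly as in the paper's hundred passes over the training spectrograms, just with 64 parameters instead of a billion.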


Having found a good setup for the network, Simpson and co then gave it the 13 songs it had not seen before to test how well it could separate the vocals from the mix. The outputs turned out to be impressive. “These results demonstrate that a convolutional deep neural network approach is capable of generalizing voice separation, learned in a musical context, to new musical contexts,” say the team.


Is the universe a hologram? New calculations show that this may be fundamental feature of space itself


At first glance, there is not the slightest doubt: to us, the universe looks three-dimensional. But one of the most fruitful theories of theoretical physics of the last two decades challenges this assumption. The "holographic principle" asserts that a mathematical description of the universe actually requires one fewer dimension than it seems. What we perceive as three-dimensional may just be the image of two-dimensional processes on a huge cosmic horizon.


Up until now, this principle has only been studied in exotic spaces with negative curvature. This is interesting from a theoretical point of view, but such spaces are quite different from the space in our own universe. Results obtained by scientists at TU Wien (Vienna) now suggest that the holographic principle even holds in a flat spacetime.


In this correspondence, gravitational phenomena are described in a theory with three spatial dimensions, while the behavior of quantum particles is calculated in a theory with just two spatial dimensions, and the results of both calculations can be mapped onto each other. Such a correspondence is quite surprising: it is like discovering that equations from an astronomy textbook can also be used to repair a CD player. Yet the method has proven very successful; more than ten thousand scientific papers about Maldacena's "AdS/CFT correspondence" have been published to date.


For theoretical physics this is extremely important, but it does not seem to have much to do with our own universe, since apparently we do not live in such an anti-de Sitter space. These spaces have quite peculiar properties: they are negatively curved, and any object thrown away on a straight line will eventually return. "Our universe, in contrast, is quite flat - and on astronomic distances, it has positive curvature", says Daniel Grumiller.


However, Grumiller has suspected for quite some time that a correspondence principle could also hold for our real universe. To test this hypothesis, gravitational theories have to be constructed that do not require exotic anti-de Sitter spaces but live in flat space. For three years he and his team at TU Wien (Vienna) have been working on this, in cooperation with the University of Edinburgh, Harvard, IISER Pune, MIT and the University of Kyoto. Now Grumiller and colleagues from India and Japan have published an article in the journal Physical Review Letters confirming the validity of the correspondence principle in a flat universe.


Ancient megadrought entombed dodos in poisonous cocktail


Nine hundred kilometers off the east coast of Madagascar lies the tiny island paradise of Mauritius. The waters are pristine, the beaches bright white, and the average temperature hovers between 22°C and 28°C (72°F to 82°F) year-round. But conditions there may not have always been so idyllic. A new study suggests that about 4000 years ago, a prolonged drought on the island left many of the native species, such as dodo birds and giant tortoises, dead in a soup of poisonous algae and their own feces.


The die-off happened in an area known as Mare aux Songes, which once held a shallow lake that was an important source of fresh water for nonmigratory animals. Today, it’s just a grassy swamp, but beneath the surface, fossils are so common and so well preserved that the area qualifies as what scientists call a Lagerstätte, which in German means “storage space.” "What I wanted to know was, how did this drought cause this graveyard?” says Erik de Boer, a paleoecologist at the University of Amsterdam. “How did so many animals die?”


To find out, de Boer and colleagues analyzed sediment cores taken from the area. The layers in a core contain markers that can help scientists reconstruct an ecosystem’s history, such as preserved pollens and microbes. About 4200 years ago, monsoon activity declined dramatically, causing a 50-year megadrought on the island. The cores revealed that during the same time period, the ancient lake became a muddy, salty swamp. “Annually, the lake would get some fresh water in, however this drinking water turned foul during the dry season,” de Boer says.


Things got bad fairly quickly for local animals once the lake began to dry up, the team reports in the current issue of The Holocene. Sanitation appears to have become a major issue with so many animals crowding around the shrinking source of fresh water. “The animals lived around the edges, and the excrements probably got mixed up in the wetlands," de Boer says. "It’s like a big toilet.” Even worse, the researchers’ analysis shows that the feces-flooded waters encouraged the growth of single-celled algae and bacteria—diatoms and cyanobacteria—which can cause poisonous algal blooms. The circumstances combined to create what the scientists refer to as a “deadly cocktail” that they think killed many of the animals preserved as fossils at Mare aux Songes today.


Scientists watch live taste cells in action


Scientists have for the first time captured live images of the process of taste sensation on the tongue. The international team imaged single cells on the tongue of a mouse with a specially designed microscope system. "We've watched live taste cells capture and process molecules with different tastes," said biomedical engineer Dr Steve Lee, from the ANU Research School of Engineering.


There are more than 2,000 taste buds on the human tongue, which can distinguish at least five tastes: salty, sweet, sour, bitter and umami. However, the relationship between the many taste cells within a taste bud and our perception of taste has been a long-standing mystery, said Professor Seok-Hyun Yun from Harvard Medical School. "With this new imaging tool we have shown that each taste bud contains taste cells for different tastes," said Professor Yun.


The team also discovered that taste cells responded not only to molecules contacting the surface of the tongue, but also to molecules in the blood circulation. "We were surprised by the close association between taste cells and the blood vessels around them," said Assistant Professor Myunghwan (Mark) Choi, from Sungkyunkwan University in South Korea. "We think that tasting might be more complex than we expected, and involve an interaction between the food taken orally and blood composition," he said.


The team imaged the tongue by shining a bright infrared laser on to the mouse's tongue, which caused different parts of the tongue and the flavor molecules to fluoresce. The scientists captured the fluorescence from the tongue with a technique known as intravital multiphoton microscopy. They were able to pick out the individual taste cells within each taste bud, as well as blood vessels up to 240 microns below the surface of the tongue. The breakthrough complements recent studies by other research groups that identified the areas in the brain associated with taste.


The team now hopes to develop an experiment to monitor the brain while imaging the tongue, to track the full process of taste sensation. However, fully understanding the complex interactions that form our basic sense of taste could take years, Dr Lee said. "Until we can simultaneously capture both the neurological and physiological events, we can't fully unravel the logic behind taste," he said.


The research has been published in the latest edition of Nature Publishing Group's Scientific Reports.


Complex genetic ancestry of Americans uncovered


By comparing the genes of current-day North and South Americans with African and European populations, an Oxford University study has found the genetic fingerprints of the slave trade and colonization that shaped migrations to the Americas hundreds of years ago.


The study published in Nature Communications found that:

  • While Spaniards provide the majority of European ancestry in continental American Hispanic/Latino populations, the most common European genetic source in African-Americans and Barbadians comes from Great Britain.
  • The Basques, a distinct ethnic group spread across current-day Spain and France, provided a small but distinct genetic contribution to current-day continental American populations, including the Maya in Mexico.
  • The Caribbean Islands of Puerto Rico and the Dominican Republic are genetically similar to each other and distinct from the other populations, probably reflecting a different migration pattern between the Caribbean and mainland America.
  • Compared to South Americans, people from Caribbean countries (such as Barbados) had a larger genetic contribution from Africa.
  • The ancestors of current-day Yoruba people from West Africa (one of the largest African ethnic groups) provided the largest contribution of genes from Africa to all current-day American populations.
  • The proportion of African ancestry varied across the continent, from virtually zero (in the Maya people from Mexico) to 87% in current-day Barbados.
  • South Italy and Sicily also provided a significant European genetic contribution to Colombia and Puerto Rico, in line with the known history of Italian emigrants to the Americas in the late 19th and early 20th century.
  • One of the African-American groups from the USA had French ancestry, in agreement with historical French immigration into the colonial Southern United States.
  • The proportion of genes from European versus African sources varied greatly from individual to individual within recipient populations.


The team, which also included researchers from UCL (University College London) and the Università del Sacro Cuore in Rome, analyzed more than 4,000 previously collected DNA samples from 64 different populations, covering multiple locations in Europe, Africa and the Americas. Since migration has generally flowed from Africa and Europe to the Americas over the last few hundred years, the team compared the 'donor' African and European populations with 'recipient' American populations to track where the ancestors of current-day North and South Americans came from.


'We found that the genetic profile of Americans is much more complex than previously thought,' said study leader Professor Cristian Capelli from the Department of Zoology. The research team analyzed DNA samples collected from people in Barbados, Colombia, the Dominican Republic, Ecuador, Mexico and Puerto Rico, and from African-Americans in the USA. They used a technique called haplotype-based analysis to compare the pattern of genes in these 'recipient' populations to 'donor' populations in the areas that migrants to the Americas came from.


New algorithm for 3D structures from 2D images will speed up protein structure discovery 100,000 fold


One of the great challenges in molecular biology is to determine the three-dimensional structure of large biomolecules such as proteins. But this is a famously difficult and time-consuming task. The standard technique is x-ray crystallography, which involves analyzing the x-ray diffraction pattern from a crystal of the molecule under investigation. That works well for molecules that form crystals easily.


But many proteins, perhaps most, do not form crystals easily. And even when they do, they often take on unnatural configurations that do not resemble their natural shape. So finding another reliable way of determining the 3-D structure of large biomolecules would be a huge breakthrough. Today, Marcus Brubaker and a couple of pals at the University of Toronto in Canada say they have found a way to dramatically improve a 3-D imaging technique that has never quite matched the utility of x-ray crystallography.


The new technique is based on an imaging process called electron cryomicroscopy. This begins with a purified solution of the target molecule that is frozen into a thin film just a single molecule thick. This film is then photographed using a process known as transmission electron microscopy—it is bombarded with electrons and those that pass through are recorded. Essentially, this produces two-dimensional “shadowgrams” of the molecules in the film. Researchers then pick out each shadowgram and use them to work out the three-dimensional structure of the target molecule.


This process is hard for a number of reasons. First, there is a huge amount of noise in each image, so even the two-dimensional shadow is hard to make out. Second, there is no way of knowing the orientation of the molecule when the shadow was taken, so determining the 3-D shape is a huge undertaking.


The standard approach to solving this problem is little more than guesswork. Dream up a potential 3-D structure for the molecule and then rotate it to see if it can generate all of the shadowgrams in the dataset. If not, change the structure, test it, and so on.


Obviously, this is a time-consuming process. The current state-of-the-art algorithm running on 300 cores takes two weeks to find the 3-D structure of a single molecule from a dataset of 200,000 images.
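The expensive inner step of this guess-and-test loop is comparing a candidate structure's simulated projections against the noisy shadowgrams. As a hedged toy illustration (a 2-D "molecule" with 1-D projections standing in for a 3-D structure with 2-D shadowgrams, and orientations restricted to 90-degree rotations; none of this is the authors' actual algorithm), the orientation-matching step looks like:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 2-D "molecule": an asymmetric blob, so projections differ by orientation.
mol = np.zeros((32, 32))
mol[8:24, 12:20] = 1.0   # main body
mol[10:14, 4:16] = 0.5   # one-sided arm

def project(image, k):
    """1-D projection of the image rotated by k * 90 degrees."""
    return np.rot90(image, k).sum(axis=0)

# Simulated reference projections of the candidate structure.
refs = [project(mol, k) for k in range(4)]

# A noisy observed "shadowgram" taken at an unknown orientation.
true_k = 2
observed = project(mol, true_k) + rng.normal(0, 0.5, size=32)

# Orientation search: pick the reference that best explains the data.
errors = [np.sum((observed - r) ** 2) for r in refs]
best_k = int(np.argmin(errors))
```

A real reconstruction must search a continuous space of 3-D orientations for each of hundreds of thousands of noisy images, which is why the state-of-the-art run takes two weeks on 300 cores.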


Chinese scientists admitted to tweaking the genes of human embryos for the first time in history


A group of Chinese scientists just reported that they modified the genome of human embryos, something that has never been done in the history of the world, according to a report in Nature News.


A recent biotech discovery - one that has been called the biggest biotech discovery of the century - showed how scientists might be able to modify a human genome when that genome was still just in an embryo.


This could change not only the genetic material of a person, but could also change the DNA they pass on, removing "bad" genetic codes (and potentially adding "good" ones) and taking an active hand in evolution.


After a report uncovered rumors that Chinese scientists were already working with this technology, concerned scientists published an argument that no one should edit the human genome in this way until the consequences are better understood.


But this new paper, published April 18 in the journal Protein & Cell by a Chinese group led by gene-function researcher Junjiu Huang of Sun Yat-sen University, shows that such work has already been done, and Nature News spoke to a Chinese source who said at least four different groups are "pursuing gene editing in human embryos."


Specifically, the team tried to modify a gene in a non-viable embryo that would have been responsible for a deadly blood disorder. But they noted in the study that they encountered serious challenges, suggesting there are still significant hurdles before clinical use becomes a reality.


CRISPR, the technology that makes all this possible, can find bad sections of DNA and cut them and even replace them with DNA that doesn't code for deadly diseases, but it can also make unwanted substitutions. Its level of accuracy is still very low.


Huang's group successfully introduced the DNA they wanted in only "a fraction" of the 28 embryos that had been "successfully spliced" (they tried 86 embryos at the start and tested 54 of the 71 that survived the procedure). They also found a "surprising number of ‘off-target’ mutations," according to Nature News.


Huang told Nature News that they stopped at that point because they knew that if they were to do this work medically, the success rate would need to be closer to 100 percent. Our understanding of CRISPR needs to develop significantly before we get there, but this is a new technology that is changing rapidly.


Even though the Chinese team worked with non-viable embryos, embryos that cannot result in a live birth, editing the human genome and changing the DNA of an embryo is considered ethically questionable, because it could lead to more uses of this technology in humans. Changing the DNA of viable embryos could have unpredictable results for future generations, and some researchers want us to understand this better before putting it into practice.


Still, many researchers think this technology (most don't think it's ready to be used yet) could be invaluable. It could eliminate genetic diseases like sickle cell anemia, Huntington's disease, and cystic fibrosis, all devastating illnesses caused by genes that could theoretically be removed. Others fear that once we can do this accurately, it will inevitably be used to create designer humans with specific desired traits. After all, even though this research is considered questionable now, it is still actively being experimented with.


Huang told Nature News that both Nature and Science rejected his paper on embryo editing, "in part because of ethical objections." Neither journal commented to Nature News on that statement. Huang plans to try to improve the accuracy of CRISPR in animal models for now. But CRISPR is reportedly quite easy to use, according to the scientists who argued that this research in embryos should not be done yet, meaning it is likely these experiments will continue.


Dorothy Retha Cook's curator insight, October 26, 2017 4:36 AM

They can change the DNA! If a child's original DNA is altered, could DNA testing still match the child to its mother and father? Has that ever happened, and if so, by which scientists? This article is years old, so things are presumably more advanced now. It also makes one wonder: since DNA is used for other things too, could a DNA change, for the right price, leave an innocent person found guilty of a crime, or a guilty person found innocent, and who would know it was done? Just asking your opinion on this article, including the DNA changing. You have a say too!


New insight into how brain makes memories


Every time you make a memory, somewhere in your brain a tiny filament reaches out from one neuron and forms an electrochemical connection to a neighboring neuron. A team of biologists at Vanderbilt University, headed by Associate Professor of Biological Sciences Donna Webb, studies how these connections are formed at the molecular and cellular level.


The filaments that make these new connections are called dendritic spines and, in a series of experiments described in the April 17 issue of the Journal of Biological Chemistry, the researchers report that a specific signaling protein, Asef2, a member of a family of proteins that regulate cell migration and adhesion, plays a critical role in spine formation. This is significant because Asef2 has been linked to autism and the co-occurrence of alcohol dependency and depression.


"Alterations in dendritic spines are associated with many neurological and developmental disorders, such as autism, Alzheimer's disease and Down Syndrome," said Webb. "However, the formation and maintenance of spines is a very complex process that we are just beginning to understand."


Neuron cell bodies produce two kinds of long fibers that weave through the brain: dendrites and axons. Axons transmit electrochemical signals from the cell body of one neuron to the dendrites of another neuron. Dendrites receive the incoming signals and carry them to the cell body. This is the way that neurons communicate with each other.


As they wait for incoming signals, dendrites continually produce tiny flexible filaments called filopodia. These poke out from the surface of the dendrite and wave about in the region between the cells searching for axons. At the same time, biologists think that the axons secrete chemicals of an unknown nature that attract the filopodia. When one of the dendritic filaments makes contact with one of the axons, it begins to adhere and to develop into a spine. The axon and spine form the two halves of a synaptic junction. New connections like this form the basis for memory formation and storage.


The formation of spines is driven by actin, a protein that produces microfilaments and is part of the cytoskeleton. Webb and her colleagues showed that Asef2 promotes spine and synapse formation by activating another protein called Rac, which is known to regulate actin activity. They also discovered that yet another protein, spinophilin, recruits Asef2 and guides it to specific spines. "Once we figure out the mechanisms involved, then we may be able to find drugs that can restore spine formation in people who have lost it, which could give them back their ability to remember," said Webb.


Scientists discover asthma's potential root cause and a novel treatment


Cardiff scientists have for the first time identified the potential root cause of asthma and an existing drug that offers a new treatment.

In a study published today in the journal Science Translational Medicine, university researchers working in collaboration with scientists at King's College London and the Mayo Clinic (USA) describe the previously unproven role of the calcium-sensing receptor (CaSR) in causing asthma, a disease which affects 300 million people worldwide.

The team used mouse models of asthma and human airway tissue from asthmatic and non-asthmatic people to reach their findings.

Crucially, the paper highlights the effectiveness of a class of drugs known as calcilytics in manipulating CaSR to reverse all symptoms associated with the condition. These symptoms include airway narrowing, airway twitchiness and inflammation - all of which contribute to increased breathing difficulty.

"Our findings are incredibly exciting," said the principal investigator, Professor Daniela Riccardi, from the School of Biosciences. "For the first time we have found a link airways inflammation, which can be caused by environmental triggers - such as allergens, cigarette smoke and car fumes – and airways twitchiness in allergic asthma.

"Our paper shows how these triggers release chemicals that activate CaSR in airway tissue and drive asthma symptoms like airway twitchiness, inflammation, and narrowing. Using calcilytics, nebulized directly into the lungs, we show that it is possible to deactivate CaSR and prevent all of these symptoms."

Dr Samantha Walker, Director of Research and Policy at Asthma UK, who helped fund the research, said: 

"This hugely exciting discovery enables us, for the first time, to tackle the underlying causes of asthma symptoms. Five per cent of people with asthma don't respond to current treatments so research breakthroughs could be life changing for hundreds of thousands of people. 


Scientists link unexplained childhood paralysis to enterovirus D68


A research team led by UC San Francisco scientists has found the genetic signature of enterovirus D68 (EV-D68) in half of California and Colorado children diagnosed with acute flaccid myelitis -- sudden, unexplained muscle weakness and paralysis -- between 2012 and 2014, with most cases occurring during a nationwide outbreak of severe respiratory illness from EV-D68 last fall. The finding strengthens the association between EV-D68 infection and acute flaccid myelitis, which developed in only a small fraction of those who got sick. The scientists could not find any other pathogen capable of causing these symptoms, even after checking patient cerebrospinal fluid for every known infectious agent.


Researchers analyzed the genetic sequences of EV-D68 in children with acute flaccid myelitis and discovered that they all corresponded to a new strain of the virus, designated strain B1, which emerged about four years ago and had mutations similar to those found in poliovirus and another closely related nerve-damaging virus, EV-D70. The B1 strain was the predominant circulating strain detected during the 2014 EV-D68 respiratory outbreak, and the researchers found it both in respiratory secretions and -- for the first time -- in a blood sample from one child as his acute paralytic illness was worsening.


The study also included a pair of siblings, both of whom were infected with genetically identical EV-D68 virus, yet only one of whom developed acute flaccid myelitis. 


"This suggests that it's not only the virus, but also patients' individual biology that determines what disease they may present with," said Charles Chiu, MD, PhD, an associate professor of Laboratory Medicine and director of UCSF-Abbott Viral Diagnostics and Discovery Center. "Given that none of the children have fully recovered, we urgently need to continue investigating this new strain of EV-D68 and its potential to cause acute flaccid myelitis."


Among the 25 patients with acute flaccid myelitis in the study, 16 were from California and nine were from Colorado. Eleven were part of geographic clusters of children in Los Angeles and in Aurora, Colorado, who became symptomatic at the same time, and EV-D68 was detected in seven of these patients.


Fast and Accurate 3D Imaging Technique to Track Optically-Trapped Particles


Optical tweezers have been used as an invaluable tool for exerting micro-scale force on microscopic particles and manipulating three-dimensional (3-D) positions of particles. Optical tweezers employ a tightly-focused laser whose beam diameter is smaller than one micrometer (1/100 of hair thickness), which generates attractive force on neighboring microscopic particles moving toward the beam focus. Controlling the positions of the beam focus enabled researchers to hold the particles and move them freely to other locations so they coined the name “optical tweezers.”

 

To locate particles optically trapped by a laser beam, optical microscopes have usually been employed. Optical microscopes measure the light scattered by the trapped microscopic particles and give the particles' positions in two dimensions. However, it has been difficult to quantify a particle's precise position along the optic axis (the direction of the beam) from a single image – analogous to the difficulty of judging the front-to-back positions of objects with one eye closed, due to the lack of depth perception. It becomes even harder to measure 3-D positions precisely when the scattered light is distorted by trapped particles with complicated shapes, or when other particles occlude the target object along the optic axis.

 

Professor YongKeun Park and his research team in the Department of Physics at the Korea Advanced Institute of Science and Technology (KAIST) employed an optical diffraction tomography (ODT) technique to measure the 3-D positions of optically-trapped particles at high speed. The principle of ODT is similar to X-ray CT imaging commonly used in hospitals for visualizing the internal organs of patients. Like X-ray CT imaging, which takes several images from various illumination angles, ODT measures 3-D images of optically-trapped particles by illuminating them with a laser beam at various incidence angles.

 

The KAIST team used optical tweezers to trap a glass bead with a diameter of 2 micrometers, and moved the bead toward a white blood cell having complicated internal structures. The team measured the 3-D dynamics of the white blood cell as it responded to the approaching glass bead via ODT at a high acquisition rate of 60 images per second. Since the white blood cell screens the glass bead along the optic axis, a conventional optical microscope could not determine the 3-D position of the glass bead. In contrast, the present method employing ODT precisely localized the 3-D positions of the bead, and simultaneously measured the composition of the internal materials of both the bead and the white blood cell.


How the Hubble Space Telescope has changed our view of the universe

Hubble has made more than 1.2 million observations and generated 100 terabytes of data, all while whirling around the Earth at 17,000 mph.

Without Hubble, astronomy "would be an awful lot poorer a field,” said Mike Garcia, a program scientist for Hubble at NASA headquarters in Washington. “The Hubble images capture the beauty of the heavens in a way that nothing else has done. The pictures are works of art, and nothing else has done that.”

Garcia, who has worked on NASA projects for over 30 years, has used the satellite’s images to study black holes in the Andromeda galaxy.


“Hubble found out that there's a supermassive black hole in the center of every galaxy. And that was a surprise,” Garcia said. “The black holes and the galaxies know about each other. The size of the black hole is in lockstep with the size of the galaxy."


Hubble doesn’t just stare into deep space; the telescope is just as good at observing objects closer to home. Hubble has provided scientists with images of Pluto’s four moons and photographic evidence that Jupiter’s Great Red Spot has been shrinking, as well as treating viewers to the fragments of a comet crashing into the gas planet.


Garcia’s favorite Hubble image captures the Andromeda galaxy’s nucleus, 2 million light-years away, which is actually pretty close. "It's a double nucleus, which is really rare. And it surrounds a supermassive black hole," Garcia said. "Only an astronomer would love it. It’s not an image the public would go wild over."


But there are plenty of images the public has gone wild over. Hubble images are embedded in our culture – seen in frames on walls, on computer screen savers and postage stamps. “An image captures people’s imaginations right away,” said John Trauger, a senior scientist at the Jet Propulsion Laboratory in La Cañada-Flintridge. “Hubble has really helped the idea of communicating science.”


Trauger helped process one of Hubble’s most recognizable images, featuring a dying star throwing dust back into space. “MyCn18,” an hourglass-shaped nebula with a green eye-like center, was photographed in 1996 and made the cover of both National Geographic and the Pearl Jam album “Binaural.”


Trauger was also the principal investigator of JPL’s mission to repair Hubble after it was launched with a flaw that rendered all its instruments “unfocusable.” In 1993, Space Shuttle Endeavour installed a new camera, called Wide Field Planetary Camera 2, giving us a view into the deepest regions of space. The camera was replaced again with a third version in 2009. With Hubble, scientists can see the same amount of detail in objects 10 times farther than they would be able to get from a land-based observatory. That allows humans to view a region of space 1,000 times larger than what we can see from the ground, Trauger said.
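Trauger's ten-times / 1,000-times figures are consistent with each other: the accessible volume scales with the cube of the distance out to which comparable detail can be resolved. A one-line check:

```python
# If comparable detail is visible out to 10x the distance, the surveyable
# volume grows by the cube of that factor (volume of a sphere ~ r**3).
distance_factor = 10
volume_factor = distance_factor ** 3
print(volume_factor)  # 1000
```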


Hubble’s quest to capture the universe in images has benefited Earthlings in other ways, too. As NASA and the military push for more advanced digital camera technologies, those improvements eventually find their way into our pocket-sized devices.


Twenty-five years after its launch and six years after its last servicing mission, Hubble is at its scientific peak of productivity. NASA expects the satellite to work well into the 2020s. By 2037, the agency estimates, atmospheric drag will start to take its toll. Then they'll think about boosting it up or bringing it back.


After stroke, brain learns to see again


Once thought irreversible, vision loss sometimes associated with stroke may be treatable. By doing a set of vigorous visual exercises on a computer every day for several months, patients who had gone partially blind as a result of suffering a stroke were able to regain some vision. Some patients even were able to drive again.


“We were very surprised when we saw the results from our first patients,” says Krystel Huxlin, the neuroscientist and associate professor who led the study of seven patients at the University of Rochester’s Eye Institute. “This is a type of brain damage that clinicians and scientists have long believed you simply can’t recover from. It’s devastating, and patients are usually sent home to somehow deal with it the best they can.”


The results are a cause for hope for patients with vision damage from stroke or other causes, says Huxlin. The work also shows a remarkable capacity for “plasticity” in damaged, adult brains. It shows that the brain can change a great deal in older adults and that some brain regions are capable of covering for other areas that have been damaged.


Huxlin studied seven people who had suffered a stroke that damaged an area of the brain known as the primary visual cortex or V1, which serves as the gateway to the rest of the brain for all the visual information that comes through our eyes. V1 passes visual information along to dozens of other brain areas, which process and make sense of the information, ultimately allowing us to see.


Patients with damage to the primary visual cortex have severely impaired vision – they typically have a difficult or impossible time reading, driving, or getting out to do ordinary chores like grocery shopping. Patients may walk into walls, oftentimes cannot navigate stores without bumping into goods or other people, and they may be completely unaware of cars on the road coming toward them from the left or right.


Cloud of quantum particles can have several temperatures at once


The air around us consists of countless molecules, moving around randomly. It would be utterly impossible to track them all and to describe all their trajectories. But for many purposes, this is not necessary. Properties of the gas can be found which describe the collective behaviour of all the molecules, such as the air pressure or the temperature, which results from the particles' energy. On a hot summer's day, the molecules move at about 430 meters per second; in winter, a little less.
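The quoted speeds can be sanity-checked against the Maxwell-Boltzmann mean-speed formula, v = sqrt(8kT/(pi*m)). The snippet below is a rough sketch: treating air as a single species with molar mass 28.97 g/mol is an assumption, and the mean speed it yields comes out somewhat above the article's 430 m/s figure, which sits closer to the distribution's most probable speed.

```python
import math

# Maxwell-Boltzmann mean speed of a gas molecule: v = sqrt(8*k*T / (pi*m)).
# Treating air as one species of molar mass 28.97 g/mol is an approximation.
K_B = 1.380649e-23   # Boltzmann constant, J/K
N_A = 6.02214076e23  # Avogadro's number, 1/mol
M_AIR = 0.02897      # molar mass of dry air, kg/mol

def mean_speed(temp_kelvin: float) -> float:
    """Mean molecular speed (m/s) from the Maxwell-Boltzmann distribution."""
    m = M_AIR / N_A  # mass of one "air" molecule, kg
    return math.sqrt(8 * K_B * temp_kelvin / (math.pi * m))

print(f"summer (30 C): {mean_speed(303.15):.0f} m/s")
print(f"winter (-5 C): {mean_speed(268.15):.0f} m/s")
```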


This statistical view (which was developed by the Viennese physicist Ludwig Boltzmann) has proved extremely successful and describes many different physical systems, from pots of boiling water to phase transitions in the liquid crystals of LCD displays. However, in spite of huge efforts, open questions remain, especially with regard to quantum systems. How the well-known laws of statistical physics emerge from the many small quantum parts of a system remains one of the big open questions in physics.


Scientists at the Vienna University of Technology have now succeeded in studying the behaviour of a quantum physical multi-particle system in order to understand the emergence of statistical properties. The team of Professor Jörg Schmiedmayer used a special kind of microchip to catch a cloud of several thousand atoms and cool them close to absolute zero at -273°C, where their quantum properties become visible.


The experiment showed remarkable results: When the external conditions on the chip were changed abruptly, the quantum gas could take on different temperatures at once. It can be hot and cold at the same time. The number of temperatures depends on how exactly the scientists manipulate the gas. "With our microchip we can control the complex quantum systems very well and measure their behaviour", says Tim Langen, leading author of the paper published in Science. There had already been theoretical calculations predicting this effect, but it has never been possible to observe it and to produce it in a controlled environment.


The experiment helps scientists to understand the fundamental laws of quantum physics and their relationship with the statistical laws of thermodynamics. This is relevant for many different quantum systems, maybe even for technological applications. Finally, the results shed some light on the way our classical macroscopic world emerges from the strange world of tiny quantum objects.


Hidden water below Antarctica provides hope for life on Mars

The McMurdo Dry Valleys form the largest ice-free region in Antarctica. They also make up the coldest and driest environments on the planet. Yet, despite these extreme conditions, the valleys' surface is home to a large diversity of microbial life. Now, new evidence suggests that a vast network of salty liquid water exists 1,000 feet below the surface — a finding that lends support to the idea that microbial life may exist beneath Antarctica's surface as well. The finding isn't just exciting for Earth ecologists, however; planetary scientists are intrigued as well. Indeed, finding salty liquid water below Antarctica provides strong support for the idea that Mars, an environment that resembles Antarctic summers, may have similar aquifers beneath its surface — aquifers that could support microscopic life.


"Before this study, we didn't know to what extent life could exist beneath the glaciers, beneath hundreds of meters of ice, beneath ice covered lakes and deep into the soil," says Ross Virginia, an ecosystem environmentalist at Dartmouth College and a co-author of the study, published in Nature Communications today. This study opens up "possibilities for better understanding the combinations of factors that might be found on other planets and bodies outside of the Earth" — including Mars.


Approximately 4.5 billion years ago, 20 percent of the Martian surface was likely covered in water. Today, Mars may still be home to small amounts of salty liquid water, which would exist on the planet's soil at night before evaporating during the daytime. Taken together, these findings are pretty exciting for those who hope to discover life on Mars – water, after all, is a requirement for life. Unfortunately, researchers have also pointed out that the Martian surface is far too cold for the survival of any known forms of life. That's why some scientists have started to wonder about what may lie beneath the Martian surface. If the extreme environmental conditions found in Antarctica's subsurface contain all the elements necessary for life, it's possible that the Martian subsurface might too.


In the study, researchers flew a helicopter over more than 114 square miles of Taylor Valley – the southernmost of the three dry valleys – with a large antenna suspended beneath it. The technology, called SkyTEM, is an airborne electromagnetic sensor that generates an electromagnetic field capable of penetrating the ice or soil of the dry valley. As the antenna surveyed the valley, the reflected field carried back information, altered from the original signal depending on whether it had encountered brine, frozen soil or ice, Virginia explains. "So basically we're inferring the distribution of those types of materials based on what is reflected back to these helicopters flying over the surface of Antarctica."


New VR Technology Lets You Explore Worlds at the Nanoscale


Nanotronics Imaging, an Ohio-based company backed by PayPal founder and early Facebook investor Peter Thiel, makes atomic-scale microscopes that both researchers and industrial manufacturers can use in the production of nanoscale materials. Today at the Tribeca Disruptive Innovation Awards the company announced a new endeavor: the ability to view the microscopes’ output using virtual reality headsets like the Rift.


The new product, nVisible, will enable Nanotronics users to do virtual walkthroughs of nano-structures, which the company says will enable them to better visualize and understand the materials they’re working with. But most importantly, it could help manufacturers create more reliable processes for building nanoscale products—which has historically been a huge hurdle in working with such incredibly small materials.


Crime scene discovery – DNA methylation can tell DNA of identical twins apart


Since its first use in the 1980s – a breakthrough dramatised in the recent ITV series Code of a Killer – DNA profiling has been a vital tool for forensic investigators. Now researchers at the University of Huddersfield have solved one of its few limitations by successfully testing a technique for distinguishing between the DNA – or genetic fingerprint – of identical twins.


The probability of a DNA match between two unrelated individuals is about one in a billion.  For two full siblings, the probability drops to one-in-10,000.  But identical twins present exactly the same DNA profile as each other and this has created legal conundrums when it was not possible to tell which of the pair was guilty or innocent of a crime.  This has led to prosecutions being dropped, rather than run the risk of convicting the wrong twin.
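The headline probabilities come from multiplying per-locus genotype frequencies across the loci in a profile (assuming the loci are inherited independently). The frequencies below are invented purely for illustration, not real forensic values:

```python
# Sketch: why multi-locus DNA profiles are so discriminating. The random
# match probability (RMP) is the product of per-locus genotype frequencies,
# assuming independent loci. These ten frequencies are made-up examples.
genotype_freqs = [0.1, 0.08, 0.12, 0.09, 0.11, 0.1, 0.08, 0.1, 0.12, 0.09]

rmp = 1.0
for f in genotype_freqs:
    rmp *= f

# Ten modestly rare genotypes already push the RMP past one in a billion.
print(f"random match probability = 1 in {1 / rmp:,.0f}")
```

Identical twins defeat this calculation entirely, since every locus matches by construction – which is exactly the gap the methylation approach targets.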


Now Dr Graham Williams and his Forensic Genetics Research Group at the University of Huddersfield have developed a solution to the problem and published their findings in the journal Analytical Biochemistry. Previous methods have been proposed for distinguishing the DNA of twins. One is termed "mutation analysis", where the whole genome of both twins is sequenced to identify mutations that might have occurred in one of them.


“If such a mutation is identified at a particular location in the twin, then that same particular mutation can be specifically searched for in the crime scene sample.  However, this is very expensive and time-consuming and is unlikely to be paid for by cash-strapped police forces,” according to Dr Williams, who has shown that a cheaper, quicker technique is available.


It is based on the concept of DNA methylation, which is effectively the molecular mechanism that turns various genes on and off. As twins get older, the degree of difference between them grows as they are subjected to increasingly different environments.  For example, one might take up smoking, or one might have a job outdoors and the other a desk job.  This will cause changes in the methylation status of the DNA.


In order to carry out speedy, inexpensive analysis of this, Dr Williams and his team propose a technique named "high resolution melt curve analysis" (HRMA). "What HRMA does is to subject the DNA to increasingly high temperatures until the hydrogen bonds break, at what is known as the melting temperature. The more hydrogen bonds that are present in the DNA, the higher the temperature required to melt them," explains Dr Williams.


“Consequently, if one DNA sequence is more methylated than the other, then the melting temperatures of the two samples will differ – a difference that can be measured, and which will establish the difference between two identical twins.”
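A minimal sketch of the readout Dr Williams describes: fluorescence falls as double-stranded DNA melts, and the melting temperature (Tm) is read off as the peak of -dF/dT. The curves, Tm values, and the direction of the methylation shift below are synthetic assumptions for illustration, not data from the study:

```python
import numpy as np

# Synthetic HRMA sketch: fluorescence is high while DNA is double-stranded
# and drops as it melts; Tm is the peak of the negative derivative -dF/dT.
def melt_curve(temps, tm, steepness=1.2):
    """Synthetic sigmoid fluorescence signal melting around `tm`."""
    return 1.0 / (1.0 + np.exp((temps - tm) / steepness))

def estimate_tm(temps, fluorescence):
    """Tm = temperature at the peak of -dF/dT."""
    neg_dfdt = -np.gradient(fluorescence, temps)
    return temps[np.argmax(neg_dfdt)]

temps = np.linspace(70.0, 95.0, 501)   # degrees C, 0.05 C steps
twin_a = melt_curve(temps, tm=82.0)    # assumed less-methylated sample
twin_b = melt_curve(temps, tm=83.5)    # assumed more-methylated sample

print(f"Tm twin A = {estimate_tm(temps, twin_a):.1f} C")
print(f"Tm twin B = {estimate_tm(temps, twin_b):.1f} C")
```

The measurable quantity is only the Tm difference between the two samples, which is what the quoted passage says establishes the distinction between the twins.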


Taylah Mancey's curator insight, March 24, 2016 3:09 AM

Where I see myself working after completing my degree: in a lab doing forensic science.


Audi Has Made Diesel From Water And Carbon Dioxide


It’s the holy grail in energy production: produce a fuel that is both carbon neutral and can be poured directly into our current cars without the need to retrofit. There are scores of companies out there trying to do just that using vegetable oil, algae, and even the microbes found in panda poop to turn bamboo into fuel.


This week, German car manufacturer Audi declared that it has been able to create an "e-diesel" – a synthetic diesel fuel – by using renewable energy to produce a liquid fuel from nothing more than water and carbon dioxide. After a commissioning phase of just four months, the plant in Dresden operated by clean tech company Sunfire has managed to produce its first batch of what they're calling "blue crude." The resulting liquid is composed of long-chain hydrocarbon compounds, similar to fossil fuels, but free from sulfur and aromatics, and therefore burns soot-free.


The first step in the process involves harnessing renewable energy from solar, wind or hydropower. This energy is then used to heat water to temperatures in excess of 800°C (1,472°F). The steam is then broken down into oxygen and hydrogen through high-temperature electrolysis, a process in which an electric current splits the water molecules apart.


The hydrogen is then removed and mixed with carbon monoxide under high heat and pressure, creating a hydrocarbon product they’re calling "blue crude." Sunfire claim that the synthetic fuel is not only more environmentally friendly than fossil fuel, but that the efficiency of the overall process—from renewable power to liquid hydrocarbon—is very high at around 70%. The e-diesel can then be either mixed with regular diesel, or used as a fuel in its own right.
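Sunfire's ~70% figure is a whole-chain number: the overall power-to-liquid efficiency is the product of the efficiencies of each stage. The stage breakdown and values below are illustrative guesses, chosen only so the product lands near the quoted figure, not numbers from Sunfire:

```python
# Sketch: overall power-to-liquid efficiency as a product of stage
# efficiencies. Stage names and values are illustrative assumptions.
stages = {
    "high-temperature electrolysis": 0.90,
    "CO2 conversion to CO": 0.95,
    "Fischer-Tropsch synthesis": 0.82,
}

overall = 1.0
for name, eff in stages.items():
    overall *= eff
    print(f"after {name}: {overall:.0%}")
```

The multiplicative structure is why raising the electrolysis temperature and recovering waste heat, as Audi claims to do, moves the headline number so much: every stage gain compounds through the chain.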


But all may not be as it seems. The process used by Audi is actually the Fischer-Tropsch process, which has been known to scientists since the 1920s. It was even used by Germany to turn coal into diesel during the Second World War when fuel supplies ran short. The process is currently used by many different companies around the world, especially in countries where reserves of oil are low but reserves of other fossil fuels, such as gas and coal, are high.


And it would seem that Audi aren’t the first to think about using biogas facilities to produce carbon neutral biofuels either. Another German company called Choren has already made an attempt at producing biofuel using biogas and the Fischer-Tropsch process. Backed by Shell and Volkswagen, the company had all the support and funding it needed, but in 2011 it filed for bankruptcy due to impracticalities in the process.


Audi readily admits that none of the processes they use are new, but claim it’s how they’re going about it that is. They say that increasing the temperature at which the water is split increases the efficiency of the process and that the waste heat can then be recovered. Whilst their announcement might not be heralding a new fossil fuel-free era, the tech of turning green power into synthetic fuel could have applications as a battery to store excess energy produced by renewables.

Daniel Lindahl's curator insight, May 25, 2015 1:47 PM

Audi has successfully made a clean, carbon neutral form of diesel fuel known as "e-diesel". This will drastically change cars and fuel research in the future. Developments like these show the growth and change of industry as a whole. 


Hijacking of a medical telerobot raises important questions over the security of remote surgery


A crucial bottleneck that prevents life-saving surgery being performed in many parts of the world is the lack of trained surgeons. One way to get around this is to make better use of the ones that are available. Sending them over great distances to perform operations is clearly inefficient because of the time that has to be spent travelling. So an increasingly important alternative is the possibility of telesurgery with an expert in one place controlling a robot in another that physically performs the necessary cutting and dicing. Indeed, the sale of medical robots is increasing at a rate of 20 percent per year.


But while the advantages are clear, the disadvantages have been less well explored. Telesurgery relies on cutting edge technologies in fields as diverse as computing, robotics, communications, ergonomics, and so on. And anybody familiar with these areas will tell you that they are far from failsafe.


Today, Tamara Bonaci and pals at the University of Washington in Seattle examine the special pitfalls associated with the communications technology involved in telesurgery. In particular, they show how a malicious attacker can disrupt the behavior of a telerobot during surgery and even take over such a robot, the first time a medical robot has been hacked in this way.


The first telesurgery took place in 2001 with a surgeon in New York successfully removing the gall bladder of a patient in Strasbourg in France, more than 6,000 kilometers away. The communications ran over a dedicated fiber provided by a telecommunications company specifically for the operation. That’s an expensive option since dedicated fibers can cost tens of thousands of dollars.


Since then, surgeons have carried out numerous remote operations and begun to experiment with ordinary communications links over the Internet, which are significantly cheaper. Although there are no recorded incidents in which the communications infrastructure has caused problems during a telesurgery operation, there are still questions over security and privacy which have never been fully answered.


DNA 'cage' holding a payload of drugs set to begin clinical trial soon


Ido Bachelet, who was previously at Harvard’s Wyss Institute in Boston, Massachusetts and Israel’s Bar-Ilan University, intends to treat a patient who has been given six months to live. The patient is set to receive an injection of DNA nanocages designed to interact with and destroy leukemia cells without damaging healthy tissue. Speaking in December, he said: ‘Judging from what we saw in our tests, within a month that person is going to recover.


DNA nanocages can be programmed to independently recognize target cells and deliver payloads, such as cancer drugs, to these cells. 

George Church, who is involved in the research at the Wyss Institute, explained that the idea of the microscopic robots is to make a 'cage' that protects a fragile or toxic payload and 'only releases it at the right moment.'


These nanostructures are built upon a single strand of DNA which is combined with short synthetic strands of DNA designed by the experts.  When mixed together, they self-assemble into a desired shape, which in this case looks a little like a barrel.


Dr Bachelet said: 'The nanorobot we designed actually looks like an open-ended barrel, or clamshell that has two halves linked together by flexible DNA hinges and the entire structure is held shut by latches that are DNA double helixes.’


A complementary piece of DNA is attached to a payload, which enables it to bind to the inside of the biological barrel. The double helixes stay closed until specific molecules or proteins on the surface of cancer cells act as a 'key' to open the ‘barrel’ so the payload can be deployed.


'The nanorobot is capable of recognizing a small population of target cells within a large healthy population,’ Dr Bachelet continued.

‘While all cells share the same drug target that we want to attack, only those target cells that express the proper set of keys open the nanorobot and therefore only they will be attacked by the nanorobot and by the drug.’
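Dr Bachelet's description amounts to a logical AND over surface markers: the barrel opens only for cells displaying the complete set of "keys." A toy sketch of that gating logic (the marker names are invented placeholders, not the actual targets):

```python
# Sketch of the nanorobot's gating logic as described in the article: the
# DNA latches open only when a cell displays the full set of "key" surface
# molecules. Marker names below are hypothetical placeholders.
REQUIRED_KEYS = {"antigen_A", "antigen_B"}

def nanorobot_opens(cell_surface_markers: set) -> bool:
    """Every latch must find its key on the cell surface: an AND gate."""
    return REQUIRED_KEYS.issubset(cell_surface_markers)

healthy_cell = {"antigen_A"}                       # shares only one marker
target_cell = {"antigen_A", "antigen_B", "other"}  # displays the full set

print(nanorobot_opens(healthy_cell))  # False: payload stays caged
print(nanorobot_opens(target_cell))   # True: latches open, drug released
```

This is why, even though all cells carry the drug target, only the small target population is attacked: the AND condition fails on healthy cells.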


The team has tested its technique in animals as well as cell cultures and said the ‘nanorobot attacked these [targets] with almost zero collateral damage.’ The method has many advantages over invasive surgery and blasts of drugs, which can be ‘as painful and damaging to the body as the disease itself,’ the team added.


Mammoth genome sequence completed


An international team of scientists has sequenced the complete genome of the woolly mammoth. A US team is already attempting to study the animals' characteristics by inserting mammoth genes into elephant stem cells. They want to find out what made the mammoths different from their modern relatives and how their adaptations helped them survive the ice ages.


The new genome study has been published in the journal Current Biology. Dr Love Dalén, at the Swedish Museum of Natural History in Stockholm, told BBC News that the first ever publication of the full DNA sequence of the mammoth could help those trying to bring the creature back to life. "It would be a lot of fun (in principle) to see a living mammoth, to see how it behaves and how it moves," he said.


But he would rather his research was not used to this end. "It seems to me that trying this out might lead to suffering for female elephants and that would not be ethically justifiable."


Dr Dalén and the international group of researchers he is collaborating with are not attempting to resurrect the mammoth. But the Long Now Foundation, an organisation based in San Francisco, claims that it is. Now, with the publication of the complete mammoth genome, it could be a step closer to achieving its aim.


On its website, the foundation says its ultimate goal is "to produce new mammoths that are capable of repopulating the vast tracts of tundra and boreal forest in Eurasia and North America." The goal is not to make perfect copies of extinct woolly mammoths, but to focus on the mammoth adaptations needed for Asian elephants to live in the cold climate of the tundra.


Scientists use nanoscale building blocks and DNA 'glue' to shape 3-D superlattices

Taking child's play with building blocks to a whole new level – the nanometer scale – scientists at the U.S. Department of Energy's (DOE) Brookhaven National Laboratory have constructed 3D "superlattice" multicomponent nanoparticle arrays where the arrangement of particles is driven by the shape of the tiny building blocks. The method uses linker molecules made of complementary strands of DNA to overcome the blocks' tendency to pack together in a way that would separate differently shaped components. The results, published in Nature Communications, are an important step on the path toward designing predictable composite materials for applications in catalysis, other energy technologies, and medicine. "If we want to take advantage of the promising properties of nanoparticles, we need to be able to reliably incorporate them into larger-scale composite materials for real-world applications," explained Brookhaven physicist Oleg Gang, who led the research at Brookhaven's Center for Functional Nanomaterials (CFN), a DOE Office of Science User Facility.

"Our work describes a new way to fabricate structured composite materials using directional bindings of shaped particles for predictable assembly," said Fang Lu, the lead author of the publication.

The research builds on the team's experience linking nanoparticles together using strands of synthetic DNA. Like the molecule that carries the genetic code of living things, these synthetic strands have complementary bases known by the genetic code letters G, C, T, and A, which bind to one another in only one way (G to C; T to A). Gang has previously used complementary DNA tethers attached to nanoparticles to guide the assembly of a range of arrays and structures. The new work explores particle shape as a means of controlling the directionality of these interactions to achieve long-range order in large-scale assemblies and clusters.
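The one-way pairing rule the linkers rely on can be sketched in a few lines; the tether sequence below is an arbitrary example, not one from the study:

```python
# Sketch of Watson-Crick complementarity, the "glue" rule DNA linkers use:
# G pairs only with C, and A only with T.
PAIR = {"G": "C", "C": "G", "A": "T", "T": "A"}

def complement(strand: str) -> str:
    """Return the reverse complement: the one strand that binds this one."""
    return "".join(PAIR[base] for base in reversed(strand))

tether = "GCATTC"          # arbitrary example tether sequence
print(complement(tether))  # GAATGC: the only sequence that binds it
```

Because each tether has exactly one binding partner, decorating different facets with different sequences gives the directional, programmable attractions the assembly scheme depends on.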

Spherical particles, Gang explained, normally pack together to minimize free volume. DNA linkers, using complementary strands to attract particles or non-complementary strands to keep particles apart, can alter that packing to some degree to achieve different arrangements. For example, scientists have experimented with placing complementary linker strands in strategic locations on the spheres to get the particles to line up and bind in a particular way. But it's not so easy to make nanospheres with precisely placed linker strands.

"We explored an alternate idea: the introduction of shaped nanoscale 'blocks' decorated with DNA tethers on each facet to control the directional binding of spheres with complementary DNA tethers," Gang said.

Technology Trends - Singularity Blog: Most Anticipated New Technologies
Future timeline, a timeline of humanity's future, based on current trends, long-term environmental changes, advances in technology such as Moore's Law, the latest medical advances, and the evolving geopolitical landscape.


AugusII's curator insight, April 25, 2015 6:15 PM

Being up to date is a must - learning about trends is useful.


The Current State of Machine Intelligence


A few years ago, investors and startups were chasing “big data”. Now we’re seeing a similar explosion of companies calling themselves artificial intelligence, machine learning, or collectively “machine intelligence”. The Bloomberg Beta fund, which is focused on the future of work, has been investing in these approaches.


Computers are learning to think, read, and write. They’re also picking up human sensory function, with the ability to see and hear (arguably to touch, taste, and smell, though those have been of a lesser focus).


Machine intelligence technologies cut across a vast array of problem types (from classification and clustering to natural language processing and computer vision) and methods (from support vector machines to deep belief networks). All of these technologies are reflected on this landscape.


What this landscape doesn’t include, however important, is “big data” technologies. Some have used this term interchangeably with machine learning and artificial intelligence, but I want to focus on the intelligence methods rather than data, storage, and computation pieces of the puzzle for this landscape (though of course data technologies enable machine intelligence).


We’ve seen a few great articles recently outlining why machine intelligence is experiencing a resurgence, documenting the enabling factors behind it. Kevin Kelly, for example, chalks it up to cheap parallel computing, large datasets, and better algorithms.


Machine intelligence is enabling applications we already expect, like automated assistants (Siri), adorable robots (Jibo), and identifying people in images (like the highly effective but unfortunately named DeepFace). However, it’s also doing the unexpected: protecting children from sex trafficking, reducing the chemical content in the lettuce we eat, helping us buy shoes online that fit our feet precisely, and destroying '80s classic video games.


Big companies have a disproportionate advantage, especially those that build consumer products. The giants in search (Google, Baidu), social networks (Facebook, LinkedIn, Pinterest), content (Netflix, Yahoo!), mobile (Apple) and e-commerce (Amazon) are in an incredible position. They have massive datasets and constant consumer interactions that enable tight feedback loops for their algorithms (and these factors combine to create powerful network effects) — and they have the most to gain from the low hanging fruit that machine intelligence bears.

Best-in-class personalization and recommendation algorithms have enabled these companies’ success (it’s both impressive and disconcerting that Facebook recommends you add the person you had a crush on in college and Netflix tees up that perfect guilty pleasure sitcom).

Now they are all competing on a new battlefield: the move to mobile. Winning mobile will require lots of machine intelligence: state-of-the-art natural language interfaces (like Apple’s Siri), visual search (like Amazon’s “FireFly”), and dynamic question-answering technology that tells you the answer instead of providing a menu of links (all of the search companies are wrestling with this).

Large enterprise companies (IBM and Microsoft) have also made incredible strides in the field, though they don’t have the same human-facing requirements, so they are focusing their attention more on knowledge representation tasks on large industry datasets, like IBM Watson’s application to assist doctors with diagnoses.
John Vollenbroek's curator insight, April 25, 2015 2:53 AM

I like this overview

pbernardon's curator insight, April 26, 2015 2:33 AM

A clear and very interesting infographic and map of artificial intelligence and the resulting uses that organizations will need to adopt.

 

#bigdata