Since its first use in the 1980s – a breakthrough dramatised in the recent ITV series Code of a Killer – DNA profiling has been a vital tool for forensic investigators. Now researchers at the University of Huddersfield have solved one of its few limitations by successfully testing a technique for distinguishing between the DNA – or genetic fingerprint – of identical twins.
The probability of a DNA match between two unrelated individuals is about one in a billion. For two full siblings, it rises to about one in 10,000. But identical twins present exactly the same DNA profile as each other, and this has created legal conundrums when it was not possible to tell which of the pair was guilty or innocent of a crime. This has led to prosecutions being dropped rather than run the risk of convicting the wrong twin.
Now Dr Graham Williams and his Forensic Genetics Research Group at the University of Huddersfield have developed a solution to the problem and published their findings in the journal Analytical Biochemistry. Previous methods have been proposed for distinguishing the DNA of twins. One is termed “mutation analysis”, where the whole genome of both twins is sequenced to identify mutations that might have occurred in one of them.
“If such a mutation is identified at a particular location in the twin, then that same particular mutation can be specifically searched for in the crime scene sample. However, this is very expensive and time-consuming and is unlikely to be paid for by cash-strapped police forces,” according to Dr Williams, who has shown that a cheaper, quicker technique is available.
It is based on the concept of DNA methylation, which is effectively the molecular mechanism that turns various genes on and off. As twins get older, the degree of difference between them grows as they are subjected to increasingly different environments. For example, one might take up smoking, or one might have a job outdoors and the other a desk job. This will cause changes in the methylation status of the DNA.
To carry out speedy, inexpensive analysis of this, Dr Williams and his team propose a technique named “high resolution melt curve analysis” (HRMA). “What HRMA does is to subject the DNA to increasingly high temperatures until the hydrogen bonds break, known as the melting temperature. The more hydrogen bonds that are present in the DNA, the higher the temperature required to melt them,” explains Dr Williams.
“Consequently, if one DNA sequence is more methylated than the other, then the melting temperatures of the two samples will differ – a difference that can be measured, and which will establish the difference between two identical twins.”
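The intuition can be sketched in a few lines of Python. This is an illustrative toy only: real HRMA instruments track the fluorescence of a dye bound to double-stranded DNA, and the melting temperatures below are hypothetical, but it shows how a small methylation-driven shift in Tm separates two otherwise identical profiles.

```python
# Illustrative sketch only: a toy model of high resolution melt curve
# analysis (HRMA). The melt curve is approximated with a logistic
# function; methylation stabilises the duplex, shifting the melting
# temperature (Tm) slightly upward. All Tm values here are made up.
import numpy as np

def melt_curve(temps_c, tm_c, width_c=1.5):
    """Fraction of DNA still double-stranded at each temperature."""
    return 1.0 / (1.0 + np.exp((temps_c - tm_c) / width_c))

temps = np.arange(70.0, 95.0, 0.1)           # temperature sweep in deg C
curve_twin_a = melt_curve(temps, tm_c=84.0)  # hypothetical Tm, twin A
curve_twin_b = melt_curve(temps, tm_c=84.6)  # hypothetical Tm, twin B (more methylated)

# Tm is where half of the DNA has melted; find it on each curve.
tm_a = temps[np.argmin(np.abs(curve_twin_a - 0.5))]
tm_b = temps[np.argmin(np.abs(curve_twin_b - 0.5))]
print(f"Estimated Tm twin A: {tm_a:.1f} C, twin B: {tm_b:.1f} C, "
      f"difference: {tm_b - tm_a:.1f} C")
```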
It’s the holy grail in energy production: produce a fuel that is both carbon neutral and can be poured directly into our current cars without the need to retrofit. There are scores of companies out there trying to do just that using vegetable oil, algae, and even the microbes found in panda poop to turn bamboo into fuel.
This week, German car manufacturer Audi declared that it has been able to create an "e-diesel" – a synthetic diesel fuel produced from nothing more than water and carbon dioxide using renewable energy. After a commissioning phase of just four months, the plant in Dresden operated by clean tech company Sunfire has managed to produce its first batch of what they’re calling “blue crude.” The product liquid is composed of long-chain hydrocarbon compounds, similar to fossil fuels, but free from sulfur and aromatics, and therefore burns soot-free.
The first step in the process involves harnessing renewable energy through solar, wind or hydropower. This energy is then used to heat water to temperatures in excess of 800°C (1,472°F). The steam is then broken down into oxygen and hydrogen through high-temperature electrolysis, a process in which an electric current is passed through the water vapor.
The hydrogen is then removed and mixed with carbon monoxide under high heat and pressure, creating a hydrocarbon product they’re calling "blue crude." Sunfire claim that the synthetic fuel is not only more environmentally friendly than fossil fuel, but that the efficiency of the overall process—from renewable power to liquid hydrocarbon—is very high at around 70%. The e-diesel can then be either mixed with regular diesel, or used as a fuel in its own right.
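The overall chemistry can be summarized in three textbook reactions. The article does not spell out how the carbon monoxide is obtained from the captured CO2; the reverse water-gas shift shown in the middle line is the standard route, so treat that step as an assumption rather than a confirmed detail of Sunfire's plant.

```latex
% Overall chemistry as described above. The middle step is an
% assumption: the article says hydrogen is mixed with carbon monoxide
% but does not say how the CO is made from CO2.
\begin{align*}
2\,\mathrm{H_2O} &\longrightarrow 2\,\mathrm{H_2} + \mathrm{O_2}
  && \text{(high-temperature electrolysis)} \\
\mathrm{CO_2} + \mathrm{H_2} &\longrightarrow \mathrm{CO} + \mathrm{H_2O}
  && \text{(reverse water-gas shift, assumed)} \\
n\,\mathrm{CO} + (2n+1)\,\mathrm{H_2} &\longrightarrow \mathrm{C}_n\mathrm{H}_{2n+2} + n\,\mathrm{H_2O}
  && \text{(Fischer-Tropsch synthesis)}
\end{align*}
```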
But all may not be as it seems. The process used by Audi is actually called the Fischer-Tropsch process and has been known to scientists since the 1920s. It was even used by the Germans to turn coal into diesel during the Second World War when fuel supplies ran short. The process is currently used by many different companies all around the world, especially in countries where reserves of oil are low but reserves of other fossil fuels, such as gas and coal, are high.
And it would seem that Audi aren’t the first to think about using biogas facilities to produce carbon-neutral biofuels either. Another German company, Choren, has already made an attempt at producing biofuel using biogas and the Fischer-Tropsch process. Backed by Shell and Volkswagen, the company had all the support and funding it needed, but in 2011 it filed for bankruptcy due to impracticalities in the process.
Audi readily admits that none of the processes they use are new, but claim it’s how they’re going about it that is. They say that increasing the temperature at which the water is split increases the efficiency of the process and that the waste heat can then be recovered. Whilst their announcement might not be heralding a new fossil fuel-free era, the tech of turning green power into synthetic fuel could have applications as a battery to store excess energy produced by renewables.
A crucial bottleneck that prevents life-saving surgery being performed in many parts of the world is the lack of trained surgeons. One way to get around this is to make better use of the ones that are available. Sending them over great distances to perform operations is clearly inefficient because of the time that has to be spent travelling. So an increasingly important alternative is the possibility of telesurgery with an expert in one place controlling a robot in another that physically performs the necessary cutting and dicing. Indeed, the sale of medical robots is increasing at a rate of 20 percent per year.
But while the advantages are clear, the disadvantages have been less well explored. Telesurgery relies on cutting edge technologies in fields as diverse as computing, robotics, communications, ergonomics, and so on. And anybody familiar with these areas will tell you that they are far from failsafe.
Today, Tamara Bonaci and pals at the University of Washington in Seattle examine the special pitfalls associated with the communications technology involved in telesurgery. In particular, they show how a malicious attacker can disrupt the behavior of a telerobot during surgery and even take over such a robot, the first time a medical robot has been hacked in this way.
The first telesurgery took place in 2001 with a surgeon in New York successfully removing the gall bladder of a patient in Strasbourg in France, more than 6,000 kilometers away. The communications ran over a dedicated fiber provided by a telecommunications company specifically for the operation. That’s an expensive option since dedicated fibers can cost tens of thousands of dollars.
Since then, surgeons have carried out numerous remote operations and begun to experiment with ordinary communications links over the Internet, which are significantly cheaper. Although there are no recorded incidents in which the communications infrastructure has caused problems during a telesurgery operation, there are still questions over security and privacy which have never been fully answered.
Ido Bachelet, who was previously at Harvard’s Wyss Institute in Boston, Massachusetts and Israel’s Bar-Ilan University, intends to treat a patient who has been given six months to live. The patient is set to receive an injection of DNA nanocages designed to interact with and destroy leukemia cells without damaging healthy tissue. Speaking in December, he said: ‘Judging from what we saw in our tests, within a month that person is going to recover.’
DNA nanocages can be programmed to independently recognize target cells and deliver payloads, such as cancer drugs, to these cells.
George Church, who is involved in the research at the Wyss Institute, explained that the idea of the microscopic robots is to make a ‘cage’ that protects a fragile or toxic payload and ‘only releases it at the right moment.’
These nanostructures are built upon a single strand of DNA which is combined with short synthetic strands of DNA designed by the experts. When mixed together, they self-assemble into a desired shape, which in this case looks a little like a barrel.
Dr Bachelet said: 'The nanorobot we designed actually looks like an open-ended barrel, or clamshell that has two halves linked together by flexible DNA hinges and the entire structure is held shut by latches that are DNA double helixes.’
A complementary piece of DNA is attached to a payload, which enables it to bind to the inside of the biological barrel. The double helixes stay closed until specific molecules or proteins on the surface of cancer cells act as a 'key' to open the ‘barrel’ so the payload can be deployed.
'The nanorobot is capable of recognizing a small population of target cells within a large healthy population,’ Dr Bachelet continued.
‘While all cells share the same drug target that we want to attack, only those target cells that express the proper set of keys open the nanorobot and therefore only they will be attacked by the nanorobot and by the drug.’
The team has tested its technique in animals as well as cell cultures and said the ‘nanorobot attacked these [targets] with almost zero collateral damage.’ The method has many advantages over invasive surgery and blasts of drugs, which can be ‘as painful and damaging to the body as the disease itself,’ the team added.
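The latch logic Dr Bachelet describes amounts to an AND gate over cell-surface markers. Here is a deliberately simple Python sketch of that logic; the marker names are hypothetical, and the real mechanism is chemical rather than computational.

```python
# Toy sketch of the lock-and-key logic described above, not a model of
# the real chemistry: the nanorobot's DNA latches act as an AND gate,
# opening only when a cell displays every required surface "key".
REQUIRED_KEYS = {"marker_A", "marker_B"}   # hypothetical aptamer targets

def nanorobot_opens(cell_surface_markers: set[str]) -> bool:
    """Return True if every latch key is present on the cell."""
    return REQUIRED_KEYS <= cell_surface_markers

leukemia_cell = {"marker_A", "marker_B", "marker_C"}
healthy_cell = {"marker_A"}                # shares the drug target, lacks a key

print(nanorobot_opens(leukemia_cell))      # True  -> payload released
print(nanorobot_opens(healthy_cell))       # False -> stays latched shut
```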
An international team of scientists has sequenced the complete genome of the woolly mammoth. A US team is already attempting to study the animals' characteristics by inserting mammoth genes into elephant stem cells. They want to find out what made the mammoths different from their modern relatives and how their adaptations helped them survive the ice ages.
The new genome study has been published in the journal Current Biology. Dr Love Dalén, at the Swedish Museum of Natural History in Stockholm, told BBC News that the first ever publication of the full DNA sequence of the mammoth could help those trying to bring the creature back to life. "It would be a lot of fun (in principle) to see a living mammoth, to see how it behaves and how it moves," he said.
But he would rather his research was not used to this end. "It seems to me that trying this out might lead to suffering for female elephants and that would not be ethically justifiable."
Dr Dalén and the international group of researchers he is collaborating with are not attempting to resurrect the mammoth. But the Long Now Foundation, an organisation based in San Francisco, claims that it is. Now, with the publication of the complete mammoth genome, it could be a step closer to achieving its aim.
On its website, the foundation says its ultimate goal is "to produce new mammoths that are capable of repopulating the vast tracts of tundra and boreal forest in Eurasia and North America. The goal is not to make perfect copies of extinct woolly mammoths, but to focus on the mammoth adaptations needed for Asian elephants to live in the cold climate of the tundra."
Taking child's play with building blocks to a whole new level – the nanometer scale – scientists at the U.S. Department of Energy's (DOE) Brookhaven National Laboratory have constructed 3D "superlattice" multicomponent nanoparticle arrays where the arrangement of particles is driven by the shape of the tiny building blocks. The method uses linker molecules made of complementary strands of DNA to overcome the blocks' tendency to pack together in a way that would separate differently shaped components. The results, published in Nature Communications, are an important step on the path toward designing predictable composite materials for applications in catalysis, other energy technologies, and medicine.

"If we want to take advantage of the promising properties of nanoparticles, we need to be able to reliably incorporate them into larger-scale composite materials for real-world applications," explained Brookhaven physicist Oleg Gang, who led the research at Brookhaven's Center for Functional Nanomaterials (CFN), a DOE Office of Science User Facility.
Future timeline: a timeline of humanity's future, based on current trends, long-term environmental changes, advances in technology such as Moore's Law, the latest medical advances, and the evolving geopolitical landscape.
A few years ago, investors and startups were chasing “big data”. Now we’re seeing a similar explosion of companies calling themselves artificial intelligence, machine learning, or collectively “machine intelligence”. The Bloomberg Beta fund, which is focused on the future of work, has been investing in these approaches.
Computers are learning to think, read, and write. They’re also picking up human sensory functions, with the ability to see and hear (arguably to touch, taste, and smell, though those have received less focus).
Machine intelligence technologies cut across a vast array of problem types (from classification and clustering to natural language processing and computer vision) and methods (from support vector machines to deep belief networks). All of these technologies are reflected on this landscape.
What this landscape doesn’t include, however important, is “big data” technologies. Some have used this term interchangeably with machine learning and artificial intelligence, but I want to focus on the intelligence methods rather than data, storage, and computation pieces of the puzzle for this landscape (though of course data technologies enable machine intelligence).
We’ve seen a few great articles recently outlining why machine intelligence is experiencing a resurgence, documenting the enabling factors of this resurgence. Kevin Kelly, for example, chalks it up to cheap parallel computing, large datasets, and better algorithms.
Machine intelligence is enabling applications we already expect, like automated assistants (Siri), adorable robots (Jibo), and identifying people in images (like the highly effective but unfortunately named DeepFace). However, it’s also doing the unexpected: protecting children from sex trafficking, reducing the chemical content in the lettuce we eat, helping us buy shoes online that fit our feet precisely, and destroying '80s classic video games.
Big companies have a disproportionate advantage, especially those that build consumer products. The giants in search (Google, Baidu), social networks (Facebook, LinkedIn, Pinterest), content (Netflix, Yahoo!), mobile (Apple) and e-commerce (Amazon) are in an incredible position. They have massive datasets and constant consumer interactions that enable tight feedback loops for their algorithms (and these factors combine to create powerful network effects) — and they have the most to gain from the low hanging fruit that machine intelligence bears.
Best-in-class personalization and recommendation algorithms have enabled these companies’ success (it’s both impressive and disconcerting that Facebook recommends you add the person you had a crush on in college and Netflix tees up that perfect guilty pleasure sitcom).
Now they are all competing in a new battlefield: the move to mobile. Winning mobile will require lots of machine intelligence: state-of-the-art natural language interfaces (like Apple’s Siri), visual search (like Amazon’s “Firefly”), and dynamic question-answering technology that tells you the answer instead of providing a menu of links (all of the search companies are wrestling with this). Large enterprise companies (IBM and Microsoft) have also made incredible strides in the field, though they don’t have the same human-facing requirements, so they are focusing their attention more on knowledge representation tasks on large industry datasets, like IBM Watson’s application to assist doctors with diagnoses.
Underneath the bubbling geysers and hot springs of Yellowstone National Park in Wyoming sits a volcanic hot spot that has driven some of the largest eruptions on Earth. Geoscientists have now completely imaged the subterranean plumbing system and have found not just one, but two magma chambers underneath the giant volcano.
An experimental drug has cured monkeys infected with the Ebola virus, US-based scientists have said. The treatment, known as TKM-Ebola-Guinea, targets the Makona strain of the virus, which caused the current deadly outbreak in West Africa. All three monkeys receiving the treatment were healthy when the trial ended after 28 days; three untreated monkeys died within nine days. Scientists cautioned that the drug's efficacy has not been proven in humans. At present, there are no treatments or vaccines for Ebola that have been proven to work in humans.
University of Texas scientist Thomas Geisbert, who was the senior author of the study published in the journal Nature, said: "This is the first study to show post-exposure protection... against the new Makona outbreak strain of Ebola-Zaire virus." Results from human trials with the drug are expected in the second half of this year.
The current outbreak of Ebola virus in West Africa is unprecedented, causing more cases and fatalities than all previous outbreaks combined, and has yet to be controlled. Several post-exposure interventions have been employed under compassionate use to treat patients repatriated to Europe and the United States. However, the in vivo efficacy of these interventions against the new outbreak strain of Ebola virus is unknown.
In the current study, the scientists show that lipid-nanoparticle-encapsulated short interfering RNAs (siRNAs) rapidly adapted to target the Makona outbreak strain of Ebola virus are able to protect 100% of rhesus monkeys against lethal challenge when treatment was initiated at 3 days after exposure while animals were viremic and clinically ill.
Although all infected animals showed evidence of advanced disease including abnormal hematology, blood chemistry and coagulopathy, siRNA-treated animals had milder clinical features and fully recovered, while the untreated control animals succumbed to the disease. These results represent the first successful demonstration of therapeutic anti-Ebola virus efficacy against the new outbreak strain in non-human primates and highlight the rapid development of lipid-nanoparticle-delivered siRNA as a countermeasure against this highly lethal human disease.
A team of researchers in Japan and Thailand reports the first known nondestructive 3-D scan of a single biological cell using a revised form of “picosecond ultrasound.” This new technique can achieve micrometer (millionth of a meter) resolution of live single cells, imaging their interiors in slices separated by 150 nanometers (0.15 micrometer), in contrast to the typical 0.5-millimeter (500 micrometers) spatial resolution of a standard medical MRI scan. The work is a proof-of-principle that could open the door to new ways of studying the physical properties of living cells by imaging them non-destructively in vivo, the researchers say.
The team accomplished the imaging by first placing a cell in solution on a titanium-coated sapphire substrate and then scanning a point source of high-frequency sound, generated by focusing a beam of ultrashort laser pulses onto the titanium film. This was followed by focusing another beam of laser pulses on the same point to pick up tiny changes in optical reflectance caused by the sound traveling through the cell tissue.
“By scanning both beams together, we’re able to build up an acoustic image of the cell that represents one slice of it,” explained co-author Professor Oliver B. Wright, who teaches in the Division of Applied Physics, Faculty of Engineering at Hokkaido University. “We can view a selected slice of the cell at a given depth by changing the timing between the two beams of laser pulses.”
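A rough back-of-envelope calculation shows why the timing control has to sit in the picosecond regime. The sketch below assumes the speed of sound in the cell is close to that of water, about 1,500 m/s; the true value varies with cell composition.

```python
# Back-of-envelope only: how the pump-probe delay maps to imaging depth.
# Assumes the acoustic velocity in the cell is close to that of water
# (~1,500 m/s); the real value depends on the cell's composition.
SOUND_SPEED = 1500.0     # m/s, assumed speed of sound in the cell
SLICE_SPACING = 150e-9   # m, slice separation reported above

# The acoustic pulse needs this much extra travel time per slice, so
# stepping the laser-pulse timing by ~100 ps moves the image one slice deeper.
delay_per_slice = SLICE_SPACING / SOUND_SPEED
print(f"Delay step per 150 nm slice: {delay_per_slice * 1e12:.0f} ps")
```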
“The time required for 3-D imaging [with conventional acoustic microscopes] probably remains too long to be practical,” Wright said. “Building up a 3-D acoustic image, in principle, allows you to see the 3-D relative positions of cell organelles without killing the cell.
“By using an ultraviolet-pulsed laser, we could improve the lateral resolution by about a factor of three — and greatly improve the image quality. And, switching to a diamond substrate instead of sapphire would allow better heat conduction away from the probed area, which, in turn, would enable us to increase the laser power and image quality.”
Researchers at Oregon State University have invented a new technology called WiFO (WiFi Free-space Optics) that can increase the bandwidth of WiFi systems by 10 times, using optical transmission via LED lights. The technology could be integrated with existing WiFi systems to reduce bandwidth problems in crowded locations, such as airport terminals or coffee shops, and in homes where several people have multiple WiFi devices.
Experts say that recent advances in LED technology have made it possible to modulate the LED light more rapidly, opening the possibility of using light for wireless transmission in a “free space” optical communication system. “In addition to improving the experience for users, the two big advantages of this system are that it uses inexpensive components, and it integrates with existing WiFi systems,” said Thinh Nguyen, an OSU associate professor of electrical and computer engineering. Nguyen worked with Alan Wang, an assistant professor of electrical and computer engineering, to build the first prototype.
“I believe the WiFO system could be easily transformed into a marketable product, and we are currently looking for a company that is interested in further developing and licensing the technology,” Nguyen said.
The system can potentially send data at up to 100 megabits per second. Although some current WiFi systems have similar bandwidth, that bandwidth has to be divided among the connected devices, so each user might receive just 5 to 10 megabits per second, whereas the hybrid system could deliver 50 to 100 megabits per second to each user. In a home where telephones, tablets, computers, gaming systems, and televisions may all be connected to the Internet, the increased bandwidth would eliminate problems like video streaming that stalls and buffers (think Netflix).
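The claim reduces to simple division; here it is as a short sketch with illustrative numbers (ten active devices is an assumption, not a figure from the article).

```python
# The shared-bandwidth arithmetic above, with illustrative numbers:
# a 100 Mbps WiFi channel split ten ways versus the hybrid system's
# claimed 50-100 Mbps delivered to each user.
shared_wifi_mbps = 100
active_devices = 10   # assumed number of devices sharing the channel

print(f"Shared WiFi per device: {shared_wifi_mbps / active_devices:.0f} Mbps")
print("Hybrid WiFO per device (claimed): 50-100 Mbps")
```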
The receivers are small photodiodes that cost less than a dollar each and could be connected through a USB port for current systems, or incorporated into the next generation of laptops, tablets, and smartphones. A provisional patent has been secured on the technology, and a paper was presented at the 17th ACM International Conference on Modeling, Analysis and Simulation of Wireless and Mobile Systems.
Scientists have for the first time captured live images of the process of taste sensation on the tongue. The international team imaged single cells on the tongue of a mouse with a specially designed microscope system. "We've watched live taste cells capture and process molecules with different tastes," said biomedical engineer Dr Steve Lee, from the ANU Research School of Engineering.
There are more than 2,000 taste buds on the human tongue, which can distinguish at least five tastes: salty, sweet, sour, bitter and umami. However, the relationship between the many taste cells within a taste bud and our perception of taste has been a long-standing mystery, said Professor Seok-Hyun Yun from Harvard Medical School. "With this new imaging tool we have shown that each taste bud contains taste cells for different tastes," said Professor Yun.
The team also discovered that taste cells responded not only to molecules contacting the surface of the tongue, but also to molecules in the blood circulation. "We were surprised by the close association between taste cells and blood vessels around them," said Assistant Professor Myunghwan (Mark) Choi, from Sungkyunkwan University in South Korea. "We think that tasting might be more complex than we expected, and involve an interaction between the food taken orally and blood composition," he said.
The team imaged the tongue by shining a bright infrared laser on to the mouse's tongue, which caused different parts of the tongue and the flavor molecules to fluoresce. The scientists captured the fluorescence from the tongue with a technique known as intravital multiphoton microscopy. They were able to pick out the individual taste cells within each taste bud, as well as blood vessels up to 240 microns below the surface of the tongue. The breakthrough complements recent studies by other research groups that identified the areas in the brain associated with taste.
The team now hopes to develop an experiment to monitor the brain while imaging the tongue, to track the full process of taste sensation. However, fully understanding the complex interactions that form our basic sense of taste could take years, Dr Lee said. "Until we can simultaneously capture both the neurological and physiological events, we can't fully unravel the logic behind taste," he said.
The research has been published in the latest edition of Nature Publishing Group's Scientific Reports.
By comparing the genes of current-day North and South Americans with African and European populations, an Oxford University study has found the genetic fingerprints of the slave trade and colonization that shaped migrations to the Americas hundreds of years ago.
The study was published in Nature Communications.
'We found that the genetic profile of Americans is much more complex than previously thought,' said study leader Professor Cristian Capelli from the Department of Zoology. The research team analyzed DNA samples collected from people in Barbados, Colombia, the Dominican Republic, Ecuador, Mexico and Puerto Rico, and from African-Americans in the USA. They used a technique called haplotype-based analysis to compare the pattern of genes in these 'recipient populations' to 'donor populations' in areas where migrants to America came from.
One of the great challenges in molecular biology is to determine the three-dimensional structure of large biomolecules such as proteins. But this is a famously difficult and time-consuming task. The standard technique is x-ray crystallography, which involves analyzing the x-ray diffraction pattern from a crystal of the molecule under investigation. That works well for molecules that form crystals easily.
But many proteins, perhaps most, do not form crystals easily. And even when they do, they often take on unnatural configurations that do not resemble their natural shape. So finding another reliable way of determining the 3-D structure of large biomolecules would be a huge breakthrough. Today, Marcus Brubaker and a couple of pals at the University of Toronto in Canada say they have found a way to dramatically improve a 3-D imaging technique that has never quite matched the utility of x-ray crystallography.
The new technique is based on an imaging process called electron cryomicroscopy. This begins with a purified solution of the target molecule that is frozen into a thin film just a single molecule thick. This film is then photographed using a process known as transmission electron microscopy—it is bombarded with electrons and those that pass through are recorded. Essentially, this produces two-dimensional “shadowgrams” of the molecules in the film. Researchers then pick out the individual shadowgrams and use them to work out the three-dimensional structure of the target molecule.
This process is hard for a number of reasons. First, there is a huge amount of noise in each image so even the two-dimensional shadow is hard to make out. Second, there is no way of knowing the orientation of the molecule when the shadow was taken so determining the 3-D shape is a huge undertaking.
The standard approach to solving this problem is little more than guesswork. Dream up a potential 3-D structure for the molecule and then rotate it to see if it can generate all of the shadowgrams in the dataset. If not, change the structure, test it, and so on.
Obviously, this is a time-consuming process. The current state-of-the-art algorithm running on 300 cores takes two weeks to find the 3-D structure of a single molecule from a dataset of 200,000 images.
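To make the guess-and-check idea concrete, here is a toy Python sketch shrunk from 3D to 2D: a candidate structure is scored by how well it can reproduce a set of noisy 1D "shadowgrams", trying every orientation for each one. It illustrates the general approach the article describes, not the Toronto group's algorithm, and all of the shapes and numbers are made up.

```python
# Toy illustration of "dream up a structure, rotate it, and see if it
# can generate all of the shadowgrams". A real search would perturb the
# candidate and keep changes that lower the error; here we just compare
# a wrong guess against the true structure.
import numpy as np
from scipy.ndimage import rotate

rng = np.random.default_rng(0)
N = 32

# Ground-truth 2D "molecule": two rectangular blobs.
truth = np.zeros((N, N))
truth[8:14, 10:22] = 1.0
truth[18:26, 14:19] = 2.0

def shadow(img, angle_deg):
    """1D projection (shadowgram) of img viewed from angle_deg."""
    return rotate(img, angle_deg, reshape=False, order=1).sum(axis=0)

# Simulate 200 noisy shadowgrams taken at unknown orientations.
dataset = [shadow(truth, a) + rng.normal(0, 0.5, N)
           for a in rng.uniform(0, 360, size=200)]

def fit_error(candidate, dataset, angle_step=10.0):
    """Total error of the best-orientation match for every shadowgram."""
    templates = [shadow(candidate, a) for a in np.arange(0, 360, angle_step)]
    return sum(min(np.sum((s - t) ** 2) for t in templates) for s in dataset)

wrong_guess = np.zeros((N, N))
wrong_guess[12:20, 12:20] = 1.5   # a single square blob

print("Error of wrong guess:      ", fit_error(wrong_guess, dataset))
print("Error of correct structure:", fit_error(truth, dataset))
```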
A group of Chinese scientists just reported that they modified the genome of human embryos, something that has never been done in the history of the world, according to a report in Nature News.
A recent biotech discovery - one that has been called the biggest biotech discovery of the century - showed how scientists might be able to modify a human genome when that genome was still just in an embryo.
This could change not only the genetic material of a person, but could also change the DNA they pass on, removing "bad" genetic codes (and potentially adding "good" ones) and taking an active hand in evolution.
After a report uncovered rumours that Chinese scientists were already working on this technology, concerned scientists published an argument that no one should edit the human genome in this way until we better understand the consequences.
But this new paper, published April 18 in the journal Protein and Cell by a Chinese group led by gene-function researcher Junjiu Huang of Sun Yat-sen University, shows that work has already been done, and Nature News spoke to a Chinese source that said at least four different groups are "pursuing gene editing in human embryos."
Specifically, the team tried to modify a gene in a non-viable embryo that would have been responsible for a deadly blood disorder. But they noted in the study that they encountered serious challenges, suggesting there are still significant hurdles before clinical use becomes a reality.
CRISPR, the technology that makes all this possible, can find bad sections of DNA and cut them and even replace them with DNA that doesn't code for deadly diseases, but it can also make unwanted substitutions. Its level of accuracy is still very low.
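The matching problem behind off-target cuts can be sketched in a few lines of Python. The guide and genome sequences below are invented for illustration, and real off-target prediction weighs mismatch positions and chemistry far more carefully, but the gist is that near-matches, not just exact matches, can be cut.

```python
# Toy sketch of why off-target edits happen: a CRISPR guide sequence is
# matched against the genome, but sites differing by a few bases can
# still be recognized and cut. All sequences here are made up.
GUIDE = "GATTACAGATTACAGATTAC"   # hypothetical 20-nt guide sequence

def find_sites(genome: str, guide: str, max_mismatches: int):
    """Yield (position, mismatches) for every near-match of the guide."""
    for i in range(len(genome) - len(guide) + 1):
        window = genome[i:i + len(guide)]
        mm = sum(a != b for a, b in zip(window, guide))
        if mm <= max_mismatches:
            yield i, mm

# A fake genome: the intended target plus a similar-but-different site.
genome = "CCGA" + GUIDE + "TTGG" + GUIDE.replace("C", "T", 2) + "AA"
for pos, mm in find_sites(genome, GUIDE, max_mismatches=3):
    label = "intended target" if mm == 0 else f"off-target ({mm} mismatches)"
    print(f"position {pos}: {label}")
```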
Huang's group successfully introduced the DNA they wanted in only "a fraction" of the 28 embryos that had been "successfully spliced" (they tried 86 embryos at the start and tested 54 of the 71 that survived the procedure). They also found a "surprising number of ‘off-target’ mutations," according to Nature News.
Huang told Nature News that they stopped then because they knew that if they were to do this work medically, that success rate would need to be closer to 100 percent. Our understanding of CRISPR needs to develop significantly before we get there, but this is a new technology that's changing rapidly.
Even though the Chinese team worked with non-viable embryos, embryos that cannot result in a live birth, editing the human genome and changing the DNA of an embryo is considered ethically questionable, because it could lead to more uses of this technology in humans. Changing the DNA of viable embryos could have unpredictable results for future generations, and some researchers want us to understand this better before putting it into practice.
Still, many researchers think this technology (most don't think it's ready to be used yet) could be invaluable. It could eliminate genetic diseases like sickle cell anemia, Huntington's disease, and cystic fibrosis, all devastating illnesses caused by genes that could theoretically be removed. Others fear that once we can do this accurately, it will inevitably be used to create designer humans with specific desired traits. After all, even though this research is considered questionable now, it is still actively being experimented with.
Huang told Nature News that both Nature and Science journals rejected his paper on embryo editing, "in part because of ethical objections." Neither journal commented to Nature News on that statement. Huang plans on trying to improve the accuracy of CRISPR in animal models for now. But CRISPR is reportedly quite easy to use, according to scientists who have previously argued against doing this research in embryos, meaning that it's incredibly likely these experiments will continue.
Every time you make a memory, somewhere in your brain a tiny filament reaches out from one neuron and forms an electrochemical connection to a neighboring neuron. A team of biologists at Vanderbilt University, headed by Associate Professor of Biological Sciences Donna Webb, studies how these connections are formed at the molecular and cellular level.
The filaments that make these new connections are called dendritic spines and, in a series of experiments described in the April 17 issue of the Journal of Biological Chemistry, the researchers report that a specific signaling protein, Asef2, a member of a family of proteins that regulate cell migration and adhesion, plays a critical role in spine formation. This is significant because Asef2 has been linked to autism and the co-occurrence of alcohol dependency and depression.
"Alterations in dendritic spines are associated with many neurological and developmental disorders, such as autism, Alzheimer's disease and Down Syndrome," said Webb. "However, the formation and maintenance of spines is a very complex process that we are just beginning to understand."
Neuron cell bodies produce two kinds of long fibers that weave through the brain: dendrites and axons. Axons transmit electrochemical signals from the cell body of one neuron to the dendrites of another neuron. Dendrites receive the incoming signals and carry them to the cell body. This is the way that neurons communicate with each other.
As they wait for incoming signals, dendrites continually produce tiny flexible filaments called filopodia. These poke out from the surface of the dendrite and wave about in the region between the cells searching for axons. At the same time, biologists think that the axons secrete chemicals of an unknown nature that attract the filopodia. When one of the dendritic filaments makes contact with one of the axons, it begins to adhere and to develop into a spine. The axon and spine form the two halves of a synaptic junction. New connections like this form the basis for memory formation and storage.
The formation of spines is driven by actin, a protein that produces microfilaments and is part of the cytoskeleton. Webb and her colleagues showed that Asef2 promotes spine and synapse formation by activating another protein called Rac, which is known to regulate actin activity. They also discovered that yet another protein, spinophilin, recruits Asef2 and guides it to specific spines. "Once we figure out the mechanisms involved, then we may be able to find drugs that can restore spine formation in people who have lost it, which could give them back their ability to remember," said Webb.
Cardiff scientists have for the first time identified the potential root cause of asthma and an existing drug that offers a new treatment.
A research team led by UC San Francisco scientists has found the genetic signature of enterovirus D68 (EV-D68) in half of California and Colorado children diagnosed with acute flaccid myelitis -- sudden, unexplained muscle weakness and paralysis -- between 2012 and 2014, with most cases occurring during a nationwide outbreak of severe respiratory illness from EV-D68 last fall. The finding strengthens the association between EV-D68 infection and acute flaccid myelitis, which developed in only a small fraction of those who got sick. The scientists could not find any other pathogen capable of causing these symptoms, even after checking patient cerebrospinal fluid for every known infectious agent.
Researchers analyzed the genetic sequences of EV-D68 in children with acute flaccid myelitis and discovered that they all corresponded to a new strain of the virus, designated strain B1, which emerged about four years ago and had mutations similar to those found in poliovirus and another closely related nerve-damaging virus, EV-D70. The B1 strain was the predominant circulating strain detected during the 2014 EV-D68 respiratory outbreak, and the researchers found it both in respiratory secretions and -- for the first time -- in a blood sample from one child as his acute paralytic illness was worsening.
The study also included a pair of siblings, both of whom were infected with genetically identical EV-D68 virus, yet only one of whom developed acute flaccid myelitis.
"This suggests that it's not only the virus, but also patients' individual biology that determines what disease they may present with," said Charles Chiu, MD, PhD, an associate professor of Laboratory Medicine and director of UCSF-Abbott Viral Diagnostics and Discovery Center. "Given that none of the children have fully recovered, we urgently need to continue investigating this new strain of EV-D68 and its potential to cause acute flaccid myelitis."
Among the 25 patients with acute flaccid myelitis in the study, 16 were from California and nine were from Colorado. Eleven were part of geographic clusters of children in Los Angeles and in Aurora, Colorado, who became symptomatic at the same time, and EV-D68 was detected in seven of these patients.
Optical tweezers have been used as an invaluable tool for exerting micro-scale forces on microscopic particles and manipulating their three-dimensional (3-D) positions. Optical tweezers employ a tightly-focused laser whose beam diameter is smaller than one micrometer (1/100 of the thickness of a hair), which generates an attractive force on neighboring microscopic particles, drawing them toward the beam focus. Controlling the position of the beam focus enabled researchers to hold particles and move them freely to other locations, so they coined the name “optical tweezers.”
To locate particles optically trapped by a laser beam, optical microscopes have usually been employed. Optical microscopes measure light signals scattered by the optically-trapped microscopic particles and the positions of the particles in two dimensions. However, it was difficult to quantify a particle's precise position along the optic axis, the direction of the beam, from a single image, which is analogous to the difficulty of judging the front and rear positions of objects with one eye closed, due to the lack of depth perception. Furthermore, it became even more difficult to measure the 3-D positions of particles precisely when the scattered light signals were distorted by optically-trapped particles having complicated shapes, or when other particles occluded the target object along the optic axis.
Professor YongKeun Park and his research team in the Department of Physics at the Korea Advanced Institute of Science and Technology (KAIST) employed an optical diffraction tomography (ODT) technique to measure 3-D positions of optically-trapped particles in high speed. The principle of ODT is similar to X-ray CT imaging commonly used in hospitals for visualizing the internal organs of patients. Like X-ray CT imaging, which takes several images from various illumination angles, ODT measures 3-D images of optically-trapped particles by illuminating them with a laser beam in various incidence angles.
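The CT analogy can be made concrete with a toy reconstruction. The sketch below uses scikit-image's Radon transform utilities to rebuild a cross-section of a made-up "bead next to a cell" from projections at many angles; real ODT reconstructs refractive index from laser interferograms rather than simple absorption, so this stands in for the principle only.

```python
# Sketch of the CT-style principle behind optical diffraction tomography
# (ODT): many views at different illumination angles are combined into a
# cross-section. Plain absorption CT on a toy image stands in for the idea.
import numpy as np
from skimage.transform import radon, iradon

# Toy scene: a small dense "bead" next to a larger, fainter "cell".
N = 128
yy, xx = np.mgrid[:N, :N]
image = ((xx - 45) ** 2 + (yy - 64) ** 2 < 10 ** 2) * 1.0   # small bead
image += ((xx - 85) ** 2 + (yy - 64) ** 2 < 25 ** 2) * 0.5  # larger cell

angles = np.linspace(0.0, 180.0, 60, endpoint=False)  # illumination angles
sinogram = radon(image, theta=angles)                 # one projection per angle
reconstruction = iradon(sinogram, theta=angles, filter_name="ramp")

err = np.abs(reconstruction - image).mean()
print(f"Mean absolute reconstruction error: {err:.3f}")
```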
The KAIST team used optical tweezers to trap a glass bead with a diameter of 2 micrometers and moved the bead toward a white blood cell having complicated internal structures. The team measured the 3-D dynamics of the white blood cell as it responded to the approaching glass bead via ODT at a high acquisition rate of 60 images per second. Since the white blood cell screens the glass bead along the optic axis, a conventionally-used optical microscope could not determine the 3-D positions of the glass bead. In contrast, the present method employing ODT localized the 3-D positions of the bead precisely, as well as measuring the composition of the internal materials of the bead and the white blood cell simultaneously.
Hubble has made more than 1.2 million observations and generated 100 terabytes of data, all while whirling around the Earth at 17,000 mph.
“Hubble found out that there's a supermassive black hole in the center of every galaxy. And that was a surprise,” Garcia said. “The black holes and the galaxies know about each other. The size of the black hole is in lockstep with the size of the galaxy."
Hubble doesn’t just stare into deep space; the telescope is just as good at observing objects closer to home. Hubble has provided scientists with images of Pluto’s four moons and photographic evidence that Jupiter’s Great Red Spot has been shrinking, as well as treating viewers to the fragments of a comet crashing into the gas planet.
Garcia’s favorite Hubble image captures the Andromeda galaxy’s nucleus, 2 million light-years away, which is actually pretty close. "It's a double nucleus, which is really rare. And it surrounds a supermassive black hole," Garcia said. "Only an astronomer would love it. It’s not an image the public would go wild over."
But there are plenty of images the public has gone wild over. Hubble images are embedded in our culture – seen in frames on walls, on computer screen savers and postage stamps. “An image captures people’s imaginations right away,” said John Trauger, a senior scientist at the Jet Propulsion Laboratory in La Cañada-Flintridge. “Hubble has really helped the idea of communicating science.”
Trauger helped process one of Hubble’s most recognizable images, featuring a dying star throwing dust back into space. “MyCn18,” an hourglass-shaped nebula with a green eye-like center, was photographed in 1996 and made the cover of both National Geographic and the Pearl Jam album “Binaural.”
Trauger was also the principal investigator of JPL’s mission to repair Hubble after it was launched with a flaw that rendered all its instruments “unfocusable.” In 1993, Space Shuttle Endeavour installed a new camera, called Wide Field Planetary Camera 2, giving us a view into the deepest regions of space. The camera was replaced again with a third version in 2009. With Hubble, scientists can see the same amount of detail in objects 10 times farther than they would be able to get from a land-based observatory. That allows humans to view a region of space 1,000 times larger than what we can see from the ground, Trauger said.
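The jump from "10 times farther" to "a region 1,000 times larger" is just the cube law for volume:

```latex
% Observable volume scales with the cube of the limiting distance d:
\[
V \propto d^{3}, \qquad \left(\frac{10\,d}{d}\right)^{3} = 10^{3} = 1000
\]
```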
Hubble’s quest to capture the universe in images has benefited Earthlings in other ways, too. As NASA and the military push for more advanced digital camera technologies, those improvements eventually find their way into our pocket-sized devices.
Twenty-five years after its launch and six years after its last servicing mission, Hubble is at its scientific peak of productivity. NASA expects the satellite to work well into the 2020s. By 2037, the agency estimates, atmospheric drag will start to take its toll. Then they’ll think about boosting it up or bringing it back.
Researchers have uncovered the first evidence of a genetic link between prodigy and autism. The scientists found that child prodigies in their sample share some of the same genetic variations with people who have autism. These shared genetic markers occur on a single chromosome, according to the researchers from The Ohio State University and Nationwide Children’s Hospital in Columbus.
The findings confirm a hypothesis made by Joanne Ruthsatz, co-author of the study and assistant professor of psychology at Ohio State’s Mansfield campus. In a previous study, Ruthsatz and a colleague had found that half of the prodigies in their sample had a family member or a first- or second-degree relative with an autism diagnosis.
“Based on my earlier work, I believed there had to be a genetic connection between prodigy and autism and this new research provides the first evidence to confirm that,” Ruthsatz said.
The new study appears online in the journal Human Heredity.
These findings are the first step toward answering the big question, Ruthsatz said. “We now know what connects prodigy with autism. What we want to know is what distinguishes them. We have a strong suspicion that there’s a genetic component to that, as well, and that’s the focus of our future work,” she said.
The Human Heredity study involved five child prodigies and their families that Ruthsatz has been studying, some for many years. Each of the prodigies had received national or international recognition for a specific skill, such as math or music. All took tests to confirm their exceptional skills.
The researchers took saliva samples from the prodigies, and from between four and 14 of each prodigy’s family members. Each prodigy had between one and five family members in the study who had received a diagnosis on the autism spectrum.
Researchers at the MIT Media Laboratory are developing a new wearable device that turns the user’s thumbnail into a miniature wireless track pad. They envision that the technology could let users control wireless devices when their hands are full — answering the phone while cooking, for instance. It could also augment other interfaces, allowing someone texting on a cellphone, say, to toggle between symbol sets without interrupting his or her typing. Finally, it could enable subtle communication in circumstances that require it, such as sending a quick text to a child while attending an important meeting.
The researchers describe a prototype of the device, called NailO, in a paper they’re presenting next week at the Association for Computing Machinery’s Computer-Human Interaction conference in Seoul, South Korea.
According to Cindy Hsin-Liu Kao, an MIT graduate student in media arts and sciences and one of the new paper’s lead authors, the device was inspired by the colorful stickers that some women apply to their nails. “It’s a cosmetic product, popular in Asian countries,” says Kao, who is Taiwanese. “When I came here, I was looking for them, but I couldn’t find them, so I’d have my family mail them to me.”
Indeed, the researchers envision that a commercial version of their device would have a detachable membrane on its surface, so that users could coordinate surface patterns with their outfits. To that end, they used capacitive sensing — the same kind of sensing the iPhone’s touch screen relies on — to register touch, since it can tolerate a thin, nonactive layer between the user’s finger and the underlying sensors.
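As a rough illustration of how a capacitive track pad like this turns raw electrode readings into a touch position, the sketch below takes the signal-weighted centroid of per-electrode capacitance changes. This is a generic technique, not NailO's published firmware, and the grid size and readings are invented.

```python
# Illustrative sketch (not NailO's firmware): locating a fingertip on a
# small capacitive grid by taking the signal-weighted centroid of the
# per-electrode capacitance changes. A thin non-active cover layer only
# attenuates the signal, so the same method still works through it.
import numpy as np

def touch_position(delta_cap: np.ndarray) -> tuple[float, float]:
    """delta_cap: 2D array of capacitance changes, one value per electrode."""
    weights = np.clip(delta_cap, 0.0, None)   # ignore negative noise
    total = weights.sum()
    if total < 1e-9:
        raise ValueError("no touch detected")
    rows, cols = np.indices(delta_cap.shape)
    return (rows * weights).sum() / total, (cols * weights).sum() / total

# Fake 4x4 sensor reading with a touch near the lower-right corner.
reading = np.array([[0.0, 0.0, 0.0, 0.0],
                    [0.0, 0.1, 0.2, 0.1],
                    [0.0, 0.2, 0.9, 0.4],
                    [0.0, 0.1, 0.4, 0.3]])
print(touch_position(reading))   # approx (2.1, 2.1): row, column
```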
A team from Disney Research, Carnegie Mellon University and Cornell University have devised a 3-D printer that layers together laser-cut sheets of fabric to form soft, squeezable objects such as phone cases and toys. These objects can have complex geometries and incorporate circuitry that makes them interactive.
“Today’s 3-D printers can easily create custom metal, plastic, and rubber objects,” said Jim McCann, associate research scientist at Disney Research Pittsburgh. “But soft fabric objects, like plush toys, are still fabricated by hand. Layered fabric printing is one possible method to automate the production of this class of objects.”
The fabric printer is similar in principle to laminated object manufacturing, which takes sheets of paper or metal that have each been cut into a 2-D shape and then bonds them together to form a 3-D object. Fabric presents particular cutting and handling challenges, however, which the Disney team has addressed in the design of its printer.
The latest soft printing apparatus includes two fabrication surfaces: an upper cutting platform and a lower bonding platform. Fabric is fed from a roll into the device, where a vacuum holds the fabric up against the upper cutting platform while a laser cutting head moves below. The laser cuts a rectangular piece out of the fabric roll, then cuts the layer’s desired 2-D shape or shapes within that rectangle. This second set of cuts is left purposefully incomplete so that the shapes receive support from the surrounding fabric during the fabrication process.
Once the cutting is complete, the bonding platform is raised up to the fabric and the vacuum is shut off to release the fabric. The platform is lowered and a heated bonding head is deployed, heating and pressing the fabric against previous layers. The fabric is coated with a heat-sensitive adhesive, so the bonding process is similar to a person using a hand iron to apply non-stitched fabric ornamentation onto a costume or banner.
Once the process is complete, the surrounding support fabric is torn away by hand to reveal the 3-D object. The researchers demonstrated this technique by using 32 layers of 2-millimeter-thick felt to create a 2 ½-inch bunny. The process took about 2 ½ hours.
Two types of material can be used to create objects by feeding one roll of fabric into the machine from left to right, while a second roll of a different material is fed front to back. If one of the materials is conductive, the equivalent of wiring can be incorporated into the device. The researchers demonstrated the possibilities by building a fabric starfish that serves as a touch sensor, as well as a fabric smartphone case with an antenna that can harvest enough energy from the phone to light an LED.
The first Polynesian settlers caused nearly 1,000 species of flightless birds, like the dodo, to go extinct, new research suggests.
Almost 4,000 years ago, tropical Pacific Islands were an untouched paradise, but the arrival of the first people in places like Hawaii and Fiji caused irreversible damage to these natural havens, due to overhunting and deforestation. As a result, birds disappeared. But understanding the scale and extent of these extinctions has been hampered by uncertainties in the fossil record.
Professor Tim Blackburn, Director of ZSL's Institute of Zoology, says: "We studied fossils from 41 tropical Pacific islands and, using new techniques, we were able to gauge how many extra species of bird disappeared without leaving any trace." The team found that 160 species of non-passerine land birds (non-perching birds, which generally have feet designed for specific functions, for example webbed for swimming) went extinct without a trace on these islands alone after the first humans arrived. "If we take into account all the other islands in the tropical Pacific, as well as seabirds and songbirds, the total extinction toll is likely to have been around 1,300 bird species," Professor Blackburn added.
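The kind of inference Professor Blackburn describes, gauging extinctions that left no trace, can be sketched with a simple detection-rate correction. The numbers below are invented, and the paper's actual model is a more careful Bayesian mark-recapture analysis, but the logic is the same: if fossils capture only a fraction of species, the fossil count of extinct species must be scaled up.

```python
# Hedged sketch of the style of inference described above (not the
# paper's Bayesian model): species still alive tell us how often the
# fossil record "detects" a species, and that rate is used to scale up
# the count of extinct species actually found as fossils.
surviving_species = 100        # hypothetical: known living species on an island
surviving_with_fossils = 40    # of those, how many appear in the fossil record
extinct_in_fossil_record = 24  # extinct species actually found as fossils

detection_rate = surviving_with_fossils / surviving_species   # 0.4
estimated_total_extinct = extinct_in_fossil_record / detection_rate
print(f"Estimated true extinctions: {estimated_total_extinct:.0f}")  # ~60
```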
Species lost include several species of moa-nalos, large flightless waterfowl from Hawai'i, and the New Caledonian Sylviornis, a relative of the game birds (pheasants, grouse, etc.) which weighed in at around 30 kg, three times as heavy as a swan. Certain islands and bird species were particularly vulnerable to hunting and habitat destruction. Small, dry islands lost more species because they were more easily deforested and had fewer places for birds to hide from hunters. Flightless birds were over 30 times more likely to become extinct than those that could fly.
Bird extinctions in the tropical Pacific did not stop with these losses. Forty more species disappeared after Europeans arrived, and many more species are still threatened with extinction today.