NOTE: To subscribe to the RSS feed of Amazing Science, copy http://www.scoop.it/t/amazing-science/rss.xml into the URL field of your browser and click "subscribe".
This newsletter is aggregated from over 1,450 news sources.
NOTE: All articles in the amazing-science newsletter can also be sorted by topic. To do so, click the FIND button (symbolized by the funnel at the top right of the screen) to display all relevant postings sorted by topic.
You can also type your own query, for example if you are looking for articles involving "dna" as a keyword.
MOST_READ • 3D_printing • aging • AI • anthropology • art • astronomy • bigdata • bioinformatics • biology • biotech • chemistry • computers • cosmology • education • environment • evolution • future • genetics • genomics • geosciences • green_energy • history • language • map • material_science • math • med • medicine • microscopy • nanotech • neuroscience • paleontology • photography • photonics • physics • postings • robotics • science • technology • video
The two mirrors of an Arizona telescope reveal heat rising from the horseshoe-shaped Loki lava lake on the moon Io.
For the first time ever, researchers on Earth have been able to capture detailed images of the heat rising from an active volcano on another body in the solar system. Using its twin 8.4-meter mirrors, the Large Binocular Telescope Observatory in Arizona managed to spy on a large lava lake on Jupiter’s moon Io, located some 500 million miles (800 million kilometers) away.
The innermost moon of the gas giant, which is slightly larger than Earth’s own moon, is considered the most geologically active body in the entire solar system. At least 300 volcanic structures dot its surface. The largest of these features is a volcanic pit named after Loki, the Norse trickster god. The depression is filled with a sulfur-encrusted lake measuring around 125 miles (200 kilometers) across.
The Voyager 1 spacecraft first discovered Io’s volcanism back on March 5, 1979, when it barnstormed the Jovian moon and snapped dramatic images of a giant, nearly 200-mile-tall (300-kilometer) plume rising into space, later revealed to be emanating from Loki.
The new high-resolution infrared images of Loki produced by the Large Binocular Telescope show an active horseshoe-shaped lava lake with multiple bright spots, representing heat rising from the lake surface. That exceptional detail is thanks to the telescope’s ability to digitally stitch together its two mirrors, giving an image like that of one single optical unit spanning nearly 75 feet (23 meters) across.
“In this way, for the first time we can measure the brightness coming from different regions within the lake,” explained Al Conrad, the lead of the study and a scientist at the Large Binocular Telescope Observatory (LBTO), in a press statement.
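A quick back-of-envelope check shows why the combined aperture matters. Assuming a thermal-infrared observing wavelength of about 4.8 micrometres (a typical M-band value, not stated in the article), the diffraction limit λ/D for the ~23-meter baseline at Io's distance works out to features of order 100–200 kilometers, small enough to pick out separate regions within the 200-kilometer lake:

```python
# Back-of-envelope diffraction limit for the combined LBT aperture.
# The observing wavelength is an assumption (thermal-infrared M band);
# the baseline and distance figures are taken from the article.
WAVELENGTH_M = 4.8e-6   # observing wavelength in metres (assumed)
BASELINE_M = 23.0       # effective aperture spanned by the two mirrors
DISTANCE_M = 8.0e11     # ~800 million kilometres to Io

theta_rad = WAVELENGTH_M / BASELINE_M          # angular resolution ~ lambda/D
resolution_km = theta_rad * DISTANCE_M / 1000  # projected size on Io

print(f"angular resolution: {theta_rad * 206265 * 1000:.0f} mas")
print(f"smallest resolvable feature on Io: ~{resolution_km:.0f} km")
```

A single 8.4-meter mirror would do roughly three times worse, which is the difference between seeing Loki as a blur and mapping bright spots within it.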
Scientists are attempting to control the weather by using lasers to create clouds, induce rain and even trigger lightning.
Professor Jean-Pierre Wolf and Dr Jerome Kasparian, both biophotonics experts at the University of Geneva, have now organised a conference at the WMO next month in an attempt to find ways of speeding up research on the topic. They said: “Ultra-short lasers launched into the atmosphere have emerged as a promising prospective tool for weather modulation and climate studies.
“Such prospects include lightning control and laser-assisted condensation.”
There is a long history of attempts by scientists to control the weather, including using techniques such as cloud seeding.
This involves spraying small particles and chemicals into the air to induce water vapour to condense into clouds.
In the 1960s the United States experimented with silver iodide in an attempt to weaken hurricanes before they made landfall. The USSR also reportedly flew cloud-seeding missions in an attempt to create rain clouds that would keep radioactive fallout from the Chernobyl nuclear disaster away from Moscow.
More recently the Russian Air force has also been reported to have used bags of cement to seed clouds.
Before the 2008 Olympic Games in Beijing, the Chinese authorities used aircraft and rockets to release chemicals into the atmosphere.
Other countries have been reported to be experimenting with cloud seeding to prevent flooding or smog.
However, Professor Wolf, Dr Kasparian and their colleagues believe that lasers could provide an easier and more controllable method of changing the weather. They began studying lasers for their use as a way of monitoring changes in the air and detecting aerosols high in the atmosphere.
Experiments using varying pulses of near-infrared and ultraviolet laser light have, however, shown that the lasers cause water to condense. The researchers subsequently found that the lasers also induce tiny ice crystals to form, a crucial step in the formation of clouds and eventual rainfall.
In new research published in the Proceedings of the National Academy of Sciences, Professor Wolf said the laser beams create plasma channels in the air that cause ice to form.
Researchers have captured the first 3D video of a living algal embryo (Volvox sp.) turning itself inside out, from a sphere to a mushroom shape and back again. The results could help unravel the mechanical processes at work during a similar process in animals, which has been called the "most important time in your life."
Researchers from the University of Cambridge have captured the first three-dimensional images of a live embryo turning itself inside out. The images, of embryos of the green alga Volvox, provide an ideal test case for understanding how a remarkably similar process works in early animal development.
Using fluorescence microscopy to observe the Volvox embryos, the researchers were able to test a mathematical model of morphogenesis - the origin and development of an organism's structure and form - and understand how the shape of cells drives the process of inversion, when the embryo turns itself from a sphere to a mushroom shape and back again. Their findings are published today (27 April) in the journal Physical Review Letters.
The processes observed in the Volvox embryo are similar to the process of gastrulation in animal embryos - which biologist Lewis Wolpert called "the most important time in your life." During gastrulation, the embryo folds inwards into a cup-like shape, forming the primary germ layers which give rise to all the organs in the body. Volvox embryos undergo a similar process, but with an additional twist: the embryos literally turn themselves right-side out during the process.
Gastrulation in animals results from a complex interplay of cell shape changes, cell division and migration, making it difficult to develop a quantitative understanding of the process. However, Volvox embryos complete their shape change only by changing cell shapes and the location of the connections between cells, and this simplicity makes them an ideal model for understanding cell sheet folding.
In Volvox embryos, the process of inversion begins when the embryos start to fold inward, or invaginate, around their middle, forming two hemispheres. Next, one hemisphere moves inside the other, an opening at the top widens, and the outer hemisphere glides over the inner hemisphere, until the embryo regains its spherical shape. This remarkable process takes place over approximately one hour.
Once thought irreversible, vision loss sometimes associated with stroke may be treatable. By doing a set of vigorous visual exercises on a computer every day for several months, patients who had gone partially blind as a result of suffering a stroke were able to regain some vision. Some patients were even able to drive again.
“We were very surprised when we saw the results from our first patients,” says Krystel Huxlin, the neuroscientist and associate professor who led the study of seven patients at the University of Rochester’s Eye Institute. “This is a type of brain damage that clinicians and scientists have long believed you simply can’t recover from. It’s devastating, and patients are usually sent home to somehow deal with it the best they can.”
The results are a cause for hope for patients with vision damage from stroke or other causes, says Huxlin. The work also shows a remarkable capacity for “plasticity” in damaged, adult brains. It shows that the brain can change a great deal in older adults and that some brain regions are capable of covering for other areas that have been damaged.
Huxlin studied seven people who had suffered a stroke that damaged an area of the brain known as the primary visual cortex or V1, which serves as the gateway to the rest of the brain for all the visual information that comes through our eyes. V1 passes visual information along to dozens of other brain areas, which process and make sense of the information, ultimately allowing us to see.
Patients with damage to the primary visual cortex have severely impaired vision: they typically find it difficult or impossible to read, drive, or carry out ordinary chores like grocery shopping. They may walk into walls, often cannot navigate stores without bumping into goods or other people, and may be completely unaware of cars on the road approaching from the left or right.
The air around us consists of countless molecules moving around randomly. It would be utterly impossible to track them all and describe all their trajectories, but for many purposes this is not necessary. Instead, properties that describe the collective behaviour of all the molecules can be found, such as the air pressure or the temperature, which results from the particles' energy. On a hot summer's day the molecules move at about 430 meters per second; in winter it is a bit less.
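Speeds of that order can be sketched from the Maxwell-Boltzmann distribution. A minimal example, taking air as an ideal gas with a molar mass of roughly 0.029 kg/mol (an assumption; the article does not say which speed measure or temperatures it uses), computes the most probable molecular speed:

```python
import math

R = 8.314      # gas constant, J/(mol K)
M_AIR = 0.029  # approximate molar mass of air, kg/mol (assumed)

def most_probable_speed(temp_kelvin: float) -> float:
    """Most probable molecular speed v_p = sqrt(2RT/M) of a Maxwell-Boltzmann gas, in m/s."""
    return math.sqrt(2 * R * temp_kelvin / M_AIR)

print(most_probable_speed(303))  # a hot summer day, ~30 °C
print(most_probable_speed(263))  # a winter day, ~-10 °C
```

This gives roughly 420 m/s in summer versus under 390 m/s in winter, the same ballpark as the figures quoted above.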
This statistical view (which was developed by the Viennese physicist Ludwig Boltzmann) has proved to be extremely successful and describes many different physical systems, from pots of boiling water to phase transitions in liquid crystals in LCD-displays. However, in spite of huge efforts, open questions have remained, especially with regard to quantum systems. How the well-known laws of statistical physics emerge from many small quantum parts of a system remains one of the big open questions in physics.
Scientists at the Vienna University of Technology have now succeeded in studying the behaviour of a quantum physical multi-particle system in order to understand the emergence of statistical properties. The team of Professor Jörg Schmiedmayer used a special kind of microchip to catch a cloud of several thousand atoms and cool them to close to absolute zero (-273°C), where their quantum properties become visible.
The experiment showed remarkable results: When the external conditions on the chip were changed abruptly, the quantum gas could take on different temperatures at once. It can be hot and cold at the same time. The number of temperatures depends on how exactly the scientists manipulate the gas. "With our microchip we can control the complex quantum systems very well and measure their behaviour", says Tim Langen, leading author of the paper published in Science. There had already been theoretical calculations predicting this effect, but it has never been possible to observe it and to produce it in a controlled environment.
The experiment helps scientists to understand the fundamental laws of quantum physics and their relationship with the statistical laws of thermodynamics. This is relevant for many different quantum systems, maybe even for technological applications. Finally, the results shed some light on the way our classical macroscopic world emerges from the strange world of tiny quantum objects.
The McMurdo Dry Valleys form the largest ice-free region in Antarctica. They also make up the coldest and driest environments on the planet. Yet, despite these extreme conditions, the valleys' surface is home to a large diversity of microbial life. Now, new evidence suggests that a vast network of salty liquid water exists 1,000 feet below the surface — a finding that lends support to the idea that microbial life may exist beneath Antarctica's surface as well. The finding isn't just exciting for Earth ecologists, however; planetary scientists are intrigued as well. Indeed, finding salty liquid water below Antarctica provides strong support for the idea that Mars, an environment that resembles Antarctic summers, may have similar aquifers beneath its surface — aquifers that could support microscopic life.
"Before this study, we didn't know to what extent life could exist beneath the glaciers, beneath hundreds of meters of ice, beneath ice covered lakes and deep into the soil," says Ross Virginia, an ecosystem environmentalist at Dartmouth College and a co-author of the study, published in Nature Communications today. This study opens up "possibilities for better understanding the combinations of factors that might be found on other planets and bodies outside of the Earth" — including Mars.
Approximately 4.5 billion years ago, 20 percent of the Martian surface was likely covered in water. Today, Mars may still be home to small amounts of salty liquid water, which would exist on the planet's soil at night before evaporating during the daytime. Taken together, these findings are pretty exciting for those who hope to discover life on Mars — water, after all, is a requirement for life. Unfortunately, researchers have also pointed out that the Martian surface is far too cold for the survival of any known forms of life. That's why some scientists have started to wonder about what may lie beneath it. If the extreme environmental conditions found in Antarctica's subsurface contain all the elements necessary for life, it's possible that the Martian subsurface does too.
In the study, researchers flew a helicopter over more than 114 square miles of Taylor Valley, the southernmost of the three dry valleys. Below the helicopter they suspended a large antenna. The technology, called SkyTEM, is an airborne electromagnetic sensor that generates an electromagnetic field capable of penetrating ice or soil in the dry valley. As the antenna surveyed the valley, the reflected signal returned information that was altered from the original depending on whether it encountered brine, frozen soil or ice, Virginia explains. "So basically we're inferring the distribution of those types of materials based on what is reflected back to these helicopters flying over the surface of Antarctica."
Nanotronics Imaging, an Ohio-based company backed by PayPal founder and early Facebook investor Peter Thiel, makes atomic-scale microscopes that both researchers and industrial manufacturers can use in the production of nanoscale materials. Today at the Tribeca Disruptive Innovation Awards the company announced a new endeavor: the ability to view the microscopes’ output using virtual reality headsets like the Rift.
The new product, nVisible, will enable Nanotronics users to do virtual walkthroughs of nano-structures, which the company says will enable them to better visualize and understand the materials they’re working with. But most importantly, it could help manufacturers create more reliable processes for building nanoscale products—which has historically been a huge hurdle in working with such incredibly small materials.
Since its first use in the 1980s – a breakthrough dramatised in the recent ITV series Code of a Killer – DNA profiling has been a vital tool for forensic investigators. Now researchers at the University of Huddersfield have overcome one of its few limitations by successfully testing a technique for distinguishing between the DNA – or genetic fingerprint – of identical twins.
The probability of a DNA match between two unrelated individuals is about one in a billion. For two full siblings, the probability drops to one in 10,000. But identical twins present exactly the same DNA profile as each other, and this has created legal conundrums when it was not possible to tell which of the pair was guilty of a crime. This has led to prosecutions being dropped rather than run the risk of convicting the wrong twin.
Now Dr Graham Williams and his Forensic Genetics Research Group at the University of Huddersfield have developed a solution to the problem and published their findings in the journal Analytical Biochemistry. Previous methods have been proposed for distinguishing the DNA of twins. One is termed “mutation analysis”, where the whole genome of both twins is sequenced to identify mutations that might have occurred in one of them.
“If such a mutation is identified at a particular location in the twin, then that same particular mutation can be specifically searched for in the crime scene sample. However, this is very expensive and time-consuming and is unlikely to be paid for by cash-strapped police forces,” according to Dr Williams, who has shown that a cheaper, quicker technique is available.
It is based on the concept of DNA methylation, which is effectively the molecular mechanism that turns various genes on and off. As twins get older, the degree of difference between them grows as they are subjected to increasingly different environments. For example, one might take up smoking, or one might have a job outdoors and the other a desk job. This will cause changes in the methylation status of the DNA.
In order to carry out speedy, inexpensive analysis of this, Dr Williams and his team propose a technique named “high resolution melt curve analysis” (HRMA). “What HRMA does is to subject the DNA to increasingly high temperatures until the hydrogen bonds break, known as the melting temperature. The more hydrogen bonds that are present in the DNA, the higher the temperature required to melt them,” explains Dr Williams.
“Consequently, if one DNA sequence is more methylated than the other, then the melting temperatures of the two samples will differ – a difference that can be measured, and which will establish the difference between two identical twins.”
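HRMA itself is an instrument-based measurement, but the underlying idea, that base composition sets the melting temperature, can be illustrated with the crude Wallace rule for short sequences. In methylation-sensitive melt analysis, DNA is typically bisulfite-treated first, which converts unmethylated cytosines to thymine while methylated cytosines are retained. The sequences below are invented toys, and the rule is far too simple for real HRMA work:

```python
def wallace_tm(seq: str) -> float:
    """Rough melting temperature (°C) of a short DNA duplex: 2*(A+T) + 4*(G+C)."""
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc

# After bisulfite treatment, unmethylated cytosines read as thymine,
# while methylated cytosines stay as cytosine (toy sequences, invented):
methylated = "ATCGCGTACGGC"    # retains its G·C pairs, so it melts higher
unmethylated = "ATTGTGTATGGT"  # same region with every C converted to T

print(wallace_tm(methylated), wallace_tm(unmethylated))
```

The more methylated sample keeps more triple-hydrogen-bonded G·C pairs after conversion, so its melt curve shifts to a higher temperature, which is the difference HRMA measures.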
It’s the holy grail in energy production: produce a fuel that is both carbon neutral and can be poured directly into our current cars without the need to retrofit. There are scores of companies out there trying to do just that using vegetable oil, algae, and even the microbes found in panda poop to turn bamboo into fuel.
This week, German car manufacturer Audi has declared that they have been able to create an "e-diesel", a synthetic diesel fuel, by using renewable energy to produce a liquid fuel from nothing more than water and carbon dioxide. After a commissioning phase of just four months, the plant in Dresden operated by clean tech company Sunfire has managed to produce its first batch of what they’re calling “blue crude.” The resulting liquid is composed of long-chain hydrocarbon compounds, similar to fossil fuels, but free from sulfur and aromatics, and therefore burns soot-free.
The first step in the process involves harnessing renewable energy through solar, wind or hydropower. This energy is then used to heat water to temperatures in excess of 800°C (1,472°F). The steam is then broken down into oxygen and hydrogen through high-temperature electrolysis, in which an electric current is passed through the steam.
The hydrogen is then removed and mixed with carbon monoxide under high heat and pressure, creating a hydrocarbon product they’re calling "blue crude." Sunfire claim that the synthetic fuel is not only more environmentally friendly than fossil fuel, but that the efficiency of the overall process—from renewable power to liquid hydrocarbon—is very high at around 70%. The e-diesel can then be either mixed with regular diesel, or used as a fuel in its own right.
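The quoted 70% figure allows a rough estimate of the electricity needed per litre of fuel. Assuming diesel's usual energy density of about 36 MJ per litre (a textbook value, not from the article):

```python
DIESEL_MJ_PER_LITRE = 36.0  # typical volumetric energy density (assumed)
CHAIN_EFFICIENCY = 0.70     # power-to-liquid efficiency quoted by Sunfire

electricity_mj = DIESEL_MJ_PER_LITRE / CHAIN_EFFICIENCY
electricity_kwh = electricity_mj / 3.6  # 1 kWh = 3.6 MJ

print(f"~{electricity_kwh:.1f} kWh of renewable electricity per litre of e-diesel")
```

That is roughly 14 kWh of renewable electricity per litre produced, which makes clear why cheap surplus wind and solar power is central to the economics of such plants.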
But all may not be as it seems. The process used by Audi is actually the Fischer-Tropsch process, which has been known to scientists since the 1920s. It was even used by the Germans to turn coal into diesel during the Second World War when fuel supplies ran short. The process is currently used by many different companies all around the world, especially in countries where reserves of oil are low but reserves of other fossil fuels, such as gas and coal, are high.
And it would seem that Audi aren’t the first to think about using biogas facilities to produce carbon neutral biofuels either. Another German company, Choren, has already made an attempt at producing biofuel using biogas and the Fischer-Tropsch process. Backed by Shell and Volkswagen, the company had all the support and funding it needed, but in 2011 it filed for bankruptcy due to impracticalities in the process.
Audi readily admits that none of the processes they use are new, but claim it’s how they’re going about it that is. They say that increasing the temperature at which the water is split increases the efficiency of the process and that the waste heat can then be recovered. Whilst their announcement might not be heralding a new fossil fuel-free era, the tech of turning green power into synthetic fuel could have applications as a battery to store excess energy produced by renewables.
A crucial bottleneck that prevents life-saving surgery being performed in many parts of the world is the lack of trained surgeons. One way to get around this is to make better use of the ones that are available. Sending them over great distances to perform operations is clearly inefficient because of the time that has to be spent travelling. So an increasingly important alternative is the possibility of telesurgery with an expert in one place controlling a robot in another that physically performs the necessary cutting and dicing. Indeed, the sale of medical robots is increasing at a rate of 20 percent per year.
But while the advantages are clear, the disadvantages have been less well explored. Telesurgery relies on cutting edge technologies in fields as diverse as computing, robotics, communications, ergonomics, and so on. And anybody familiar with these areas will tell you that they are far from failsafe.
Today, Tamara Bonaci and pals at the University of Washington in Seattle examine the special pitfalls associated with the communications technology involved in telesurgery. In particular, they show how a malicious attacker can disrupt the behavior of a telerobot during surgery and even take over such a robot, the first time a medical robot has been hacked in this way.
The first telesurgery took place in 2001 with a surgeon in New York successfully removing the gall bladder of a patient in Strasbourg in France, more than 6,000 kilometers away. The communications ran over a dedicated fiber provided by a telecommunications company specifically for the operation. That’s an expensive option since dedicated fibers can cost tens of thousands of dollars.
Since then, surgeons have carried out numerous remote operations and begun to experiment with ordinary communications links over the Internet, which are significantly cheaper. Although there are no recorded incidents in which the communications infrastructure has caused problems during a telesurgery operation, there are still questions over security and privacy which have never been fully answered.
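The article does not describe specific fixes, but a standard mitigation for an unauthenticated command channel is to attach a keyed message authentication code and a sequence number to every packet, so that tampered commands are rejected and stale ones cannot be replayed. A minimal sketch (the key handling, packet fields and command names are hypothetical):

```python
import hashlib
import hmac
import json

SECRET_KEY = b"shared-session-key"  # hypothetical key established out of band

def sign_command(seq: int, command: dict) -> dict:
    """Attach a sequence number and an HMAC tag to a telerobot command."""
    payload = json.dumps({"seq": seq, "cmd": command}, sort_keys=True).encode()
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"seq": seq, "cmd": command, "tag": tag}

def verify_command(packet: dict, last_seq: int) -> bool:
    """Reject forged packets (bad tag) and replayed ones (stale sequence number)."""
    payload = json.dumps({"seq": packet["seq"], "cmd": packet["cmd"]},
                         sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, packet["tag"]) and packet["seq"] > last_seq

packet = sign_command(42, {"axis": "x", "move_mm": 1.5})
print(verify_command(packet, last_seq=41))  # authentic and fresh
packet["cmd"]["move_mm"] = 50.0             # attacker tampers with the motion
print(verify_command(packet, last_seq=41))  # tag no longer matches
```

Authentication alone does not hide the surgeon's movements from an eavesdropper; that would additionally require encrypting the channel.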
Ido Bachelet, who was previously at Harvard’s Wyss Institute in Boston, Massachusetts and Israel’s Bar-Ilan University, intends to treat a patient who has been given six months to live. The patient is set to receive an injection of DNA nanocages designed to interact with and destroy leukemia cells without damaging healthy tissue. Speaking in December, he said: ‘Judging from what we saw in our tests, within a month that person is going to recover.’
DNA nanocages can be programmed to independently recognize target cells and deliver payloads, such as cancer drugs, to these cells.
George Church, who is involved in the research at the Wyss Institute explained the idea of the microscopic robots is to make a ‘cage’ that protects a fragile or toxic payload and ‘only releases it at the right moment.’
These nanostructures are built upon a single strand of DNA which is combined with short synthetic strands of DNA designed by the experts. When mixed together, they self-assemble into a desired shape, which in this case looks a little like a barrel.
Dr Bachelet said: 'The nanorobot we designed actually looks like an open-ended barrel, or clamshell that has two halves linked together by flexible DNA hinges and the entire structure is held shut by latches that are DNA double helixes.’
A complementary piece of DNA is attached to a payload, which enables it to bind to the inside of the biological barrel. The double helixes stay closed until specific molecules or proteins on the surface of cancer cells act as a 'key' to open the ‘barrel’ so the payload can be deployed.
'The nanorobot is capable of recognizing a small population of target cells within a large healthy population,’ Dr Bachelet continued.
‘While all cells share the same drug target that we want to attack, only those target cells that express the proper set of keys open the nanorobot and therefore only they will be attacked by the nanorobot and by the drug.’
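The ‘set of keys’ behaviour described here is essentially an AND gate over cell-surface markers: the barrel stays shut unless every key is present. A toy model of that logic (the marker names are invented for illustration and are not from the article):

```python
# Toy model of the nanorobot's lock: the barrel opens only for cells
# presenting the full set of surface markers ("keys"). Names are invented.
LOCK_KEYS = {"MARKER_A", "MARKER_B"}  # hypothetical aptamer targets

def barrel_opens(cell_surface_markers: set) -> bool:
    """AND-gate behaviour: every key must be present to release the payload."""
    return LOCK_KEYS.issubset(cell_surface_markers)

print(barrel_opens({"MARKER_A", "MARKER_B", "MARKER_C"}))  # target cell: opens
print(barrel_opens({"MARKER_A"}))                          # healthy cell: stays shut
```

Requiring several markers at once is what lets the cage discriminate a small population of target cells even when the drug target itself is shared by healthy cells.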
The team has tested its technique in animals as well as cell cultures and said the ‘nanorobot attacked these [targets] with almost zero collateral damage.’ The method has many advantages over invasive surgery and blasts of drugs, which can be ‘as painful and damaging to the body as the disease itself,’ the team added.
An international team of scientists has sequenced the complete genome of the woolly mammoth. A US team is already attempting to study the animals' characteristics by inserting mammoth genes into elephant stem cells. They want to find out what made the mammoths different from their modern relatives and how their adaptations helped them survive the ice ages.
The new genome study has been published in the journal Current Biology. Dr Love Dalén, at the Swedish Museum of Natural History in Stockholm, told BBC News that the first ever publication of the full DNA sequence of the mammoth could help those trying to bring the creature back to life. "It would be a lot of fun (in principle) to see a living mammoth, to see how it behaves and how it moves," he said.
But he would rather his research was not used to this end. "It seems to me that trying this out might lead to suffering for female elephants and that would not be ethically justifiable."
Dr Dalén and the international group of researchers he is collaborating with are not attempting to resurrect the mammoth. But the Long Now Foundation, an organisation based in San Francisco, claims that it is. Now, with the publication of the complete mammoth genome, it could be a step closer to achieving its aim.
On its website, the foundation says its ultimate goal is "to produce new mammoths that are capable of repopulating the vast tracts of tundra and boreal forest in Eurasia and North America". The goal is not to make perfect copies of extinct woolly mammoths, but to focus on the mammoth adaptations needed for Asian elephants to live in the cold climate of the tundra.
Unlike their elephant cousins, woolly mammoths were creatures of the cold, with long hairy coats, thick layers of fat and small ears that kept heat loss to a minimum. For the first time, scientists have comprehensively catalogued the hundreds of genetic mutations that gave rise to these differences.
In the latest study, Vincent Lynch, an evolutionary geneticist at the University of Chicago, Illinois, and his team describe how they sequenced the genomes of three Asian elephants and two woolly mammoths (one died 20,000 years ago, another 60,000 years ago) to a very high quality. They found about 1.4 million DNA letters that differ between mammoths and elephants, which altered the sequence of more than 1,600 protein-coding genes. The study was posted on the biology preprint server bioRxiv.org on 23 April.
The mammoth genomes also contained extra copies of a gene that controls the production of fat cells and variations in genes linked to insulin signaling, which are in turn linked to diabetes and diabetes prevention. And several of the genes that differ between mammoths and elephants are involved in sensing heat and transmitting that information to the brain.
The team ‘resurrected’ the mammoth version of one of the heat-sensing genes, which encodes a protein called TRPV3 that is expressed in skin and also regulates hair growth. They spliced the gene sequence into the genomes of human cells in the lab and exposed them to different temperatures, revealing that the mammoth TRPV3 protein is less responsive to heat than the elephant version is. The result chimes with a previous finding that mice with a deactivated version of TRPV3 are more likely to spend time in colder parts of their cage compared with normal rodents, and boast wavier hair.
The next step, says Lynch, is to insert the same gene into elephant cells that have been chemically programmed to behave like embryonic cells, and so can be turned into a variety of cell types. Such induced pluripotent stem (iPS) cells could then be used to examine expression of mammoth proteins in different tissues. Lynch's team also plans to test the effects of other mammoth mutations in iPS cells.
Similar work is already being carried out in the lab of George Church, a geneticist at Harvard Medical School in Boston. Using a technology known as CRISPR/Cas9 that allows genes to be easily edited, his team claims to have engineered elephant cells that contain the mammoth version of 14 genes potentially involved in cold tolerance — although the team has not yet tested how this affects the elephant cells. Church plans to do these experiments in “organoids” created from elephant iPS cells.
The work, says Church, is a preamble to editing an entire woolly mammoth genome — and perhaps even resurrecting the woolly mammoth, or at least giving an Asian elephant enough mammoth genes to survive in the Arctic. The second option would be easier to do because it would require fewer mutations than the first option. A 16-square-kilometre reserve in north Siberia, known as Pleistocene Park, has even been proposed as a potential home for such a population of cold-tolerant elephants.
However, whether anyone would want to do such a thing is a different question, says Lynch, and evolutionary biologist Beth Shapiro agrees. In her book, she outlines the innumerable hurdles that stand in the way of breeding genetically modified ‘woolly elephants’ — from the ethics of applying reproductive technologies to an endangered species to the fact that the field of elephant reproductive biology is still immature.
How easy would it be to edit a human embryo using CRISPR? Very easy, experts say. “Any scientist with molecular biology skills and knowledge of how to work with embryos is going to be able to do this,” says Jennifer Doudna, a biologist at the University of California, Berkeley, who in 2012 co-discovered how to use CRISPR to edit genes.
To find out how it could be done, I visited the lab of Guoping Feng, a biologist at MIT’s McGovern Institute for Brain Research, where a colony of marmoset monkeys is being established with the aim of using CRISPR to create accurate models of human brain diseases. To create the models, Feng will edit the DNA of embryos and then transfer them into female marmosets to produce live monkeys. One gene Feng hopes to alter in the animals is SHANK3. The gene is involved in how neurons communicate; when it’s damaged in children, it is known to cause autism.
Feng said that before CRISPR, it was not possible to introduce precise changes into a primate’s DNA. With CRISPR, the technique should be relatively straightforward. The CRISPR system includes a gene-snipping enzyme and a guide molecule that can be programmed to target unique combinations of the DNA letters, A, G, C, and T; get these ingredients into a cell and they will cut and modify the genome at the targeted sites.
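For the commonly used Cas9 enzyme, ‘programming’ the guide amounts to choosing a 20-letter target that sits immediately next to an ‘NGG’ motif (the PAM) in the genome. A sketch of that lookup, checking the forward strand only (the sequence below is invented; real guide design also screens for off-target matches):

```python
import re

def find_cas9_targets(dna: str) -> list:
    """Return all 20-nt protospacers immediately upstream of an NGG PAM.

    Forward strand only; a lookahead is used so overlapping sites are found.
    """
    dna = dna.upper()
    return [m.group(1)
            for m in re.finditer(r"(?=([ACGT]{20})[ACGT]GG)", dna)]

toy = "ACGTACGTACGTACGTACGT" + "AGG"  # invented 20-nt target followed by a PAM
print(find_cas9_targets(toy))         # ['ACGTACGTACGTACGTACGT']
```

Any 20-letter stretch next to such a motif is in principle targetable, which is why Doudna's comment that the technique is easy to use rings true.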
But CRISPR is not perfect—and it would be a very haphazard way to edit human embryos, as Feng’s efforts to create gene-edited marmosets show. To employ the CRISPR system in the monkeys, his students simply inject the chemicals into a fertilized egg, which is known as a zygote—the stage just before it starts dividing.
Feng said the efficiency with which CRISPR can delete or disable a gene in a zygote is about 40 percent, whereas making specific edits, or swapping DNA letters, works less frequently—more like 20 percent of the time. Like a person, a monkey has two copies of most genes, one from each parent. Sometimes both copies get edited, but sometimes just one does, or neither. Only about half the embryos will lead to live births, and of those that do, many could contain a mixture of cells with edited DNA and without. If you add up the odds, you find you’d need to edit 20 embryos to get a live monkey with the version you want.
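The arithmetic behind that estimate can be sketched in a few lines. The first two probabilities are the ones Feng quotes; the roughly 50% chance of a non-mosaic animal is our assumption, chosen to show how the quoted odds multiply out to about 20 embryos:

```python
# Back-of-the-envelope arithmetic behind the "20 embryos" figure.
# The first two probabilities are quoted in the article; treating the
# chance of a non-mosaic animal as ~50% is our assumption.
p_specific_edit = 0.20   # a precise letter swap succeeds in the zygote
p_live_birth = 0.50      # an edited embryo leads to a live birth
p_non_mosaic = 0.50      # assumed: the animal is not a mosaic of cells

p_success = p_specific_edit * p_live_birth * p_non_mosaic
embryos_needed = 1 / p_success
print(f"~{embryos_needed:.0f} embryos per live monkey with the edit")
```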
That’s not an insurmountable problem for Feng, since the MIT breeding colony will give him access to many monkey eggs and he’ll be able to generate many embryos. However, it would present obvious problems in humans. Putting the ingredients of CRISPR into a human embryo would be scientifically trivial. But it wouldn’t be practical for much just yet. This is one reason that many scientists view such an experiment (whether or not it has really occurred in China) with scorn, seeing it more as a provocative bid to grab attention than as real science. Rudolf Jaenisch, an MIT biologist who works across the street from Feng and who in the 1970s created the first gene-modified mice, calls attempts to edit human embryos “totally premature.” He says he hopes these papers will be rejected and not published. “It’s just a sensational thing that will stir things up,” says Jaenisch. “We know it’s possible, but is it of practical use? I kind of doubt it.”
Among other problems, CRISPR can introduce off-target effects or change bits of the genome far from where scientists had intended. Any human embryo altered with CRISPR today would carry the risk that its genome had been changed in unexpected ways. But, Feng said, such problems may eventually be ironed out, and edited people will be born. “To me, it’s possible in the long run to dramatically improve health, lower costs. It’s a kind of prevention,” he said. “It’s hard to predict the future, but correcting disease risks is definitely a possibility and should be supported. I think it will be a reality.”
Elsewhere in the Boston area, scientists are exploring a different approach to engineering the germ line, one that is technically more demanding but probably more powerful. This strategy combines CRISPR with unfolding discoveries related to stem cells. Scientists at several centers, including Church’s, think they will soon be able to use stem cells to produce eggs and sperm in the laboratory. Unlike embryos, stem cells can be grown and multiplied. Thus they could offer a vastly improved way to create edited offspring with CRISPR. The recipe goes like this: First, edit the genes of the stem cells. Second, turn them into an egg or sperm. Third, produce an offspring.
Some investors got an early view of the technique on December 17, at the Benjamin Hotel in Manhattan, during commercial presentations by OvaScience. The company, which was founded four years ago, aims to commercialize the scientific work of David Sinclair, who is based at Harvard, and Jonathan Tilly, an expert on egg stem cells and the chairman of the biology department at Northeastern University (see “10 Emerging Technologies: Egg Stem Cells,”May/June 2012). It made the presentations as part of a successful effort to raise $132 million in new capital during January.
During the meeting, Sinclair, a velvet-voiced Australian whom Time last year named one of the “100 Most Influential People in the World,” took the podium and provided Wall Street with a peek at what he called “truly world-changing” developments. People would look back at this moment in time and recognize it as a new chapter in “how humans control their bodies,” he said, because it would let parents determine “when and how they have children and how healthy those children are actually going to be.”
The company has not perfected its stem-cell technology—it has not reported that the eggs it grows in the lab are viable—but Sinclair predicted that functional eggs were “a when, and not an if.” Once the technology works, he said, infertile women will be able to produce hundreds of eggs, and maybe hundreds of embryos. Using DNA sequencing to analyze their genes, they could pick among them for the healthiest ones.
Humans have been playing defense against viruses for much of history. Think about it: people mainly take action against a virus once it has already become a threat. Just recently, researchers have switched tactics and gone on the offensive. A team of scientists led by Simon Anthony of Columbia University released a study this month in the journal mBio estimating the number of novel viruses across all mammalian species at 320,000. Anthony and his colleagues' estimate of mammalian viruses is the first ever to be statistically supported. With this information, scientists may discover potentially dangerous viruses before they transition from wildlife to humans.
Roughly 70% of all new viruses infecting humans originated in other animals. Viruses that originate in mammals are particularly hazardous because they are easily transferable to people; their adaptation to other mammalian species allows them to skillfully "navigate our own warm-blooded bodies." Knowing just how many viruses may be lurking in mammals helps scientists assess the threat viruses pose to society. To calculate the number of viruses, the scientists studied one species of flying fox (a type of bat) known as Pteropus giganteus. Found in Bangladesh, the flying fox is a known carrier of multiple viruses, such as Nipah, and was therefore well suited for the study. The scientists repeatedly took biological samples - 1,900 in all - from bat populations over a five-year span. From those samples, fifty-five different viruses from nine viral families were identified; only five of them were previously known to science. Estimating that another three unknown viruses went undetected in the study, the researchers concluded that flying foxes alone harbor 58 viruses. If all 5,486 known species of mammals carried 58 different viruses, then the total number of undiscovered mammalian viruses is at least 320,000.
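The extrapolation itself is simple arithmetic, which we can reproduce. The raw product is 318,188, which the researchers round to the "at least 320,000" headline figure:

```python
# Reproducing the study's extrapolation step by step.
viruses_found = 55       # distinct viruses detected in the flying foxes
estimated_missed = 3     # viruses the sampling likely failed to catch
per_species = viruses_found + estimated_missed   # 58 per species
mammal_species = 5486    # known species of mammals

total_estimate = per_species * mammal_species
print(total_estimate)    # 318,188, reported as "at least 320,000"
```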
To be clear, 320,000 viruses is a very rough estimate. The scientists assumed that every mammal carries 58 viruses based on their findings with flying foxes. The problem with this figure is that flying foxes, and bats in general, are virus-friendly animals: the lifestyle factors that predispose bats to be good viral carriers include living in large communities and traveling and dispersing over long distances. It is unlikely that the 5,485 other mammal species each carry exactly 58 viruses, and the scientists are not sure by what factor the estimate could be off. Dr. Anthony explains: "It is very likely that 320,000 viruses under-estimates the actual number of viruses, but we have no way of knowing by how much. It is for this very reason that we need to expand and repeat these systematic studies, and only then will we be able to refine our estimations with greater confidence."
Before now, some scientists speculated that there were millions of undiscovered mammalian viruses. At 320,000, the number is far more reassuring and manageable, and now that the estimate has dropped from the millions to the hundreds of thousands, identifying each one is feasible. The researchers' goal is to track down and catalogue every single mammalian virus. Dr. Anthony predicts that all mammalian viruses could be identified over a 10-year period for a mere $6.3 billion. That may not sound like a small amount, but compared with the cost of pandemics it is not outlandish: the SARS outbreak alone cost $16 billion. For just $1.2 billion, the scientists estimate that 85% of mammalian viruses could be identified.
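Those budget figures imply steeply diminishing returns, which a two-line calculation makes explicit (the ~81% share is our derivation from the quoted costs, not a figure from the study):

```python
# The quoted budgets imply steeply diminishing returns: the final 15%
# of viruses accounts for roughly four fifths of the total cost.
cost_all = 6.3e9    # identify essentially all mammalian viruses
cost_85 = 1.2e9     # identify 85% of them
last_15_share = (cost_all - cost_85) / cost_all
print(f"last 15% of viruses: {last_15_share:.0%} of the budget")
```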
Research efforts in nanotechnology have significantly advanced the development of display devices. Graphene, an atomic layer of carbon that won scientists Andre Geim and Konstantin Novoselov the 2010 Nobel Prize in Physics, has emerged as a key component for flexible and wearable display devices. Owing to its fascinating electronic and optical properties and its high mechanical strength, graphene has mainly been used in touch screens for wearable devices such as mobiles. This technical advance has enabled devices such as smart watches, fitness bands and smart headsets to transition from science fiction into reality, even though the display is still a flat 2D one.
But wearable display devices, in particular devices with a floating display, will remain one of the most significant trends in the industry, which is projected to double every two years and exceed US$12 billion by 2018.
In a paper published in Nature Communications, we show how our technology realizes a wide-viewing-angle, full-color floating 3D display in graphene-based materials. Ultimately this will help transform wearable display devices into floating 3D displays.
A graphene-enabled floating display is based on the principle of holography, invented by Dennis Gabor, who was awarded the Nobel Prize in Physics in 1971. Optical holography provides a revolutionary method for recording and displaying both the 3D amplitude and the phase of an optical wave coming from an object of interest.
The physical realization of high-definition, wide-viewing-angle holographic 3D displays relies on the generation of a digital holographic screen composed of many small pixels. These pixels are used to bend light carrying the information for display. The bending angle is determined by the refractive index of the screen material, in accordance with the holographic correlation.
The smaller the refractive-index pixels, the larger the bending angle once the beam passes through the hologram, so nanometer-sized pixels are essential if the reconstructed 3D object is to be viewed vividly over a wide angle. The process is complex, but the key physical step is to control the photoreduction of graphene oxides, derivatives of graphene with an analogous physical structure but additional oxygen groups. Through a photoreduction process that involves no temperature increase, graphene oxides can be reduced toward graphene by absorbing a single femtosecond pulsed laser beam.
During the photoreduction, a change in the refractive index is created. Through such a photoreduction we are able to create holographically correlated refractive-index pixels at the nanometer scale. This technique enables the reconstructed floating 3D object to be viewed vividly and naturally over a wide angle of up to 52 degrees.
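The inverse relation between pixel size and bending angle can be illustrated with the familiar first-order grating equation (a simplification of ours; the paper's holographic-correlation model is more involved): sin θ = λ/Λ, where λ is the wavelength of the light and Λ is the pixel (grating) period. At λ ≈ 500 nm, a half-angle of θ ≈ 26° (a 52-degree full viewing angle) corresponds to Λ ≈ λ/sin θ ≈ 1.1 µm, so pushing the pixels toward the nanometer scale widens the viewing angle further still.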
This corresponds to a one-order-of-magnitude improvement in viewing angle over currently available 3D holographic displays based on liquid-crystal phase modulators, which are limited to a few degrees. In addition, the constant refractive-index change across the visible spectrum in reduced graphene oxides enables full-color 3D display.
Separating a singer’s voice from background music has always been a uniquely human ability. Not anymore.
The cocktail party effect is the ability to focus on a specific human voice while filtering out other voices or background noise. The ease with which humans perform this trick belies the challenge that scientists and engineers have faced in reproducing it synthetically. By and large, humans easily outperform the best automated methods for singling out voices. A particularly challenging cocktail party problem is in the field of music, where humans can easily concentrate on a singing voice superimposed on a musical background that includes a wide range of instruments. By comparison, machines are poor at this task.
Today, that looks to be changing thanks to the work of Andrew Simpson and pals at the University of Surrey in the U.K. These guys have used some of the most recent advances associated with deep neural networks to separate human voices from the background in a wide range of songs. Their approach showcases the huge advances that have been made in recent years in machine learning and neural networks. And it paves the way for a more general solution to the famous cocktail party problem which should allow, among other things, the vocals to be easily separated from the music they accompany.
The method these guys use is relatively straightforward. They start with a database of 63 songs that are available as a set of individual tracks that each contain a different instrument or voice, as well as the fully mixed version of the song. Simpson and co divide each track into 20-second segments and create a spectrogram for each that shows how the frequencies in the sound vary over time. The result is a kind of unique fingerprint that identifies the instrument or voice.
They also create a spectrogram of the fully mixed version of the song. This is essentially all of the component spectrograms added together.
The task of picking out a voice from this mixture is essentially the task of separating the voice’s unique spectrogram from the other spectrograms that are present. Simpson and co trained their deep convolutional neural network to do exactly that. They used 50 of these songs to train the network while keeping the remaining 13 to test it on. In total that generated more than 20,000 spectrograms for training purposes.
The task for the neural network was simple. As an input, they gave it the fully mixed spectrogram and expected it to produce, essentially, the vocal spectrogram as the output. The task in this kind of machine learning is one of parameter optimization. Their deep neural network has a billion parameters that need to be tuned in a way that produces the desired output. This process of optimization—or learning—occurs by iteration. So the network begins with these parameters set randomly and then gradually improves the settings each time it scans through the database, which it did over a hundred iterations.
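The tune-by-iteration loop described above can be illustrated with a one-parameter toy; the real network adjusts its billion parameters by the same principle, just at vastly greater scale, and all numbers here are made up:

```python
# Gradient descent on a single parameter w so that w * x matches a
# target output: a one-parameter caricature of network training.
x, target = 3.0, 6.0   # toy input and desired output
w = 0.0                # parameter starts at an arbitrary setting
lr = 0.01              # learning rate (step size)

for _ in range(500):
    error = w * x - target       # current output minus target
    w -= lr * (2 * error * x)    # step against the squared-error gradient

print(round(w, 3))  # settles near 2.0, since 2.0 * 3.0 = 6.0
```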
Having found a good setup for the network, Simpson and co then gave it the 13 songs it had not seen before to test how well it could separate the vocals from the mix. The outputs turned out to be impressive. “These results demonstrate that a convolutional deep neural network approach is capable of generalizing voice separation, learned in a musical context, to new musical contexts,” say the team.
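The separation target can be made concrete with a toy calculation. The Surrey network is trained to map the mixture spectrogram to the vocal spectrogram directly; an equivalent way to view this is as learning a time-frequency mask. The "spectrograms" below are random matrices standing in for real magnitude spectrograms, so every name and size is illustrative:

```python
import numpy as np

# Toy version of the separation target: recover the vocal spectrogram
# from a mixture by applying an ideal soft time-frequency mask.
rng = np.random.default_rng(0)
vocals = rng.random((128, 100))    # magnitude spectrogram of the voice
backing = rng.random((128, 100))   # magnitude spectrogram of the band
mixture = vocals + backing         # magnitudes add in this toy model

mask = vocals / (vocals + backing)  # the mask an oracle network would learn
estimate = mask * mixture           # masked mixture recovers the vocals

print(np.allclose(estimate, vocals))
```

In practice the mixture spectrogram is not an exact sum and the mask must be inferred from the mixture alone, which is precisely the hard part the deep network learns.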
At first glance, there is not the slightest doubt: to us, the universe looks three dimensional. But one of the most fruitful theories of theoretical physics in the last two decades is challenging this assumption. The "holographic principle" asserts that a mathematical description of the universe actually requires one fewer dimension than it seems. What we perceive as three dimensional may just be the image of two dimensional processes on a huge cosmic horizon.
Up until now, this principle has only been studied in exotic spaces with negative curvature. This is interesting from a theoretical point of view, but such spaces are quite different from the space in our own universe. Results obtained by scientists at TU Wien (Vienna) now suggest that the holographic principle even holds in a flat spacetime.
Gravitational phenomena are described in a theory with three spatial dimensions, the behavior of quantum particles is calculated in a theory with just two spatial dimensions - and the results of both calculations can be mapped onto each other. Such a correspondence is quite surprising: it is like finding out that equations from an astronomy textbook can also be used to repair a CD player. But the method has proven very successful. More than ten thousand scientific papers about Juan Maldacena's "AdS/CFT correspondence" have been published to date.
For theoretical physics this is extremely important, but it does not seem to have much to do with our own universe. Apparently, we do not live in such an anti-de Sitter space. These spaces have quite peculiar properties: they are negatively curved, and any object thrown away on a straight line will eventually return. "Our universe, in contrast, is quite flat - and on astronomic distances, it has positive curvature", says Daniel Grumiller.
However, Grumiller has suspected for quite some time that a correspondence principle could also hold for our real universe. To test this hypothesis, gravitational theories have to be constructed that do not require exotic anti-de Sitter spaces but live in a flat space. For three years he and his team at TU Wien (Vienna) have been working on this, in cooperation with the University of Edinburgh, Harvard, IISER Pune, MIT and the University of Kyoto. Grumiller and colleagues from India and Japan have now published an article in the journal Physical Review Letters confirming the validity of the correspondence principle in a flat universe.
Nine hundred kilometers off the east coast of Madagascar lies the tiny island paradise of Mauritius. The waters are pristine, the beaches bright white, and the average temperature hovers between 22°C and 28°C (72°F to 82°F) year-round. But conditions there may not have always been so idyllic. A new study suggests that about 4000 years ago, a prolonged drought on the island left many of the native species, such as dodo birds and giant tortoises, dead in a soup of poisonous algae and their own feces.
The die-off happened in an area known as Mare aux Songes, which once held a shallow lake that was an important source of fresh water for nonmigratory animals. Today, it’s just a grassy swamp, but beneath the surface, fossils are so common and so well preserved that the area qualifies as what scientists call a Lagerstätte, which in German means “storage space.” "What I wanted to know was, how did this drought cause this graveyard?” says Erik de Boer, a paleoecologist at the University of Amsterdam. “How did so many animals die?”
To find out, de Boer and colleagues analyzed sediment cores taken from the area. The layers in a core contain markers that can help scientists reconstruct an ecosystem’s history, such as preserved pollens and microbes. About 4200 years ago, monsoon activity declined dramatically, causing a 50-year megadrought on the island. The cores revealed that during the same time period, the ancient lake became a muddy, salty swamp. “Annually, the lake would get some fresh water in, however this drinking water turned foul during the dry season,” de Boer says.
Things got bad fairly quickly for local animals once the lake began to dry up, the team reports in the current issue of The Holocene. Sanitation appears to have become a major issue with so many animals crowding around the shrinking source of fresh water. “The animals lived around the edges, and the excrements probably got mixed up in the wetlands," de Boer says. "It’s like a big toilet.” Even worse, the researchers’ analysis shows that the feces-flooded waters encouraged the growth of single-celled algae and bacteria—diatoms and cyanobacteria—which can cause poisonous algal blooms. The circumstances combined to create what the scientists refer to as a “deadly cocktail” that they think killed many of the animals preserved as fossils at Mare aux Songes today.
Scientists have for the first time captured live images of the process of taste sensation on the tongue. The international team imaged single cells on the tongue of a mouse with a specially designed microscope system. "We've watched live taste cells capture and process molecules with different tastes," said biomedical engineer Dr Steve Lee, from the ANU Research School of Engineering.
There are more than 2,000 taste buds on the human tongue, which can distinguish at least five tastes: salty, sweet, sour, bitter and umami. However, the relationship between the many taste cells within a taste bud and our perception of taste has been a long-standing mystery, said Professor Seok-Hyun Yun from Harvard Medical School. "With this new imaging tool we have shown that each taste bud contains taste cells for different tastes," said Professor Yun.
The team also discovered that taste cells responded not only to molecules contacting the surface of the tongue, but also to molecules in the blood circulation. "We were surprised by the close association between taste cells and blood vessels around them," said Assistant Professor Myunghwan (Mark) Choi, from Sungkyunkwan University in South Korea. "We think that tasting might be more complex than we expected, and involve an interaction between the food taken orally and blood composition," he said.
The team imaged the tongue by shining a bright infrared laser on to the mouse's tongue, which caused different parts of the tongue and the flavor molecules to fluoresce. The scientists captured the fluorescence from the tongue with a technique known as intravital multiphoton microscopy. They were able to pick out the individual taste cells within each taste bud, as well as blood vessels up to 240 microns below the surface of the tongue. The breakthrough complements recent studies by other research groups that identified the areas in the brain associated with taste.
The team now hopes to develop an experiment to monitor the brain while imaging the tongue to track the full process of taste sensation. However to fully understand the complex interactions that form our basic sense of taste could take years, Dr Lee said. "Until we can simultaneously capture both the neurological and physiological events, we can't fully unravel the logic behind taste," he said.
The research has been published in the latest edition of Nature Publishing Group's Scientific Reports.
By comparing the genes of current-day North and South Americans with African and European populations, an Oxford University study has found the genetic fingerprints of the slave trade and colonization that shaped migrations to the Americas hundreds of years ago.
The study was published in Nature Communications.
'We found that the genetic profile of Americans is much more complex than previously thought,' said study leader Professor Cristian Capelli from the Department of Zoology. The research team analyzed DNA samples collected from people in Barbados, Colombia, the Dominican Republic, Ecuador, Mexico, Puerto Rico and African-Americans in the USA. They used a technique called haplotype-based analysis to compare the pattern of genes in these 'recipient populations' with 'donor populations' in the areas that migrants to the Americas came from.
One of the great challenges in molecular biology is to determine the three-dimensional structure of large biomolecules such as proteins. But this is a famously difficult and time-consuming task. The standard technique is x-ray crystallography, which involves analyzing the x-ray diffraction pattern from a crystal of the molecule under investigation. That works well for molecules that form crystals easily.
But many proteins, perhaps most, do not form crystals easily. And even when they do, they often take on unnatural configurations that do not resemble their natural shape. So finding another reliable way of determining the 3-D structure of large biomolecules would be a huge breakthrough. Today, Marcus Brubaker and a couple of pals at the University of Toronto in Canada say they have found a way to dramatically improve a 3-D imaging technique that has never quite matched the utility of x-ray crystallography.
The new technique is based on an imaging process called electron cryomicroscopy. This begins with a purified solution of the target molecule that is frozen into a thin film just a single molecule thick. This film is then photographed using a process known as transmission electron microscopy—it is bombarded with electrons and those that pass through are recorded. Essentially, this produces two-dimensional “shadowgrams” of the molecules in the film. Researchers then pick out the individual shadowgrams and use them to work out the three-dimensional structure of the target molecule.
This process is hard for a number of reasons. First, there is a huge amount of noise in each image so even the two-dimensional shadow is hard to make out. Second, there is no way of knowing the orientation of the molecule when the shadow was taken so determining the 3-D shape is a huge undertaking.
The standard approach to solving this problem is little more than guesswork. Dream up a potential 3-D structure for the molecule and then rotate it to see if it can generate all of the shadowgrams in the dataset. If not, change the structure, test it, and so on.
Obviously, this is a time-consuming process. The current state-of-the-art algorithm running on 300 cores takes two weeks to find the 3-D structure of a single molecule from a dataset of 200,000 images.
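The guess-and-check loop can be sketched in miniature: project a candidate 3D density along a few directions and score it against the noisy 2D shadowgrams. The names, tiny array sizes, and axis-aligned projections below are all illustrative simplifications of real projection matching:

```python
import numpy as np

# Toy projection matching: score candidate 3-D densities against
# noisy 2-D "shadowgrams" whose orientations are unknown.
rng = np.random.default_rng(1)
true_volume = rng.random((8, 8, 8))       # the unknown structure

def project(volume, axis):
    return volume.sum(axis=axis)          # parallel-beam shadowgram

# Observed data: projections of the true structure plus detector noise.
observations = [project(true_volume, ax) + 0.1 * rng.standard_normal((8, 8))
                for ax in (0, 1, 2)]

def score(candidate):
    # Each image's orientation is unknown, so match it to its best axis.
    return sum(min(np.sum((project(candidate, ax) - obs) ** 2)
                   for ax in (0, 1, 2))
               for obs in observations)

random_guess = rng.random((8, 8, 8))
print(score(true_volume) < score(random_guess))
```

Even in this toy, the search space of candidate volumes and orientations is what makes the real problem so expensive; the state-of-the-art algorithm mentioned above is effectively running an optimized version of this loop over 200,000 images.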
A group of Chinese scientists just reported that they modified the genome of human embryos, something that has never been done in the history of the world, according to a report in Nature News.
A recent biotech discovery - one that has been called the biggest biotech discovery of the century - showed how scientists might be able to modify a human genome when that genome was still just in an embryo.
This could change not only the genetic material of a person, but could also change the DNA they pass on, removing "bad" genetic codes (and potentially adding "good" ones) and taking an active hand in evolution.
After a report uncovered rumours that Chinese scientists were already working with this technology, concerned scientists published an argument that no one should edit the human genome in this way until the consequences are better understood.
But this new paper, published April 18 in the journal Protein and Cell by a Chinese group led by gene-function researcher Junjiu Huang of Sun Yat-sen University, shows that work has already been done, and Nature News spoke to a Chinese source that said at least four different groups are "pursuing gene editing in human embryos."
Specifically, the team tried to modify a gene in a non-viable embryo that would have been responsible for a deadly blood disorder. But they noted in the study that they encountered serious challenges, suggesting there are still significant hurdles before clinical use becomes a reality.
CRISPR, the technology that makes all this possible, can find bad sections of DNA and cut them and even replace them with DNA that doesn't code for deadly diseases, but it can also make unwanted substitutions. Its level of accuracy is still very low.
Huang's group successfully introduced the DNA they wanted in only "a fraction" of the 28 embryos that had been "successfully spliced" (they tried 86 embryos at the start and tested 54 of the 71 that survived the procedure). They also found a "surprising number of ‘off-target’ mutations," according to Nature News.
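Put as plain arithmetic, the counts Nature News reports work out as follows (the ~83% and ~52% rates are our derivation from the quoted numbers, not figures from the paper):

```python
# The attrition pipeline reported by Nature News, step by step.
injected = 86   # embryos injected with the CRISPR machinery
survived = 71   # embryos that survived the procedure
tested = 54     # survivors that were genetically tested
spliced = 28    # embryos successfully spliced at the target site

print(f"survival rate: {survived / injected:.0%}")         # ~83%
print(f"splice rate among tested: {spliced / tested:.0%}")  # ~52%
```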
Huang told Nature News that they stopped at that point because they knew that if they were to do this work medically, the success rate would need to be closer to 100 percent. Our understanding of CRISPR needs to develop significantly before we get there, but this is a new technology that is changing rapidly.
Even though the Chinese team worked with non-viable embryos, embryos that cannot result in a live birth, editing the human genome and changing the DNA of an embryo is considered ethically questionable, because it could lead to more uses of this technology in humans. Changing the DNA of viable embryos could have unpredictable results for future generations, and some researchers want us to understand this better before putting it into practice.
Still, many researchers think this technology (most don't think it's ready to be used yet) could be invaluable. It could eliminate genetic diseases like sickle cell anemia, Huntington's disease, and cystic fibrosis, all devastating illnesses caused by genes that could theoretically be removed. Others fear that once we can do this accurately, it will inevitably be used to create designer humans with specific desired traits. After all, even though this research is considered questionable now, it is still actively being experimented with.
Huang told Nature News that both Nature and Science rejected his paper on embryo editing, "in part because of ethical objections." Neither journal commented to Nature News on that statement. Huang plans to work on improving the accuracy of CRISPR in animal models for now. But CRISPR is reportedly quite easy to use, according to the scientists who previously argued against doing this research in embryos, meaning it is highly likely these experiments will continue.
Every time you make a memory, somewhere in your brain a tiny filament reaches out from one neuron and forms an electrochemical connection to a neighboring neuron. A team of biologists at Vanderbilt University, headed by Associate Professor of Biological Sciences Donna Webb, studies how these connections are formed at the molecular and cellular level.
The filaments that make these new connections are called dendritic spines and, in a series of experiments described in the April 17 issue of the Journal of Biological Chemistry, the researchers report that a specific signaling protein, Asef2, a member of a family of proteins that regulate cell migration and adhesion, plays a critical role in spine formation. This is significant because Asef2 has been linked to autism and the co-occurrence of alcohol dependency and depression.
"Alterations in dendritic spines are associated with many neurological and developmental disorders, such as autism, Alzheimer's disease and Down Syndrome," said Webb. "However, the formation and maintenance of spines is a very complex process that we are just beginning to understand."
Neuron cell bodies produce two kinds of long fibers that weave through the brain: dendrites and axons. Axons transmit electrochemical signals from the cell body of one neuron to the dendrites of another neuron. Dendrites receive the incoming signals and carry them to the cell body. This is the way that neurons communicate with each other.
As they wait for incoming signals, dendrites continually produce tiny flexible filaments called filopodia. These poke out from the surface of the dendrite and wave about in the region between the cells searching for axons. At the same time, biologists think that the axons secrete chemicals of an unknown nature that attract the filopodia. When one of the dendritic filaments makes contact with one of the axons, it begins to adhere and to develop into a spine. The axon and spine form the two halves of a synaptic junction. New connections like this form the basis for memory formation and storage.
The formation of spines is driven by actin, a protein that produces microfilaments and is part of the cytoskeleton. Webb and her colleagues showed that Asef2 promotes spine and synapse formation by activating another protein called Rac, which is known to regulate actin activity. They also discovered that yet another protein, spinophilin, recruits Asef2 and guides it to specific spines. "Once we figure out the mechanisms involved, then we may be able to find drugs that can restore spine formation in people who have lost it, which could give them back their ability to remember," said Webb.
Cardiff scientists have for the first time identified the potential root cause of asthma and an existing drug that offers a new treatment.