A new kind of cartogram for mapping population densities across the planet, from an article in the Telegraph. The cartograms use data from the Global Rural-Urban Mapping Project.
NOTE: To subscribe to the RSS feed of Amazing Science, copy http://www.scoop.it/t/amazing-science/rss.xml into the URL field of your browser and click "subscribe".
This newsletter is aggregated from over 1,450 news sources.
NOTE: All articles in the amazing-science newsletter can also be sorted by topic. To do so, click the FIND button (symbolized by the funnel at the top right of the screen) to display all relevant postings sorted by topic.
You can also type your own query; for example, search for articles involving "dna" as a keyword.
• 3D-printing • aging • AI • anthropology • art • astronomy • bigdata • bioinformatics • biology • biotech • chemistry • computers • cosmology • education • environment • evolution • future • genetics • genomics • geosciences • green-energy • history • language • map • material-science • math • med • medicine • microscopy • nanotech • neuroscience • paleontology • photography • photonics • physics • postings • robotics • science • technology • video
In the military world, fighter pilots have long been described as the best of the best. As Tom Wolfe famously wrote, only those with the "right stuff" can handle the job. Now, it seems, the right stuff may no longer be the sole purview of human pilots.
A pilot A.I. developed by a doctoral graduate from the University of Cincinnati has shown that it can not only beat other A.I.s, but also a professional fighter pilot with decades of experience. In a series of flight combat simulations, the A.I. successfully evaded retired U.S. Air Force Colonel Gene "Geno" Lee, and shot him down every time. Lee called it "the most aggressive, responsive, dynamic and credible A.I. I've seen to date."
And "Geno" is no slouch. He's a former Air Force Battle Manager and adversary tactics instructor. He's controlled or flown in thousands of air-to-air intercepts as mission commander or pilot. In short, the guy knows what he's doing. Plus he's been fighting A.I. opponents in flight simulators for decades. But he says this one is different. "I was surprised at how aware and reactive it was. It seemed to be aware of my intentions and reacting instantly to my changes in flight and my missile deployment. It knew how to defeat the shot I was taking. It moved instantly between defensive and offensive actions as needed."
The A.I., dubbed ALPHA, was developed by Psibernetix, a company founded by University of Cincinnati doctoral graduate Nick Ernest, in collaboration with the Air Force Research Laboratory. According to the developers, ALPHA was specifically designed for research purposes in simulated air-combat missions.
The secret to ALPHA's superhuman flying skills is a decision-making system called a genetic fuzzy tree, a subtype of fuzzy logic algorithms. The system approaches complex problems much like a human would, says Ernest, breaking the larger task into smaller subtasks, which include high-level tactics, firing, evasion, and defensiveness. By considering only the most relevant variables, it can make complex decisions with extreme speed. As a result, the A.I. can calculate the best maneuvers in a complex, dynamic environment, over 250 times faster than its human opponent can blink.
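ALPHA's actual decision trees are not public, but the idea behind a genetic fuzzy tree can be sketched with a toy example: small fuzzy-logic units, each consulting only a few variables, feed their outputs up the tree. All membership thresholds and rule outputs below are invented for illustration (in a system like ALPHA they would be tuned by a genetic algorithm):

```python
def clamp01(x):
    """Clamp a value into [0, 1] for use as a fuzzy membership degree."""
    return max(0.0, min(1.0, x))

def threat_level(distance_km, closing_speed_ms):
    """First fuzzy unit: fuse range and closing speed into a threat score in [0, 1]."""
    near = clamp01((10.0 - distance_km) / 10.0)   # fully 'near' at 0 km, gone by 10 km
    far = 1.0 - near
    fast = clamp01(closing_speed_ms / 300.0)      # fully 'fast' at 300 m/s
    slow = 1.0 - fast
    # Mamdani-style rules: AND = min; each rule's output is weighted by its firing strength
    w_high = min(near, fast)
    w_med = max(min(near, slow), min(far, fast))
    w_low = min(far, slow)
    return (w_high * 1.0 + w_med * 0.5 + w_low * 0.0) / (w_high + w_med + w_low)

def maneuver(distance_km, closing_speed_ms, own_energy):
    """Second tier of the tree: the threat sub-unit's output feeds a
    maneuver-selection unit. Returns the attack-vs-evade balance in [0, 1]."""
    threat = threat_level(distance_km, closing_speed_ms)
    attack = min(threat, own_energy)          # press the attack only with energy in hand
    evade = min(threat, 1.0 - own_energy)
    total = attack + evade
    return attack / total if total > 0 else 0.5
```

Because each unit only ever consults two or three inputs, evaluating the whole tree amounts to a handful of comparisons and multiplications, which is why such systems can make decisions far faster than approaches that weigh every variable at once.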
After hour-long combat missions against ALPHA, Lee says, "I go home feeling washed out. I'm tired, drained and mentally exhausted. AI has superhuman reflexes and there is no way to win. This may be artificial intelligence, but it represents a real challenge."
The results of the dogfight simulations are published in the Journal of Defense Management.
Google is one of the companies at the forefront of robotics and artificial intelligence research, and being in that position means it has the most to worry about. The idea of a robot takeover may still be an abstract, science-fictional concept to us, but Google has actually compiled a list of behaviors that would cause it great concern, both for efficiency and for safety in the future.
A vast ocean of water beneath the icy crust of Saturn’s moon Enceladus may be more accessible than previously thought, according to new research. A new study has revealed that near the moon’s poles, the ice covering Enceladus could be just two kilometers (one mile) thick—the thinnest known ice shell of any ocean-covered moon. The discovery not only changes scientists’ understanding of Enceladus’ structure, but also makes the moon a more appealing target for future exploration, according to the study’s authors.
Until recently, scientists saw Jupiter’s moon Europa as the moon most likely to yield new insights into worlds with ice-covered oceans, according to Gabriel Tobie, a planetary scientist at the Laboratory of Planetology and Geodynamics of CNRS, the University of Nantes, and the University of Angers in Nantes, France, and co-author of the new study.
Estimates of Europa’s ice shell thickness range from just a few kilometers to more than 20 kilometers (12 miles). By comparison, Enceladus’ ice was previously thought to be 20 to 60 kilometers (12 to 37 miles) thick. But the new study suggests that at its south pole, Enceladus’ ice is less than five kilometers (three miles) thick, and possibly as little as two.
The thinness of the ice opens up future mission possibilities, according to authors of the new study published in Geophysical Research Letters, a journal of the American Geophysical Union. With ice this thin, an orbiting probe could use radar to see what lies beneath the moon’s shell. Though substantial engineering challenges would have to be solved first, scientists could even land a probe on the moon itself to drill down through the ice and sample the water below, Tobie said. Other scientists have proposed that ocean-covered moons like Europa could harbor life, and getting a look at Enceladus’ oceans could help us understand whether life could exist there, according to the authors.
The study yielded a second unexpected result: Enceladus’ core is likely much hotter than previously thought. Ice acts as an insulator, keeping the moon’s global ocean warm, but a thinner ice shell lets more heat escape. To maintain the same liquid ocean beneath a thinner shell, Enceladus’ rocky core would have to generate much more heat than previously thought, according to the authors.
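The insulation argument can be made concrete with Fourier's law of heat conduction. The conductivity and temperature values below are rough textbook-style assumptions, not figures from the study; the point is only the scaling: for a fixed temperature drop, a shell a tenth as thick conducts ten times the heat, which the core must then resupply.

```python
# Fourier conduction through a uniform ice shell: flux per unit area q = k * dT / d
K_ICE = 2.2    # thermal conductivity of ice, W/(m*K) -- rough textbook value
DT = 198.0     # temperature drop across the shell, K (assumed ~75 K surface, ~273 K ocean)

def conductive_flux(thickness_m):
    """Heat flux (W/m^2) conducted through an ice shell of the given thickness."""
    return K_ICE * DT / thickness_m

q_thick = conductive_flux(20_000)   # old 20 km thickness estimate
q_thin = conductive_flux(2_000)     # new ~2 km south-pole estimate
```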
The amplitude and frequency of gravitational waves could reveal the initial mass of the seeds from which the first black holes grew when they formed 13 billion years ago, and provide further clues about what caused them and where they formed, the researchers said.
The research is being presented today (Monday, June 27, 2016) at the Royal Astronomical Society's National Astronomy Meeting in Nottingham, UK. It was funded by the Science and Technology Facilities Council, the European Research Council and the Belgian Interuniversity Attraction Poles Programme.
The study combined simulations from the EAGLE project - which aims to create a realistic simulation of the known Universe inside a computer - with a model to calculate gravitational wave signals.
Two detections of gravitational waves caused by collisions between supermassive black holes should be possible each year using space-based instruments such as the Evolved Laser Interferometer Space Antenna (eLISA) detector that is due to launch in 2034, the researchers said.
Mammalian hairs and avian feathers develop from a similar primordial structure called a 'placode': a local thickening of the epidermis with columnar cells that reduce their rate of proliferation and express very specific genes. This observation has puzzled evolutionary and developmental biologists for many years because birds and mammals are not sister groups: they evolved from different reptilian lineages. According to previous studies, however, reptile scales do not develop from an anatomical placode. This would imply that birds and mammals independently 'invented' placodes during their evolution.
In 2015, a team from Yale University (USA) published an article showing that scales, hairs and feathers share molecular signatures during their development. These results fueled an old debate between two schools. One defends that these molecular signatures suggest a common evolutionary origin of skin appendages, whereas the other proposes that the same genes are re-used for developing different skin appendages.
Now, Nicolas Di-Poï and Michel C. Milinkovitch at the Department of Genetics and Evolution of the UNIGE Faculty of Science and at the SIB put this long controversy to rest by demonstrating that scales in reptiles develop from a placode with all the anatomical and molecular signatures of avian and mammalian placodes. The two scientists finely observed and analysed the skin morphological and molecular characteristics during embryonic development in crocodiles, snakes and lizards. 'Our study not only provides new molecular data that complement the work of the American team but also reveals key microanatomical facts,' explains Michel Milinkovitch. 'Indeed, we have identified in reptiles new molecular signatures that are identical to those observed during the development of hairs and feathers, as well as the presence of the same anatomical placode as in mammals and birds. This indicates that the three types of skin appendages are homologous: the reptilian scales, the avian feathers and the mammalian hairs, despite their very different final shapes, evolved from the scales of their reptilian common ancestor.'
During their new study, the researchers from UNIGE and SIB also investigated the bearded dragon, a species of lizard that comes in three variants. The first is the normal wild-type form. The second has scales of reduced size because it bears one copy of a natural genetic mutation. The third has two copies of the mutation ... and lacks all scales. By comparing the genomes of these three variants, Di-Poï and Milinkovitch discovered the gene affected by this mutation. 'We identified that the peculiar look of these naked lizards is due to disruption of ectodysplasin-A (EDA), a gene whose mutations in humans and mice are known to generate substantial abnormalities in the development of teeth, glands, nails and hairs', says Michel Milinkovitch. The Swiss researchers have demonstrated that, when EDA is malfunctioning in lizards, they fail to develop a proper scale placode, exactly as mammals or birds carrying similar mutations in that same gene cannot develop proper hair or feather placodes. These data all coherently indicate the common ancestry of scales, feathers and hairs.
The next challenge for the Swiss team, and many other researchers around the world, is to decipher the fine mechanisms explaining the diversity of forms of skin appendages. How has the ancestral scaly skin given rise to the very different morphologies of scales, feathers and hairs, as well as the astonishing variety of forms that these appendages can take? These future studies will hopefully fine-tune our understanding of the physical and molecular mechanisms generating the complexity and the diversity of life during evolution.
Experiments confirm that the barium-144 nucleus is pear shaped and hint that this asymmetry is more pronounced than previously thought.
Most nuclei are round or slightly squashed, like a football. But in certain nuclei, protons and neutrons arrange in a more pear-shaped configuration. Only a handful of these distorted nuclei have been seen in experiments. Now, researchers have confirmed that barium-144 (144Ba) is a member of this exclusive club. Moreover, it may be more distorted than theorists expected, a finding that could challenge current nuclear structure models.
The most direct test of whether a nucleus is pear shaped is to look for so-called octupole transitions between nuclear states, which are suppressed in more symmetric nuclei. Using this method, researchers have confirmed that radium-224, radium-226, and a few other heavy nuclei are pear shaped. For decades, theorists have predicted that 144Ba, a relatively light nucleus, should also be asymmetric. But until now, there were no techniques that allowed a sufficient number of the short-lived barium isotopes to be prepared and studied before they decayed.
A team of scientists from the US, the UK, and France used Argonne National Lab’s CARIBU fission source and ATLAS accelerator to prepare a beam of 144Ba, which they collided with a lead foil to kick the nuclei into excited states. By analyzing the spectrum of gamma rays emitted by the nuclei, the researchers found that the strengths of several octupole transitions—and hence the distortion—were more than twice the values predicted by nuclear structure models. The finding might mean that these models need to be revised. But it’s too soon to say because the experimental uncertainty in the measured distortion is still large.
This research is published in Physical Review Letters.
Physicists in Innsbruck have realized the first quantum simulation of lattice gauge theories, building a bridge between high-energy theory and atomic physics. In the journal Nature, Rainer Blatt's and Peter Zoller's research teams describe how they simulated the creation of elementary particle pairs out of the vacuum by using a quantum computer.
Elementary particles are the fundamental building blocks of matter, and their properties are described by the Standard Model of particle physics. The discovery of the Higgs boson at CERN in 2012 constitutes a further step towards the confirmation of the Standard Model. However, many aspects of this theory are still not understood because their complexity makes it hard to investigate them with classical computers. Quantum computers may provide a way to overcome this obstacle, as they can simulate certain aspects of elementary particle physics in a well-controlled quantum system.
Physicists from the University of Innsbruck and the Institute for Quantum Optics and Quantum Information (IQOQI) at the Austrian Academy of Sciences have now done exactly that: In an international first, Rainer Blatt's and Peter Zoller's research teams have simulated lattice gauge theories in a quantum computer. They describe their work in the journal Nature.
Simulation of particle-antiparticle pairs using a quantum computer
Gauge theories describe the interaction between elementary particles, such as quarks and gluons, and they are the basis for our understanding of fundamental processes. "Dynamical processes, for example, the collision of elementary particles or the spontaneous creation of particle-antiparticle pairs, are extremely difficult to investigate," explains Christine Muschik, theoretical physicist at the IQOQI. "However, scientists quickly reach a limit when processing numerical calculations on classical computers. For this reason, it has been proposed to simulate these processes by using a programmable quantum system." In recent years, many interesting concepts have been proposed, but until now it was impossible to realize them. "We have now developed a new concept that allows us to simulate the spontaneous creation of electron-positron pairs out of the vacuum by using a quantum computer," says Muschik.
The quantum system consists of four electromagnetically trapped calcium ions that are controlled by laser pulses. "Each pair of ions represents a pair of a particle and an antiparticle," explains experimental physicist Esteban A. Martinez. "We use laser pulses to simulate the electromagnetic field in a vacuum. Then we are able to observe how particle pairs are created by quantum fluctuations from the energy of this field. By looking at the ions' fluorescence, we see whether particles and antiparticles were created. We are able to modify the parameters of the quantum system, which allows us to observe and study the dynamic process of pair creation."
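The model the Innsbruck team encoded in their ions, the one-dimensional Schwinger model with the gauge field eliminated via Gauss's law, is small enough at four sites to check on a classical computer. The sketch below uses illustrative coupling values, not the experiment's parameters; it diagonalizes the four-spin Hamiltonian and tracks how the particle-antiparticle density grows out of the Dirac vacuum:

```python
import numpy as np

# Single-site spin operators
I2 = np.eye(2)
sz = np.diag([1.0, -1.0])
sp = np.array([[0.0, 1.0], [0.0, 0.0]])   # sigma^+ (its transpose is sigma^-)

def embed(single, site, n):
    """Lift a single-site operator to the n-site Hilbert space."""
    out = np.array([[1.0]])
    for k in range(n):
        out = np.kron(out, single if k == site else I2)
    return out

N = 4                      # four spins, matching the four trapped ions
w, m, J = 1.0, 0.5, 1.0    # hopping, fermion mass, field energy (illustrative units)
dim = 2 ** N
Sz = [embed(sz, k, N) for k in range(N)]
Sp = [embed(sp, k, N) for k in range(N)]

# Schwinger-model Hamiltonian after eliminating the gauge field (zero background field)
H = np.zeros((dim, dim))
for n in range(N - 1):
    H += w * (Sp[n] @ Sp[n + 1].T + Sp[n].T @ Sp[n + 1])   # pair creation/annihilation
for n in range(N):
    H += 0.5 * m * (-1) ** n * Sz[n]                        # staggered mass term
for n in range(N - 1):
    L = sum(0.5 * (Sz[l] + (-1) ** l * np.eye(dim)) for l in range(n + 1))
    H += J * (L @ L)                                        # electric-field energy

# Dirac vacuum: spin down on even sites, up on odd sites (zero particles, zero field)
vac = np.array([1.0])
for n in range(N):
    vac = np.kron(vac, [0.0, 1.0] if n % 2 == 0 else [1.0, 0.0])

def particle_density(psi):
    """Density of particle-antiparticle excitations above the Dirac vacuum."""
    return sum(1.0 + (-1) ** n * np.real(psi.conj() @ Sz[n] @ psi)
               for n in range(N)) / (2 * N)

# Exact time evolution |psi(t)> = exp(-iHt)|vac> via diagonalization
evals, evecs = np.linalg.eigh(H)
def evolve(psi, t):
    return evecs @ (np.exp(-1j * evals * t) * (evecs.conj().T @ psi))

densities = [particle_density(evolve(vac, t)) for t in np.linspace(0.0, 3.0, 31)]
```

Starting from the bare vacuum, the density oscillates above zero as virtual pairs are created and annihilated, which is qualitatively the signal the experimenters read out from the ions' fluorescence.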
For a long time, biologists thought our DNA resided only in the control center of our cells, the nucleus.
Induced pluripotent stem cells were supposed to herald a medical revolution.
Shinya Yamanaka looked up in surprise at the postdoc who had spoken. “We have colonies,” Kazutoshi Takahashi said again. Yamanaka jumped from his desk and followed Takahashi to their tissue-culture room, at Kyoto University in Japan. Under a microscope, they saw tiny clusters of cells — the culmination of five years of work and an achievement that Yamanaka hadn't even been sure was possible.
Two weeks earlier, Takahashi had taken skin cells from adult mice and infected them with a virus designed to introduce 24 carefully chosen genes. Now, the cells had been transformed. They looked and behaved like embryonic stem (ES) cells — 'pluripotent' cells, with the ability to develop into skin, nerve, muscle or practically any other cell type. Yamanaka gazed at the cellular alchemy before him. “At that moment, I thought, 'This must be some kind of mistake',” he recalls. He asked Takahashi to perform the experiment again — and again. Each time, it worked.
Over the next two months, Takahashi narrowed down the genes to just four that were needed to wind back the developmental clock. In June 2006, Yamanaka presented the results to a stunned room of scientists at the annual meeting of the International Society for Stem Cell Research in Toronto, Canada. He called the cells 'ES-like cells', but would later refer to them as induced pluripotent stem cells, or iPS cells. “Many people just didn't believe it,” says Rudolf Jaenisch, a biologist at the Massachusetts Institute of Technology in Cambridge, who was in the room. But Jaenisch knew and trusted Yamanaka's work, and thought it was “ingenious”.
The cells promised to be a boon for regenerative medicine: researchers might take a person's skin, blood or other cells, reprogram them into iPS cells, and then use those to grow liver cells, neurons or whatever was needed to treat a disease. This personalized therapy would get around the risk of immune rejection, and sidestep the ethical concerns of using cells derived from embryos.
Ten years on, the goals have shifted — in part because those therapies have proved challenging to develop. The only clinical trial using iPS cells was halted in 2015 after just one person had received a treatment. But iPS cells have made their mark in a different way. They have become an important tool for modelling and investigating human diseases, as well as for screening drugs. Improved ways of making the cells, along with gene-editing technologies, have turned iPS cells into a lab workhorse — providing an unlimited supply of once-inaccessible human tissues for research. This has been especially valuable in the fields of human development and neurological diseases, says Guo-li Ming, a neuroscientist at Johns Hopkins University in Baltimore, Maryland, who has been using iPS cells since 2006.
The field is still experiencing growing pains. As more and more labs adopt iPS cells, researchers struggle with consistency. “The greatest challenge is to get everyone on the same page with quality control,” says Jeanne Loring, a stem-cell biologist at the Scripps Research Institute in La Jolla, California. “There are still papers coming out where people have done something remarkable with one cell line, and it turns out nobody else can do it,” she says. “We've got all the technology. We just need to have people use it right.”
DNA molecules don’t just code our genetic instructions. They can also conduct electricity and self-assemble into well-defined shapes, making them potential candidates for building low-cost nanoelectronic devices.
A team of researchers from Duke University and Arizona State University has shown how specific DNA sequences can turn these spiral-shaped molecules into electron “highways,” allowing electricity to more easily flow through the strand.
The results may provide a framework for engineering more stable, efficient and tunable DNA nanoscale devices, and for understanding how DNA conductivity might be used to identify gene damage. The study appears online June 20 in Nature Chemistry.
Scientists have long disagreed over exactly how electrons travel along strands of DNA, says David N. Beratan, professor of chemistry at Duke University and leader of the Duke team. Over longer distances, they believe electrons travel along DNA strands like particles, “hopping” from one molecular base or “unit” to the next. Over shorter distances, the electrons use their wave character, being shared or “smeared out” over multiple bases at once. But recent experiments led by Nongjian Tao, professor of electrical engineering at Arizona State University and co-author on the study, provided hints that this wave-like behavior could be extended to longer distances.
This result was intriguing, says Duke graduate student and study lead author Chaoren Liu, because electrons that travel in waves are essentially entering the “fast lane,” moving with more efficiency than those that hop.
“In our studies, we first wanted to confirm that this wave-like behavior actually existed over these lengths,” Liu said. “And second, we wanted to understand the mechanism so that we could make this wave-like behavior stronger or extend it to even longer distances.”
DNA strands are built like chains, with each link comprising one of four molecular bases whose sequence codes the genetic instructions for our cells. Using computer simulations, Beratan’s team found that manipulating these same sequences could tune the degree of electron sharing between bases, leading to wave-like behavior over longer or shorter distances. In particular, they found that alternating blocks of five guanine (G) bases on opposite DNA strands created the best construct for long-range wave-like electronic motions.
The team theorizes that creating these blocks of G bases causes them to all “lock” together so the wave-like behavior of the electrons is less likely to be disrupted by random wiggling in the DNA strand.
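One way to see why uniform G blocks favor wave-like transport is a toy tight-binding model: each base contributes one site whose energy depends on its identity, and how delocalized the eigenstates are can be scored with their inverse participation ratio (IPR). The site energies and coupling below are invented for illustration, not fitted values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical on-site energies (eV) for the four bases -- illustrative only
eps = {"G": 0.0, "A": 0.4, "T": 1.1, "C": 0.9}
t_hop = -0.3   # uniform nearest-neighbour electronic coupling (assumed)

def ipr(seq):
    """Mean inverse participation ratio of the eigenstates of a tight-binding
    chain built from the base sequence. Lower IPR = more delocalized states."""
    n = len(seq)
    H = (np.diag([eps[b] for b in seq])
         + np.diag([t_hop] * (n - 1), 1)
         + np.diag([t_hop] * (n - 1), -1))
    _, v = np.linalg.eigh(H)
    return float(np.mean(np.sum(v ** 4, axis=0)))

n = 40
g_chain = "G" * n                                     # uniform guanine block
random_chain = "".join(rng.choice(list("GATC"), size=n))  # mixed sequence
```

In the uniform G chain the eigenstates are extended, Bloch-like waves (low IPR); the energy mismatch between different bases in a mixed sequence scatters and localizes them, the tight-binding analogue of electrons dropping out of the "fast lane".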
Carsonella ruddii is a bacterium that lives symbiotically inside some insects. Its sheltered life has allowed it to reduce its genome to only about 160,000 base pairs. With fewer than 200 genes, it lacks some genes necessary for survival, but these are supplied by its insect host. In fact, Carsonella has such a small genome that biologists have conjectured it is losing its “bacterial” identity and turning into an organelle, its essential genes effectively absorbed into the host’s genome. This transition from bacterium to organelle has happened many times during evolutionary history; indeed, the mitochondrion responsible for energy production in human cells was once a free-roaming bacterium that our ancestors assimilated in the distant past.
At first, the researchers assumed that genes would shut down shortly after death, like the parts of a car that has run out of gas. What they found instead was that hundreds of genes ramped up. Although most of these genes upped their activity in the first 24 hours after the animals expired and then tapered off, in the fish some genes remained active 4 days after death.
Many of these postmortem genes are beneficial in emergencies; they perform tasks such as spurring inflammation, firing up the immune system, and counteracting stress. Other genes were more surprising. “What’s jaw-dropping is that developmental genes are turned on after death,” Noble says. These genes normally help sculpt the embryo, but they aren’t needed after birth. One possible explanation for their postmortem reawakening, the researchers say, is that cellular conditions in newly dead corpses resemble those in embryos.
The team also found that several genes that promote cancer became more active. That result could explain why people who receive transplants from the recently deceased have a higher risk of cancer, Noble says. He and his colleagues posted their results on the preprint server bioRxiv last week, and Noble says their paper is undergoing peer review at a journal.
“This is a rare study,” says molecular pharmacologist Ashim Malhotra of Pacific University in Hillsboro, Oregon, who wasn’t connected to the research. “It is important to understand what happens to organs after a person dies, especially if we are going to transplant them.” The team’s approach for measuring gene activity could be “used as a diagnostic tool for predicting the quality of a transplant.”
In an accompanying paper on bioRxiv, Noble and two colleagues demonstrated another possible use for gene activity measurements, showing that they can provide accurate estimates of the time of death. Those results impress forensic scientist David Carter of Chaminade University of Honolulu. Although making a time of death estimate is crucial for many criminal investigations, “we are not very good at it,” he says. Such estimates often rely on evidence that isn’t directly connected to the body, such as the last calls or texts on the victim’s cellphone. Noble and his colleagues, Carter says, have “established a technique that has a great deal of potential to help death investigation.”
A mouse or zebrafish doesn’t benefit, no matter which genes turn on after its death. The patterns of gene activity that the researchers observed may represent what happens when the complex network of interacting genes that normally keeps an organism functioning unwinds. Some genes may turn on, for example, because other genes that normally help keep them silent have shut off. By following these changes, researchers might be able to learn more about how these networks evolved, Noble says. “The headline of this study is that we can probably get a lot of information about life by studying death.”
Hydrogen is the most abundant element in the universe. It is also the simplest, sporting only a single electron in each atom. But that simplicity is deceptive, because there is still so much we have to learn about hydrogen.
One of the biggest unknowns is its transformation under the extreme pressures and temperatures found in the interiors of giant planets, where it is squeezed until it becomes liquid metal, capable of conducting electricity. New work published in Physical Review Letters by Carnegie's Alexander Goncharov and University of Edinburgh's Stewart McWilliams measures the conditions under which hydrogen undergoes this transition in the lab and finds an intermediate state between gas and metal, which they're calling "dark hydrogen."
On the surface of giant planets like Jupiter, hydrogen is a gas. But between this gaseous surface and the liquid metal hydrogen in the planet's core lies a layer of dark hydrogen, according to findings gleaned from the team's lab mimicry. Using a laser-heated diamond anvil cell to create the conditions likely to be found in gas giant planetary interiors, the team probed the physics of hydrogen under a range of pressures from 10,000 to 1.5 million times normal atmospheric pressure and up to 10,000 degrees Fahrenheit. They discovered this unexpected intermediate phase, which does not reflect or transmit visible light, but does transmit infrared radiation, or heat.
"This observation would explain how heat can easily escape from gas giant planets like Saturn," explained Goncharov. They also found that this intermediate dark hydrogen is somewhat metallic, meaning it can conduct an electric current, albeit poorly. This means that it could play a role in the process by which churning metallic hydrogen in gas giant planetary cores produces a magnetic field around these bodies, in the same way that the motion of liquid iron in Earth's core created and sustains our own magnetic field.
"This dark hydrogen layer was unexpected and inconsistent with what modeling research had led us to believe about the change from hydrogen gas to metallic hydrogen inside of celestial objects," Goncharov added.
The Department of Transportation’s Federal Aviation Administration has finalized the first operational rules (PDF) for routine commercial use of small unmanned aircraft systems (UAS, or “drones”), opening pathways towards fully integrating UAS into the nation’s airspace. These new regulations work to harness new innovations safely, to spur job growth, advance critical scientific research and save lives.
“We are part of a new era in aviation, and the potential for unmanned aircraft will make it safer and easier to do certain jobs, gather information, and deploy disaster relief,” said U.S. Transportation Secretary Anthony Foxx. “We look forward to working with the aviation community to support innovation, while maintaining our standards as the safest and most complex airspace in the world.”
According to industry estimates, the rule could generate more than $82 billion for the U.S. economy and create more than 100,000 new jobs over the next 10 years.
The new rule, which takes effect in late August, sets safety regulations for unmanned aircraft weighing less than 55 pounds that are conducting non-hobbyist operations.
The rule’s provisions are designed to minimize risks to other aircraft and people and property on the ground. The regulations require pilots to keep an unmanned aircraft within visual line of sight. Operations are allowed during daylight and during twilight if the drone has anti-collision lights. The new regulations also address height and speed restrictions and other operational limits, such as prohibiting flights over unprotected people on the ground who aren’t directly participating in the UAS operation.
The FAA is offering a process to waive some restrictions if an operator proves the proposed flight will be conducted safely under a waiver. The FAA will make an online portal available to apply for these waivers in the months ahead.
“With this new rule, we are taking a careful and deliberate approach that balances the need to deploy this new technology with the FAA’s mission to protect public safety,” said FAA Administrator Michael Huerta. “But this is just our first step. We’re already working on additional rules that will expand the range of operations.”
Under the final rule, the person actually flying a drone must be at least 16 years old and have a remote pilot certificate with a small UAS rating, or be directly supervised by someone with such a certificate. To qualify for a remote pilot certificate, an individual must either pass an initial aeronautical knowledge test at an FAA-approved knowledge testing center or have an existing non-student Part 61 pilot certificate. If qualifying under the latter provision, a pilot must have completed a flight review in the previous 24 months and must take a UAS online training course provided by the FAA. The TSA will conduct a security background check of all remote pilot applicants prior to issuance of a certificate.
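The conditions summarized above lend themselves to a simple checklist. The sketch below encodes this article's summary of the rule; the field names are invented for illustration, and the real Part 107 text has many more provisions and waiver paths:

```python
from dataclasses import dataclass

@dataclass
class Operation:
    """A proposed small-UAS flight, reduced to the conditions named in the article."""
    drone_weight_lb: float
    pilot_age: int
    has_remote_pilot_certificate: bool
    supervised_by_certificate_holder: bool
    visual_line_of_sight: bool
    daylight: bool
    twilight: bool
    anti_collision_lights: bool
    over_uninvolved_people: bool

def allowed_without_waiver(op):
    """Rough screen against the summarized rule; real operations need the full text."""
    checks = [
        op.drone_weight_lb < 55,                        # small UAS only
        op.pilot_age >= 16,                             # minimum age for the flying pilot
        op.has_remote_pilot_certificate
        or op.supervised_by_certificate_holder,         # certificate or direct supervision
        op.visual_line_of_sight,                        # aircraft kept in sight
        op.daylight or (op.twilight
                        and op.anti_collision_lights),  # day, or twilight with lights
        not op.over_uninvolved_people,                  # no flights over bystanders
    ]
    return all(checks)
```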
For 3 billion years, one of the major carriers of information needed for life, RNA, has had a glitch that creates errors when making copies of genetic information. Researchers at The University of Texas at Austin have developed a fix that allows RNA to accurately proofread for the first time. The new discovery, published June 23 in the journal Science, will increase precision in genetic research and could dramatically improve medicine based on a person's genetic makeup.
Certain viruses called retroviruses use RNA as a template to make copies of DNA, a process called reverse transcription. This process is notoriously prone to errors because the viral enzymes that carry it out never evolved the ability to accurately copy genetic material.
The new innovation engineered at UT Austin is an enzyme that performs reverse transcription but can also "proofread," or check its work while copying genetic code. The enzyme allows, for the first time, for large amounts of RNA information to be copied with near perfect accuracy.
"We created a new group of enzymes that can read the genetic information inside living cells with unprecedented accuracy," says Jared Ellefson, a postdoctoral fellow in UT Austin's Center for Systems and Synthetic Biology. "Overlooked by evolution, our enzyme can correct errors while copying RNA."
Reverse transcription is mainly associated with retroviruses such as HIV. In nature, these viruses' inability to copy DNA accurately may have helped create variety in species over time, contributing to the complexity of life as we know it.
Since discovering reverse transcription, scientists have used it to better understand genetic information related to inheritable diseases and other aspects of human health. Still, the error-prone nature of existing RNA sequencing is a problem for scientists.
"With proofreading, our new enzyme increases precision and fidelity of RNA sequencing," says Ellefson. "Without the ability to faithfully read RNA, we cannot accurately determine the inner workings of cells. These errors can lead to misleading data in the research lab and potential misdiagnosis in the clinical lab."
Ellefson and the team of researchers engineered the new enzyme using directed evolution to train a high-fidelity (proofreading) DNA polymerase to use RNA templates. The new enzyme, called RTX, retains the highly accurate and efficient proofreading function while copying RNA. Accuracy is improved at least threefold and may be up to 10 times higher. This new enzyme could enhance the methods used to read RNA from cells.
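To get a feel for what a threefold-to-tenfold fidelity gain means in practice, here is a rough back-of-the-envelope sketch in Python. The baseline error rate and transcript length are illustrative assumptions, not figures from the study:

```python
# Illustrative only: reverse-transcriptase error rates vary by enzyme and assay.
# A commonly cited ballpark for conventional RTs is roughly one error per
# 10,000-30,000 bases copied; we assume 1 in 20,000 here.
baseline_error_rate = 1 / 20_000   # assumed errors per base, conventional RT
transcript_length = 2_000          # bases in a typical mRNA (assumption)

def expected_errors(rate: float, length: int) -> float:
    """Expected number of misincorporations in one copied transcript."""
    return rate * length

for fold_improvement in (1, 3, 10):
    rate = baseline_error_rate / fold_improvement
    print(f"{fold_improvement:2d}x fidelity: "
          f"{expected_errors(rate, transcript_length):.3f} expected errors per copy")
```

At these assumed rates, a conventional enzyme averages one error in every ten 2,000-base transcripts, while a tenfold improvement pushes that down to one in a hundred, which is the difference the quote above describes between misleading and faithful sequencing data.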
A smart material that switches back and forth between transparent and opaque could be installed in buildings or automobiles, potentially reducing energy bills by avoiding the need for costly air conditioning.
Imagine a glass skyscraper in which all of the windows could go from clear to opaque at the flick of a switch, allowing occupants to regulate the amount of sunlight coming through the windows without having to rely on costly air conditioning or other artificial methods of temperature control.
Researchers at the University of Cambridge have developed a type of 'smart' glass that switches back and forth between transparent and opaque while using very little energy. The material, known as Smectic A composites, could be used in buildings, vehicles or displays.
Working with industrial partners including Dow Corning, the Cambridge researchers have been developing 'Smectic A' composites over the past two decades. The team, based at the Centre for Advanced Photonics and Electronics (CAPE), has made samples of Smectic A based glass, and is also able to produce it on a roll-to-roll process so that it can be printed onto plastic. It can be switched back and forth from transparent to opaque millions of times, and can be kept in either state for as long as the user wants.
The technique would help address problems that classical computers can't handle.
Physicists have performed the first full simulation of a high-energy physics experiment — the creation of pairs of particles and their antiparticles — on a quantum computer. If the team can scale it up, the technique promises access to calculations that would be too complex for an ordinary computer to deal with.
To understand exactly what their theories predict, physicists routinely do computer simulations. They then compare the outcomes of the simulations with actual experimental data to test their theories.
In some situations, however, the calculations are too hard to allow predictions from first principles. This is particularly true for phenomena that involve the strong nuclear force, which governs how quarks bind together into protons and neutrons and how these particles form atomic nuclei, says Christine Muschik, a theoretical physicist at the University of Innsbruck in Austria and a member of the simulation team.
Many researchers hope that future quantum computers will help to solve this problem. These machines, which are still in the earliest stages of development, exploit the physics of objects that can be in multiple states at once, encoding information in ‘qubits’, rather than in the on/off state of classical bits. A computer made of a handful of qubits can perform many calculations simultaneously, and can complete certain tasks exponentially faster than an ordinary computer.
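The exponential advantage comes down to bookkeeping: describing an n-qubit register classically takes 2^n complex amplitudes, so the cost of simulating it doubles with every added qubit. A minimal sketch of that scaling:

```python
# An n-qubit quantum state is described by 2**n complex amplitudes, so the
# memory needed to simulate it classically doubles with each added qubit.
BYTES_PER_AMPLITUDE = 16  # one complex number at double precision

def state_vector_bytes(n_qubits: int) -> int:
    """Memory needed to store the full state vector of an n-qubit register."""
    return (2 ** n_qubits) * BYTES_PER_AMPLITUDE

for n in (4, 30, 50):
    print(f"{n:2d} qubits -> {state_vector_bytes(n):,} bytes")
```

Four qubits — the size of the experiment described below — need only 256 bytes, but around 50 qubits the state vector already outstrips the memory of any classical machine, which is why even modest quantum hardware is interesting for simulation.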
Esteban Martinez, an experimental physicist at the University of Innsbruck, and his colleagues completed a proof of concept for a simulation of a high-energy physics experiment in which energy is converted into matter, creating an electron and its antiparticle, a positron.
The team used a tried-and-tested type of quantum computer in which an electromagnetic field traps four ions in a row, each one encoding a qubit, in a vacuum. They manipulated the ions’ spins — their magnetic orientations — using laser beams. This coaxed the ions to perform logic operations, the basic steps in any computer calculation.
After sequences of about 100 steps, each lasting a few milliseconds, the team looked at the state of the ions using a digital camera. Each of the four ions represented a location, two for particles and two for antiparticles, and the orientation of the ion revealed whether or not a particle or an antiparticle had been created at that location.
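The readout step described above can be sketched as a simple decoding rule. The site assignment and the "spin up = occupied" convention below are assumptions for illustration; the experiment's actual encoding may differ:

```python
# Hypothetical decoding of a 4-ion measurement into particle content.
# Assumed convention: ions 0 and 2 are "particle" sites, ions 1 and 3 are
# "antiparticle" sites, and spin 'u' (up) marks an occupied site. The real
# experiment's convention may differ -- this only illustrates the idea.
PARTICLE_SITES = (0, 2)
ANTIPARTICLE_SITES = (1, 3)

def decode(spins: str) -> dict:
    """spins: a 4-character string of 'u' (up) / 'd' (down), one per ion."""
    return {
        "particles":     sum(spins[i] == "u" for i in PARTICLE_SITES),
        "antiparticles": sum(spins[i] == "u" for i in ANTIPARTICLE_SITES),
    }

print(decode("udud"))  # -> {'particles': 2, 'antiparticles': 0}
```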
The team’s quantum calculations confirmed the predictions of a simplified version of quantum electrodynamics, the established theory of the electromagnetic force. “The stronger the field, the faster we can create particles and antiparticles,” Martinez says. He and his collaborators describe their results on 22 June in Nature.
Sperm whales share something fundamental with humans. Both of our species form groups with unique languages and traditions known as "cultures." A new study of sperm whale groups in the Caribbean suggests that these animals are shaped profoundly by their culture, which governs everything from hunting patterns to babysitting techniques. Whale researcher Shane Gero, who has spent thousands of hours with sperm whales, says that whale culture leads to behaviors that are "uncoupled from natural selection."
Gero and his colleagues recently published a paper on Caribbean whale culture in Royal Society Open Science, in which they describe the discovery of a new clan. Though this clan may have lived in the Caribbean for centuries, it's just coming to light now because sperm whales live and hunt in vast territories. This makes them hard to track. Like many scientists who study these wide-ranging creatures, Gero observes them by lowering specialized microphones into the water and recording the sounds they make to communicate.
Northwestern University's Ken Forbus is closing the gap between humans and machines. Using cognitive science theories, Forbus and his collaborators have developed a model that could give computers the ability to reason more like humans and even make moral decisions. Called the structure-mapping engine (SME), the new model is capable of analogical problem solving, including capturing the way humans spontaneously use analogies between situations to solve moral dilemmas.
"In terms of thinking like humans, analogies are where it's at," said Forbus, Walter P. Murphy Professor of Electrical Engineering and Computer Science in Northwestern's McCormick School of Engineering. "Humans use relational statements fluidly to describe things, solve problems, indicate causality, and weigh moral dilemmas."
The theory underlying the model is psychologist Dedre Gentner's structure-mapping theory of analogy and similarity, which has been used to explain and predict many psychological phenomena. Structure-mapping argues that analogy and similarity involve comparisons between relational representations, which connect entities and ideas, for example, that a clock is above a door or that pressure differences cause water to flow.
Analogies can be complex (electricity flows like water) or simple (his new cell phone is very similar to his old phone). Previous models of analogy, including prior versions of SME, have not been able to scale to the size of representations that people tend to use. Forbus's new version of SME can handle the size and complexity of relational representations that are needed for visual reasoning, cracking textbook problems, and solving moral dilemmas.
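To make "relational representation" concrete, here is a toy alignment in Python inspired by Gentner's classic water-flow/heat-flow example. This is not Forbus's SME, which enforces systematicity and one-to-one mapping constraints; it only illustrates how matching relation structure induces correspondences between entities:

```python
# A toy flavor of structure-mapping (not the real SME): relations are
# (predicate, arg1, arg2) tuples, and an analogy is a pairing of entities
# that makes relations in the base line up with relations in the target.
base   = [("greater", "pressure_A", "pressure_B"),
          ("flows",   "water", "pipe")]
target = [("greater", "temp_X", "temp_Y"),
          ("flows",   "heat", "bar")]

def align(base, target):
    """Pair entities by matching relations with the same predicate and arity."""
    mapping = {}
    for b_pred, *b_args in base:
        for t_pred, *t_args in target:
            if b_pred == t_pred and len(b_args) == len(t_args):
                for b_arg, t_arg in zip(b_args, t_args):
                    mapping.setdefault(b_arg, t_arg)
    return mapping

print(align(base, target))
# -> {'pressure_A': 'temp_X', 'pressure_B': 'temp_Y', 'water': 'heat', 'pipe': 'bar'}
```

Mapping "water" to "heat" because both stand in the same relational roles, rather than because they resemble each other, is the core intuition; scaling this matching to human-sized representations is what the new SME contributes.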
"Relational ability is the key to higher-order cognition," said Gentner, Alice Gabrielle Twight Professor in Northwestern's Weinberg College of Arts and Sciences. "Although we share this ability with a few other species, humans greatly exceed other species in ability to represent and reason with relations."
Supported by the Office of Naval Research, Defense Advanced Research Projects Agency, and Air Force Office of Scientific Research, Forbus and Gentner's research is described in the June 20 issue of the journal Cognitive Science. Andrew Lovett, a postdoctoral fellow in Gentner's laboratory, and Ronald Ferguson, a PhD graduate from Forbus's laboratory, also authored the paper.
Many artificial intelligence systems -- like Google's AlphaGo -- rely on deep learning, a process in which a computer learns by examining massive amounts of data. By contrast, people -- and SME-based systems -- often learn successfully from far fewer examples. In moral decision-making, for example, a handful of stories suffices to enable an SME-based system to learn to make decisions as people do in psychological experiments.
"Given a new situation, the machine will try to retrieve one of its prior stories, looking for analogous sacred values, and decide accordingly," Forbus said.
A new study published in Nature presents one of the most complete models of matter in the universe and predicts hundreds of massive black hole mergers each year observable with the second generation of gravitational wave detectors.
The model anticipated the massive black holes observed by the Laser Interferometer Gravitational-wave Observatory. The two colliding masses created the first directly detected gravitational waves and confirmed Einstein's general theory of relativity.
"The universe isn't the same everywhere," said Richard O'Shaughnessy, assistant professor in RIT's School of Mathematical Sciences, and co-author of the study led by Krzysztof Belczynski from Warsaw University. "Some places produce many more binary black holes than others. Our study takes these differences into careful account."
Massive stars that collapse upon themselves and end their lives as black holes, like the pair LIGO detected, are extremely rare, O'Shaughnessy said. They are less-evolved, "more primitive" stars that occur in special configurations in the universe. These stars from the early universe are made of more pristine hydrogen, a gas that makes them "Titans among stars" at 40 to 100 solar masses. In contrast, younger generations of stars consumed the corpses of their predecessors containing heavy elements, which stunted their growth.
"Because LIGO is so much more sensitive to these heavy black holes, these regions of pristine gas that make heavy black holes are extremely important," O'Shaughnessy said. "These rare regions act like factories for building identifiable pairs of black holes."
O'Shaughnessy and his colleagues predict that massive black holes like these spin in a stable way, with orbits that remain in the same plane. The model shows that the alignment of these massive black holes is impervious to the tiny kick that follows the stars' core collapse. The same kick can change the alignment of smaller black holes and rock their orbital plane.
The calculations reported in Nature are the most detailed of their kind ever performed, O'Shaughnessy said. He likens the model to a laboratory for assessing future prospects for gravitational wave astronomy. Other gravitational wave astronomers are now using the model in their own investigations as well.
"We've already seen that we can learn a lot about Einstein's theory and massive stars, just from this one event," said O'Shaughnessy, also a member of the LIGO Scientific Collaboration that helped make and interpret the first discovery of gravitational waves. "LIGO is not going to see 1,000 black holes like these each year, but many of them will be even better and more exciting because we will have a better instrument--better glasses to view them with and better techniques."
Every school kid learns the basic structure of the Earth: a thin outer crust, a thick mantle, and a Mars-sized core. But is this structure universal? Will rocky exoplanets orbiting other stars have the same three layers? New research suggests that the answer is yes - they will have interiors very similar to Earth. "We wanted to see how Earth-like these rocky planets are. It turns out they are very Earth-like," says lead author Li Zeng of the Harvard-Smithsonian Center for Astrophysics (CfA).
To reach this conclusion Zeng and his co-authors applied a computer model known as the Preliminary Reference Earth Model (PREM), which is the standard model for Earth's interior. They adjusted it to accommodate different masses and compositions, and applied it to six known rocky exoplanets with well-measured masses and physical sizes.
They found that the other planets, despite their differences from Earth, all should have a nickel/iron core containing about 30 percent of the planet's mass. In comparison, about a third of the Earth's mass is in its core. The remainder of each planet would be mantle and crust, just as with Earth.
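The kind of inference involved can be sketched with a much cruder model than PREM: treat a planet as two layers of constant density and solve the mixing relation 1/ρ_bulk = x/ρ_core + (1 − x)/ρ_mantle for the core mass fraction x. The layer densities below are assumptions for illustration, not values from the paper:

```python
# Simplistic two-layer planet: solve for the core mass fraction x from
#   1/rho_bulk = x/rho_core + (1 - x)/rho_mantle
# Constant layer densities are a crude assumption; the published work uses
# the pressure-dependent PREM profile instead.
RHO_CORE   = 10_000.0  # kg/m^3, assumed mean density of a nickel/iron core
RHO_MANTLE =  4_500.0  # kg/m^3, assumed mean density of a rocky mantle

def core_mass_fraction(rho_bulk: float) -> float:
    """Core mass fraction implied by a planet's bulk density (mass/volume)."""
    return (1 / rho_bulk - 1 / RHO_MANTLE) / (1 / RHO_CORE - 1 / RHO_MANTLE)

print(f"Earth-like bulk density 5514 kg/m^3 -> x = {core_mass_fraction(5514.0):.2f}")
```

Even this toy model recovers a core fraction of about a third for Earth's bulk density; PREM-style modeling refines the same idea with realistic equations of state, which is how a measured mass and radius constrain an exoplanet's interior.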
"We've only understood the Earth's structure for the past hundred years. Now we can calculate the structures of planets orbiting other stars, even though we can't visit them," adds Zeng.
The new code also can be applied to smaller, icier worlds like the moons and dwarf planets in the outer solar system. For example, by plugging in the mass and size of Pluto, the team finds that Pluto is about one-third ice, mostly water ice but also ammonia and methane ice varieties.
The model assumes that distant exoplanets have chemical compositions similar to Earth. This is reasonable based on the relative abundances of key chemical elements like iron, magnesium, silicon, and oxygen in nearby systems. However, planets forming in more or less metal-rich regions of the galaxy could show different interior structures. The team expects to explore these questions in future research.
The paper detailing this work, authored by Li Zeng, Dimitar Sasselov, and Stein Jacobsen (Harvard University), has been accepted for publication in The Astrophysical Journal and is available online.
First China conquered DNA sequencing. Now it wants to dominate precision medicine too.
Six years ago, China became the global leader in DNA sequencing — and it was all down to one company, BGI. The Shenzhen-based firm had just purchased 128 of the world's fastest sequencing machines and was said to have more than half the world's capacity for decoding DNA. It was assembling an army of upstart young bioinformaticians, collaborating with leading researchers worldwide and publishing the sequences of creatures ranging from ancient humans to the giant panda. The firm was quickly gaining a reputation as a brute-force genome factory — more brawn than brains, said some.
Six years later, the scene is quite different. BGI's most famous scientist and visionary leader, Jun Wang, left last July. The machine that had given the company its dominance is outdated, and the firm's attempt to develop its own industrial-scale whole-genome sequencer hit a roadblock last November, forcing it to lay off employees at its US subsidiary. Meanwhile, the competing system — Illumina's X series — has been selling briskly, raising the speed and dropping the price of sequencing worldwide.
Armed with the latest sequencers, rival companies to BGI have emerged. Most prominent of these is Novogene in Beijing, founded in 2011 by former BGI vice-president Ruiqiang Li. And although BGI might not have the uncontested dominance it once did, it still claims to have the world's largest sequencing capacity as well as major scientific ambitions — including to sequence the genomes of one million people, one million plants and animals and one million microbial ecosystems. Today, China is being reborn as a sequencing power with a broader base.
Fuelling the drive is a multibillion-dollar, 15-year precision-medicine initiative, which China announced in March and which rivals a similar initiative in the United States. If these efforts fulfil their goals, doctors envision being able to use a person's genome and physiology to pick the best treatments for his or her disease. The goal now for sequencing companies is to turn the bounty of genomic data into medical benefits. To do that, sequence data alone are not enough — so some Chinese companies are going beyond brute-force sequencing to work out how lifestyle factors such as diet are also important for understanding disease risk and for finding therapies. “The thing about China is the ambition they have for their precision-medicine programme is orders of magnitude larger than the United States',” says Hannes Smárason, chief operating officer and co-founder of WuXiNextCODE, a genomics company in Cambridge, Massachusetts, that is part of Shanghai-based WuXi AppTec. “They are dynamic and receptive. There, the idea of integrating genomics into health care is very real.”
The new energy behind sequencing is largely thanks to one machine: Illumina's HiSeq X Ten, so called because it is generally sold as sets of ten units. When the machine hit the market in 2014, one set was able to sequence a human genome for close to US$1,000, and power through some 18,000 human genomes per year. Companies that wanted to rival BGI saw an opportunity — and leapt.
Novogene was the first. Following a model similar to BGI's, Li has been building up a large staff of bioinformaticians to generate and interpret sequence data as part of collaborative basic-research projects on the snub-nosed monkey (Rhinopithecus roxellana)1, cotton (Gossypium hirsutum)2 and other plants and animals. Using the same machine, a handful of other companies — including WuXi PharmaTech and Cloud Health, both in Shanghai — focus more on offering sequencing as a service to pharmaceutical or personal-genomics companies.
The growth is accelerating. Novogene added a second X Ten set in April, and Cloud Health chief executive Jason Gang Jin says that the company will add another two sets this year. By the end of the year, China will probably have at least 70 units. (Illumina says that 300 units were sold worldwide by the end of last year.)
BGI has been trying to keep pace. In 2013, it purchased Complete Genomics in Mountain View, California, in a bid to create its own advanced sequencing machines for in-house use and for sale. The firm announced a system called Revolocity, its attempt to match the HiSeq X, last June. But in November, having taken just three orders, it suddenly suspended sales. BGI is now left with its ageing fleet of 128 Illumina HiSeq 2000 machines and a mélange of newer sequencers from various companies, including its own.
Estimates of China's share of the world's sequencing capacity range from 20% to 30% — still lower than when BGI was in its heyday, but expected to increase fast. “Sequencing capacity is rising rapidly everywhere, but it's rising more rapidly in China than anywhere else,” says Richard Daly, chief executive of DNAnexus in Mountain View, which supplies cloud platforms for large-scale genomics.
BGI has another machine up its sleeve. The BGISEQ-500 is designed as more of a desktop instrument for research labs. It is also based on the Complete Genomics technology and is set to begin shipping this year. Yiwu He, BGI's new global head of research, says that the system can sequence a human genome for $1,000, and by being smaller in scale and more flexible to use, it will meet China's emerging need for clinical sequencing. “There will be more sequencing done outside of research institutes, in the hospitals,” says He. The company will bring the price of one human genome sequence down to $200 in the next few years, he predicts boldly. “China is the most exciting place to do biomedical research.”
Histone proteins at the core of nucleosomes and their tails exert control over the exposure of genes for binding, as demonstrated in simulations by Rice researchers.
The protein complex that holds strands of DNA in compact spools partially disassembles itself to help genes reveal themselves to specialized proteins and enzymes for activation, according to Rice University researchers and their colleagues.
The team’s detailed computer models support the idea that DNA unwrapping and core protein unfolding are coupled, and that DNA unwrapping can happen asymmetrically to expose specific genes. The study of nucleosome disassembly by Rice theoretical biological physicist Peter Wolynes, former Rice postdoctoral researcher Bin Zhang, postdoctoral researcher Weihua Zheng and University of Maryland theoretical chemist Garegin Papoian appears in the Journal of the American Chemical Society. The research is part of a drive by Rice’s Center for Theoretical Biological Physics (CTBP) to understand the details of DNA’s structure, dynamics and function.
The spools at the center of nucleosomes, the fundamental unit of DNA organization, are histone protein core complexes. Nucleosomes are buried deep within a cell’s nucleus. About 147 DNA base pairs (from the more than 3 billion in the human genome) wrap around each histone core 1.7 times. The double helix moves on to spiral around the next core, and the next, with linker sections of 20 to 90 base pairs in between. The structure helps squeeze a 6-foot-long strand of DNA in each cell into as compact a form as possible while facilitating the controlled exposure of genes along the strand for protein expression.
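The figures in this paragraph combine into a quick order-of-magnitude estimate of how many nucleosomes package one copy of the genome (a rough sketch; real linker lengths vary and some DNA is nucleosome-free):

```python
# Rough estimate of nucleosome count from the figures quoted above.
GENOME_BP  = 3_000_000_000   # base pairs in one copy of the human genome (approx.)
WRAPPED_BP = 147             # bp wrapped around each histone core
LINKER_BP  = 55              # midpoint of the 20-90 bp linker range (assumption)

repeat_bp = WRAPPED_BP + LINKER_BP        # bp consumed per nucleosome repeat
n_nucleosomes = GENOME_BP // repeat_bp
print(f"~{n_nucleosomes:,} nucleosomes per genome copy")
```

With these assumptions the estimate comes out around 15 million nucleosomes per genome copy, which conveys why the controlled, local unwrapping that the simulations probe matters: genes must be exposed one spool at a time, not by unpacking the whole strand.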
The spools consist of two pairs of heterodimers, macromolecules that join to form the core. The core is stable until genes along the DNA are called upon by transcription factors or RNA polymerases; the researchers’ goal was to simulate what happens as the DNA unwinds from the core, making itself available to bind to outside proteins or make contact with other genes along the strand.
The researchers used their energy landscape models to simulate the nucleosome disassembly mechanism based on the energetic properties of its constituent DNA and proteins. The landscape maps the energies of all the possible forms a protein can take as it folds and functions. Conceptual insights from energy landscape theory have been implemented in an open-source biomolecular modeling framework called AWSEM Molecular Dynamics, which was jointly developed by the Papoian and Wolynes groups.
Wolynes said most studies elsewhere treated the histone core as if it were rigid and irreversibly disassociated when DNA unwrapped. But more recent experimental studies that involved gently pulling strands of DNA or used fluorescence resonance energy transfer (FRET), which measures energy moving between two molecules, showed the protein core is flexible and does not completely disassemble during unwrapping.
In their simulations, the researchers found the core changed its shape as the DNA unwound. Without DNA, they found the histone core was completely unstable in physiological conditions.
Their simulations showed that histone tails – the terminal regions of the core proteins – play a crucial role in nucleosome stability. The tails are highly charged and bind tightly with DNA, keeping its genomic content from being exposed until necessary. Their models predicted a faster unwrapping for tail-less nucleosomes, as seen in experiments.
The nucleosome study is part of a larger effort both by Papoian at Maryland and by Wolynes with his colleagues at CTBP to understand the mechanics of DNA, from how it functions to how it reproduces during mitosis. Wolynes said the new study and another new one by his lab on DNA during mitosis represent the opposite ends of the size scale.
“We can understand things at each end of the scale, but there’s a no-man’s land in between,” he said. “We’ll have to see whether the phenomena in the present-day no-man’s land can be understood. I don’t believe in magic; I believe they eventually will.”
Last year, biophysicist Moh El-Naggar and his graduate student Yamini Jangir plunged beneath South Dakota’s Black Hills into an old gold mine that is now more famous as a home to a dark matter detector. Unlike most scientists who make pilgrimages to the Black Hills these days, El-Naggar and Jangir weren’t there to hunt for subatomic particles. They came in search of life.
The electricity-eating microbes that the researchers were hunting for belong to a larger class of organisms that scientists are only beginning to understand. They inhabit largely uncharted worlds: the bubbling cauldrons of deep sea vents; mineral-rich veins deep beneath the planet’s surface; ocean sediments just a few inches below the deep seafloor. The microbes represent a segment of life that has been largely ignored, in part because their strange habitats make them incredibly difficult to grow in the lab.
Yet early surveys suggest a potential microbial bounty. A recent sampling of microbes collected from the seafloor near Catalina Island, off the coast of Southern California, uncovered a surprising variety of microbes that consume or shed electrons by eating or breathing minerals or metals. El-Naggar’s team is still analyzing their gold mine data, but he says that their initial results echo the Catalina findings. Thus far, whenever scientists search for these electron eaters in the right locations — places that have lots of minerals but not a lot of oxygen — they find them.
As the tally of electron eaters grows, scientists are beginning to figure out just how they work. How does a microbe consume electrons out of a piece of metal, or deposit them back into the environment when it is finished with them? A study published last year revealed the way that one of these microbes catches and consumes its electrical prey. And not-yet-published work suggests that some metal eaters transport electrons directly across their membranes — a feat once thought impossible.
Though eating electricity seems bizarre, the flow of current is central to life. All organisms require a source of electrons to make and store energy. They must also be able to shed electrons once their job is done. In describing this bare-bones view of life, Nobel Prize-winning physiologist Albert Szent-Györgyi once said, “Life is nothing but an electron looking for a place to rest.”
The microbes’ apparent ability to ingest electrons — known as direct electron transfer — is particularly intriguing because it seems to defy the basic rules of biophysics. The fatty membranes that enclose cells act as an insulator, creating an electrically neutral zone once thought impossible for an electron to cross. “No one wanted to believe that a bacterium would take an electron from inside of the cell and move it to the outside,” said Kenneth Nealson, a geobiologist at the University of Southern California, in a lecture to the Society for Applied Microbiology in London last year.
In the 1980s, Nealson and others discovered a surprising group of bacteria that can expel electrons directly onto solid minerals. It took until 2006 to discover the molecular mechanism behind this feat: A trio of specialized proteins sits in the cell membrane, forming a conductive bridge that transfers electrons to the outside of the cell.
Ord and his UNSW colleague Georgina Cooke analyzed the evolutionary relationships between fish species with out-of-water adaptations, and also looked at the ecological and evolutionary conditions that might inspire fish to move from water to land.
The researchers identified 33 fish families with at least one species that showcases amphibious tendencies. They published their findings in the journal Evolution. "These forays onto land have occurred in fish that live in different climates, eat different diets and live in a range of aquatic environments, from freshwater rivers to the ocean," said Ord. "While many species only spend a short time out of water, others, like mudskippers and some eels, can last for hours or days."
The new study also documents a unique group of intertidal fish called blennies, which includes several species that hop around on land full-time as adults, staying within the vicinity of crashing waves and hiding in the crevices of wet rocks at low tide. "In this one family of fish alone, an amphibious lifestyle appears to have evolved repeatedly, between three and seven times," added Ord.