Creating matter from light? It sounds like something out of science fiction, right? Well scientists have managed to take something that seems improbable and devise an experiment to make it possible.
Creating matter from light with today's high-tech methods would physically confirm a theory first put forward by the physicists Gregory Breit and John Wheeler in 1934.
Breit and Wheeler suggested that it should, in theory, be possible to turn light into matter by smashing together only two particles of light, called photons, in order to create an electron and a positron. Yet this has never physically been demonstrated.
To turn theory into practice, the researchers devised a method that could potentially turn light into matter: a "photon-photon collider" that would convert light directly into matter using technology that is already available. The experiment would recreate a process that was important in the first 100 seconds of the universe.
The experiment itself involves two key steps. First, the researchers would use an extremely powerful, high-intensity laser in order to speed up electrons to just below the speed of light. Then, they would fire these electrons into a slab of gold. This would create a beam of photons about a billion times more energetic than visible light. After that, the researchers would fire a high-energy laser at the inner surface of a tiny gold can called a hohlraum in order to create a thermal radiation field and generate light similar to the light emitted by stars. Then, they would need to direct the photon beam from the first stage of the experiment through the center of the can, causing the photons from the two sources to collide and form matter.
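A rough sense of why these two stages suffice comes from the pair-production threshold: for a head-on collision of two photons with energies E1 and E2, an electron-positron pair can be made only if E1·E2 exceeds the square of the electron's rest energy. The sketch below is a hedged back-of-envelope check with illustrative photon energies, not the experiment's actual parameters.

```python
# Back-of-envelope pair-production threshold (illustrative numbers only).
# Head-on two-photon collision: center-of-mass energy squared is
# s = 4*E1*E2, and a pair requires s >= (2*m_e*c^2)^2, i.e.
# E1 * E2 >= (m_e c^2)^2 = (0.511 MeV)^2.

ME_C2_EV = 0.511e6  # electron rest energy, in eV

def pair_production_possible(e1_ev, e2_ev):
    """True if a head-on collision of these two photons is above threshold."""
    return e1_ev * e2_ev >= ME_C2_EV ** 2

# Gamma beam: ~a billion times more energetic than a ~2 eV visible photon.
gamma_ev = 2.0 * 1e9   # ~2 GeV
# Hohlraum thermal X-ray photon: of order 1 keV (assumed for illustration).
xray_ev = 1.0e3

print(pair_production_possible(gamma_ev, xray_ev))   # comfortably above threshold
print(pair_production_possible(2.0, 2.0))            # two visible photons: far below
```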
"Despite all physicists accepting the theory to be true, when Breit and Wheeler first proposed the theory, they said that they never expected it to be shown in the laboratory," said Steve Rose, one of the researchers, in a news release. "Today, nearly 80 years later, we prove them wrong. What was so surprising to us was the discovery of how we can create matter directly from light using the technology that we have today. As we are theorists, we are now talking to others who can use our ideas to undertake this landmark experiment."
While this remains a theoretical proposal, it could certainly be put into practice. Researchers are now looking for ways to perform the experiment and confirm the prediction.
By itself, graphene is too conductive, while boron nitride nanotubes are too insulating, but combining them could create a workable digital switch, which can be used to control electrons in computers and other electronic devices.
To create this serendipitous super-hybrid, Yoke Khin Yap, a professor of physics at Michigan Technological University, and his team exfoliated (peeled off) graphene (from graphite) and modified the material’s surface with tiny pinholes, then grew the nanotubes up and through the pinholes — like a plant randomly poking up through a crack in a concrete pavement. That formed a “band gap” mismatch, which created “a potential barrier that stops electrons,” he said. In other words, a switch.
The band gap mismatch results from the materials’ structure: graphene’s flat sheet conducts electricity quickly, and the atomic structure in the nanotubes halts electric currents. This disparity creates a barrier, caused by the difference in electron movement as currents move next to and past the hair-like boron nitride nanotubes. These points of contact between the materials, called heterojunctions, are what make the digital on/off switch possible.
Yap and his research team have also shown that because the materials are so effective at conducting or blocking electricity, respectively, the resulting switching ratio is high: how fast the materials can turn on and off is several orders of magnitude better than in current graphene switches. This speed could eventually quicken the pace of electronics and computing.
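To see why a clean potential barrier gives such a large switching ratio, consider a simple thermionic picture (an illustrative model, not the paper's analysis): off-state leakage across a barrier of height dE is suppressed roughly as exp(-dE/kT), so even a modest barrier yields an enormous on/off ratio at room temperature.

```python
import math

# Illustrative thermionic estimate (assumed model, not from the paper):
# leakage across a barrier of height dE scales as exp(-dE / kT), so the
# on/off current ratio scales as exp(+dE / kT).
KT_ROOM_EV = 0.0259  # thermal energy at ~300 K, in eV

def on_off_ratio(barrier_ev, kt_ev=KT_ROOM_EV):
    """Rough on/off current ratio for an idealized barrier switch."""
    return math.exp(barrier_ev / kt_ev)

for dE in (0.1, 0.5, 1.0):
    print(f"barrier {dE} eV -> on/off ~ {on_off_ratio(dE):.2e}")
```

Even a half-electronvolt barrier gives a ratio above a hundred million in this idealized picture, which is why heterojunction barriers make effective digital switches.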
Yap says this study is a continuation of past research into making transistors without semiconductors. The problem with semiconductors like silicon is that they can only get so small, and they give off a lot of heat; the use of graphene and nanotubes bypasses those problems. In addition, the graphene and boron nitride nanotubes have the same atomic arrangement pattern, or lattice matching. With their aligned atoms, the graphene-nanotube digital switches could avoid the issues of electron scattering.
“You want to control the direction of the electrons,” Yap explains, comparing the challenge to a pinball machine that traps, slows down and redirects electrons. “This is difficult in high speed environments, and the electron scattering reduces the number and speed of electrons.”
The journal Scientific Reports recently published their work in an open-access paper.
Brain-controlled prostheses sample a few hundred neurons to estimate motor commands that involve millions of neurons. So tiny sampling errors can reduce the precision and speed of thought-controlled keypads. A Stanford technique can analyze this sample and quickly make dozens of corrective adjustments to make thought control more precise.
An interdisciplinary team led by Stanford electrical engineer Krishna Shenoy has developed a technique to improve brain-controlled prostheses. These brain-computer-interface (BCI) devices, for people with neurological disease or spinal cord injury, deliver thought commands to devices such as virtual keypads, bypassing the damaged area.
The new technique addresses a problem with these brain-controlled prostheses: they currently access a sample of only a few hundred neurons, so tiny errors in the sample — neurons that fire too fast or too slow — reduce the precision and speed of thought-controlled keypads.
In essence, the new prostheses analyze the neuron sample and quickly make dozens of corrective adjustments to the estimate of the brain's electrical pattern.
Shenoy’s team tested a brain-controlled cursor meant to operate a virtual keyboard. The system is intended for people with paralysis and amyotrophic lateral sclerosis (ALS), also called Lou Gehrig’s disease, a condition that Stephen Hawking has. ALS degrades one’s ability to move.
The new corrective technique is based on a recently discovered understanding of how monkeys naturally perform arm movements. The researchers studied animals that were normal in every way. The monkeys used their arms, hands and fingers to reach for targets presented on a video screen. The researchers sought to learn, through hundreds of experiments, what the electrical patterns from the 100- to 200-neuron sample looked like during a normal reach — to understand the “brain dynamics” underlying reaching arm movements.
“These brain dynamics are analogous to rules that characterize the interactions of the millions of neurons that control motions,” said Jonathan Kao, a doctoral student in electrical engineering and first author of the open-access Nature Communications paper on the research. “They enable us to use a tiny sample more precisely.”
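The Stanford decoder is considerably more sophisticated, but the core idea of letting a model of reach dynamics correct a noisy neural readout can be sketched in a few lines. Everything below is hypothetical: the signal, the noise level, and the one-line blend that stands in for a proper Kalman-style update.

```python
import numpy as np

# Minimal sketch (hypothetical parameters, not the paper's decoder):
# a noisy cursor-velocity readout from a small neuron sample is blended
# with a dynamical model's prediction, so single-bin firing-rate errors
# are corrected instead of being passed straight to the cursor.

rng = np.random.default_rng(0)
true_vel = np.sin(np.linspace(0, np.pi, 50))    # idealized reach velocity profile
raw = true_vel + rng.normal(0, 0.5, 50)         # few-neuron estimate: noisy

alpha = 0.7                                     # dynamics weight (assumed)
smoothed = np.zeros_like(raw)
for t in range(1, len(raw)):
    # blend the model's prediction (carry the previous velocity forward)
    # with the new neural observation -- a stand-in for a Kalman update
    smoothed[t] = alpha * smoothed[t - 1] + (1 - alpha) * raw[t]

print(f"raw MAE {np.abs(raw - true_vel).mean():.2f}, "
      f"smoothed MAE {np.abs(smoothed - true_vel).mean():.2f}")
```

The smoothed trajectory tracks the underlying reach far better than the raw per-bin readout, which is the intuition behind using brain dynamics to "correct" a tiny neural sample.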
The Navier-Stokes equations of fluid flow, used to model ocean currents, weather patterns and other phenomena, lie at the heart of one of the seven most important problems in modern mathematics.
Now, in a paper posted online on February 3, Terence Tao of the University of California, Los Angeles, a winner of the Fields Medal, mathematics’ highest honor, offers a possible way to break the impasse. He has shown that in an alternative abstract universe closely related to the one described by the Navier-Stokes equations, it is possible for a body of fluid to form a sort of computer, which can build a self-replicating fluid robot that keeps transferring its energy to smaller and smaller copies of itself until the fluid “blows up.” As strange as it sounds, it may be possible, Tao proposes, to construct the same kind of self-replicator in the case of the true Navier-Stokes equations. If so, this fluid computer would settle a question that the Clay Mathematics Institute in 2000 dubbed one of the seven most important problems in modern mathematics, and for which it offered a million-dollar prize. Is a fluid governed by the Navier-Stokes equations guaranteed to flow smoothly for all time, the problem asks, or could it eventually hit a “blowup” in which something physically impossible happens, such as a non-zero amount of energy concentrated into a single point in space?
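For reference (standard textbook form, not taken from the article), the incompressible Navier-Stokes equations for a velocity field u and pressure p, together with the energy identity that would-be regularity proofs try to exploit, are:

```latex
% Incompressible Navier-Stokes: velocity u(x,t), pressure p(x,t), viscosity \nu > 0
\partial_t u + (u \cdot \nabla) u = -\nabla p + \nu \,\Delta u,
\qquad \nabla \cdot u = 0 .

% Energy identity: the total kinetic energy can only decrease,
% which is the conservation principle Tao showed is insufficient on its own:
\frac{d}{dt}\, \frac{1}{2} \int |u|^2 \, dx \;=\; -\,\nu \int |\nabla u|^2 \, dx \;\le\; 0 .
```

The global regularity question asks whether smooth initial data can ever develop a singularity ("blowup") in finite time despite this energy bound.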
Tao’s proposal is “a tall order,” said Charles Fefferman of Princeton University. “But it’s a very interesting way of thinking about the long-term future of the problem.” The real ocean doesn’t spontaneously blow up, of course, and perhaps for that reason, most mathematicians have concentrated their energy on trying to prove that the solutions to the Navier-Stokes equations remain smooth and well-behaved forever, a property called global regularity. Purported proofs of global regularity surface every few months, but so far each one has had a fatal flaw. The most recent attempt to garner serious attention, by Mukhtarbay Otelbaev of the Eurasian National University in Astana, Kazakhstan, is still under review, but mathematicians have already uncovered significant problems with the proof, which Otelbaev is trying to solve.
“Everyone in the research community would agree that the tools we have at the moment are not sufficient to prove global regularity,” said Susan Friedlander, of the University of Southern California in Los Angeles. Tao originally set out with a fairly modest goal: simply to make rigorous the intuition that the existing tools are not good enough. Many would-be proofs of global regularity have tried to exploit a principle of conservation of energy, and Tao set out to show that this principle is not sufficient to establish global regularity. He constructed a counterexample, a sort of toy fluid-flow universe whose governing equations have many commonalities with the Navier-Stokes equations, including conservation of energy, but whose solutions can blow up.
A decade earlier, Nets Katz, now of the California Institute of Technology in Pasadena, and Natasa Pavlovic, now of the University of Texas, Austin, had established blowup for a toy version of a simpler fluid flow model by showing how to transfer a given amount of energy into smaller and smaller size scales until, after a finite amount of time, all the energy would be packed into a single point, and the fluid would blow up. But Katz and Pavlovic's process distributed the energy across many different size scales at the same time, as if the Cat had lifted his hat to reveal not Little Cat A, but weak versions of many of the smaller cats. When Katz and Pavlovic tried to extend their process to a toy version of the Navier-Stokes equations, the fluid's viscosity snuffed out this thinned-out energy and no blowup occurred.
The greatest risk factor for Alzheimer's disease is advancing age. After 65, the risk doubles every five years, and 40 percent or more of people 85 and older are estimated to be living with the devastating condition.
Researchers at Washington University School of Medicine in St. Louis have identified some of the key changes in the aging brain that lead to the increased risk. The changes center on amyloid beta 42, a main ingredient of Alzheimer's brain plaques. The protein, a natural byproduct of brain activity, normally is cleared from the brain before it can clump together into plaques. Scientists long have suspected it is a primary driver of the disease.
"We found that people in their 30s typically take about four hours to clear half the amyloid beta 42 from the brain," said senior author Randall J. Bateman, MD, the Charles F. and Joanne Knight Distinguished Professor of Neurology. "In this new study, we show that at over 80 years old, it takes more than 10 hours."
The slowdown in clearance results in rising levels of amyloid beta 42 in the brain. Higher levels of the protein increase the chances that it will clump together to form Alzheimer's plaques. The results will appear in the Annals of Neurology.
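Treating clearance as simple first-order (exponential) decay is an assumption for illustration, since the paper's kinetic modeling is more detailed, but on that assumption the reported half-lives translate directly into clearance rates and retained fractions:

```python
import math

# First-order decay: clearance rate k = ln(2) / half-life, and the
# fraction of amyloid beta 42 remaining after t hours is 0.5**(t / t_half).
def fraction_remaining(hours, half_life_h):
    return 0.5 ** (hours / half_life_h)

k_young = math.log(2) / 4    # ~30-year-old brain: half-life ~4 h
k_old = math.log(2) / 10     # ~80-year-old brain: half-life ~10 h

print(f"clearance slows by a factor of ~{k_young / k_old:.1f}")
print(f"after 12 h: young brain retains {fraction_remaining(12, 4):.0%}, "
      f"older brain {fraction_remaining(12, 10):.0%}")
```

The older brain is left holding several times more of the protein at any given moment, which is the mechanism behind the rising plaque risk described above.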
For the study, the researchers tested 100 volunteers ages 60 to 87. Half had clinical signs of Alzheimer's disease, such as memory problems. Plaques had begun to form in the brains of 62 participants. The subjects were given detailed mental and physical evaluations, including brain scans to check for the presence of plaques.
The researchers also studied participants' cerebrospinal fluid using a technology developed by Bateman and co-author David Holtzman, MD, the Andrew B. and Gretchen P. Jones Professor and head of the Department of Neurology at Washington University. The technology, known as stable isotope labeling kinetics (SILK), allowed the researchers to monitor the body's production and clearance of amyloid beta 42 and other proteins.
In patients with evidence of plaques, the researchers observed that amyloid beta 42 appears to be more likely to drop out of the fluid that bathes the brain and clump together into plaques. Reduced clearance rates of amyloid beta 42, such as those seen in older participants, were associated with clinical symptoms of Alzheimer's disease, such as memory loss, dementia and personality changes.
Scientists believe the brain disposes of amyloid beta in four ways: by moving it into the spine, pushing it across the blood-brain barrier, breaking it down or absorbing it with other proteins, or depositing it into plaques. "Through additional studies like this, we're hoping to identify which of the first three channels for amyloid beta disposal are slowing down as the brain ages," Bateman said. "That may help us in our efforts to develop new treatments."
Researchers have recorded rapid rises in meltwater and alarming rates of glacial retreat, which are accelerating at a pace double that of a decade ago.
The world’s glaciers are in retreat. The great tongues of ice high in the Himalayas, the Andes, the Alps and the Rockies are going back uphill at ever greater speeds, according to new research. And this loss of ice is both accelerating and “historically unprecedented”, say scientists who report in the Journal of Glaciology.
In the past year or so, researchers have identified rapid rises in meltwater and alarming cases of glacial retreat in Greenland, West Antarctica, the Canadian and Alaskan coastal mountains, in Europe and in the Himalayan massif. They have also watched glaciers pick up speed downhill. One satellite-based study of the Jakobshavn glacier in Greenland, confirmed by on-the-ground measurements, shows that the river of ice is now moving at a rate of 46 metres a day, or 17 kilometres a year: twice the speed recorded in 2003, which in turn was twice the speed measured in 1997.
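The Jakobshavn figures are easy to sanity-check with illustrative arithmetic:

```python
# Converting the reported daily speed to an annual rate, and unwinding
# the two successive doublings (illustrative arithmetic only).
m_per_day = 46
km_per_year = m_per_day * 365 / 1000
print(round(km_per_year))        # ~17 km/yr, matching the reported figure

speed_2003 = km_per_year / 2     # half the current speed
speed_1997 = speed_2003 / 2      # half again
print(f"2003: ~{speed_2003:.1f} km/yr, 1997: ~{speed_1997:.1f} km/yr")
```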
The World Glacier Monitoring Service, based at the University of Zurich in Switzerland and with partners in 30 countries, has been compiling data on changes in glaciers over the last 120 years. And it has just compared all known 21st century observations with data from site measurements, aerial photography and satellite observations and evidence from pictorial and written sources. Altogether, the service has collected 5,000 measurements of glacier volume and changes in mass since 1850, and 42,000 records of variations in glacier fronts from records dating back to the 16th century.
A new technology developed by UC Berkeley bioengineers promises to make a workhorse lab tool cheaper, more portable and many times faster by accelerating the heating and cooling of genetic samples with the switch of a light. This turbocharged thermal cycling, described in a paper published July 31 in the journal Light: Science & Application, greatly expands the clinical and research applications of the polymerase chain reaction (PCR) test, with results ready in minutes instead of an hour or more.
The PCR test, which amplifies a single copy of a DNA sequence to produce thousands to millions of copies, has become vital in genomics applications, ranging from cloning research to forensic analysis to paternity tests. PCR is used in the early diagnosis of hereditary and infectious diseases, and for analysis of ancient DNA samples of mummies and mammoths.
Using light-emitting diodes, or LEDs, the UC Berkeley researchers were able to heat electrons at the interface of thin films of gold and a DNA solution. They clocked the speed of heating the solution at around 55 degrees Fahrenheit per second. The rate of cooling was equally impressive, coming in at about 43.9 degrees Fahrenheit per second.
“PCR is powerful, and it is widely used in many fields, but existing PCR systems are relatively slow,” said study senior author Luke Lee, a professor of bioengineering. “It is usually done in a lab because the conventional heater used for this test requires a lot of power and is expensive. Because it takes an hour or longer to complete each test, it is not practical for use for point-of-care diagnostics. Our system can generate results within minutes.”
To pick up the pace of this thermal cycling, Lee and his team of researchers took advantage of plasmonics, or the interaction between light and free electrons on a metal’s surface. When exposed to light, the free electrons get excited and begin to oscillate, generating heat. Once the light is off, the oscillations and the heating stop.
Gold, it turns out, is a popular metal for this plasmonic photothermal heating because it is so efficient at absorbing light. It has the added benefit of being inert to biological systems, so it can be used in biomedical applications.
For their experiments, the researchers used thin films of gold that were 120 nanometers thick, or about the width of a rabies virus. The gold was deposited onto a plastic chip with microfluidic wells to hold the PCR mixture with the DNA sample.
The light source was an array of off-the-shelf LEDs positioned beneath the PCR wells. The peak wavelength of the blue LED light was 450 nanometers, tuned to get the most efficient light-to-heat conversion. The researchers were able to cycle from 131 degrees to 203 degrees Fahrenheit 30 times in less than five minutes.
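Those rates imply the thermal cycling itself takes only a couple of minutes. A rough estimate, assuming simple linear ramps between the two setpoints and ignoring any hold times at each temperature:

```python
# Rough cycle timing from the reported figures (assumed linear ramps,
# no hold times): cycling between 131 F and 203 F at ~55 F/s heating
# and ~43.9 F/s cooling, repeated 30 times.
delta_f = 203 - 131                  # 72 F temperature swing per half-cycle
heat_s = delta_f / 55                # time to heat one swing
cool_s = delta_f / 43.9              # time to cool one swing
total_s = 30 * (heat_s + cool_s)     # 30 full cycles
print(f"~{total_s:.0f} s of pure ramping for 30 cycles")
```

Even with realistic hold times added, this is consistent with the reported under-five-minute figure, versus an hour or more for a conventional thermal cycler.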
They tested the ability of the photonic PCR system to amplify a sample of DNA, and found that the results compared well with conventional PCR tests. “This photonic PCR system is fast, sensitive and low-cost,” said Lee, who is also co-director of the Berkeley Sensor and Actuator Center. “It can be integrated into an ultrafast genomic diagnostic chip, which we are developing for practical use in the field. Because this technology yields point-of-care results, we can use this in a wide range of settings, from rural Africa to a hospital ER.”
In an attempt to harvest the kinetic energy of airflow, researchers have demonstrated the ability to harvest energy directly from the vibrations of a flexible, piezoelectric beam placed in a wind tunnel. While the general approach to harvesting energy from these "aeroelastic" vibrations is to attach the beam to a secondary vibrating structure, such as a wing section, the new design eliminates the need for the secondary vibrating structure because the beam is designed so that it produces self-induced and self-sustaining vibrations. As a result, the new system can be made very small, which increases its efficiency and makes it more practical for applications, such as self-powered sensors.
The researchers, Mohamed Y. Zakaria, Mohammad Y. Al-Haik, and Muhammad R. Hajj from the Center for Energy Harvesting Materials and Systems at Virginia Tech, have published a paper on the new energy-harvesting method in a recent issue of Applied Physics Letters.
"The greatest significance of the work is the reduction of the volume of the harvester, which translates to an increase in the power density, by eliminating the need for a secondary structure to be attached to the beam," Zakaria said. "This reduction is important in the design of very small harvesters that can be used to develop self-powered sensors."
The research shows that subjecting a flexible beam to wind at the right angle of attack can cause the beam to bend so much that the beam's "flutter speed" is significantly reduced. A large degree of bending also induces a change in the beam's natural frequencies that basically results in a synchronization of the beam's bending and twisting frequencies. Specifically, the beam's second bending frequency and torsional frequency coalesce, resulting in "self-induced flutter" of the beam. Complex aerodynamic effects ensure that the vibrations are self-sustaining, allowing for continuous energy harvesting.
Researchers at the University of St Andrews, Scotland, UK, are claiming a photonics-based breakthrough in biomedicine, having successfully tracked a day in the life of a number of white blood cells by feeding them microlasers, according to a research report published in Nano Letters. The technique is expected to allow new insights into how cancers spread in the human body.
The Soft Matter Photonics Group led by Professor Malte Gather of the School of Physics and Astronomy, in collaboration with immunologists in the University’s School of Medicine, found that by “swallowing” an optical micro-resonator, cells gain the ability to produce green laser light.
Research groups around the world have worked on lasers based on single cells for several years now. However, all previously reported cell lasers required optical resonators that were much larger than the cell itself, meaning that the cell had to be inserted into these resonators. By drastically shrinking resonator size and exploiting the capability of cells to spontaneously take up foreign objects, the latest work now allows generation of laser light within a single living cell.
Dr Gather said, “This miniaturization paves the way to applying cell lasers as a new tool in biophotonics. In the future, these new lasers can help us understand important processes in biomedicine. For instance, we may be able to track—one by one—a large number of cancer cells as they invade tissue or follow each immune cell migrating to a site of inflammation.”
He continued, "The ability to track the movement of large numbers of cells will widen our understanding of a number of important processes in biology. For instance, being able to see where and when circulating tumor cells invade healthy tissue can provide insight into how cancers spread in the body, which would allow scientists to develop more targeted therapies in the future."
The investigators put different types of cells onto a diet of optical "whispering gallery" micro-resonators. Some types of cells were particularly quick to ‘swallow’ the resonators; macrophages—immune cells responsible amongst other things for ‘garbage collection’ in our body—internalized the resonators within less than five minutes. However, even cells without particularly pronounced capacity for endocytosis readily internalized the micro-resonators, showing that laser barcodes are applicable to many different cell types.
What are future objectives?
Dr Gather believes these self-contained cell lasers have great potential to become a widely used tool in biology. Conventional fluorescent tags have rather broad emission spectra which means that one can only distinguish a limited number of different tags. The narrow spectrum of the cell laser facilitates distinguishing hundreds of thousands of different tags. The availability of such a tool will lead to new insights in cancer research as it would allow one to monitor how the cells from a tumor form metastasis, providing single cell resolution; i.e. one could see exactly which cells and how many cells from a primary tumor invade healthy tissue and form a new tumor site. The objectives are to develop the technology further, by confirming accuracy, improving speed, and reducing the size of the micro-resonators required to guarantee that their presence does not influence the behavior of the cell.
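The advantage of narrow laser lines over broad fluorescent tags is essentially combinatorial. The illustration below uses assumed linewidths and window sizes, not figures from the St Andrews work:

```python
from math import comb

# Illustrative tag counting (assumed linewidths, not the paper's numbers):
# the number of distinguishable single-color tags is roughly how many
# emission lines fit in the usable spectral window, and combining several
# laser modes per resonator multiplies the options combinatorially.
window_nm = 50
fluorophore_fwhm_nm = 25     # broad fluorescent tag
laser_linewidth_nm = 0.1     # narrow whispering-gallery laser mode

print(window_nm // fluorophore_fwhm_nm)              # only ~2 fluorescent tags
positions = round(window_nm / laser_linewidth_nm)    # ~500 laser-line slots
print(comb(positions, 3))                            # 3-mode barcodes: >20 million
```

With a handful of modes per resonator, the number of distinct spectral barcodes easily reaches the hundreds of thousands quoted above.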
Earth's magnetic field is 800 million years older than previously thought, new research suggests.
A new analysis of Western Australian zircon minerals has found the engine that generates the field started not long after the planet formed. Earth's so-called "geodynamo", involving the movement of molten iron in the Earth's outer core, began 4.22 billion years ago, say researchers today in the journal Science.
"This opens a window into a period that we know almost nothing about," says co-author, Professor Francis Nimmo of the University of California, Santa Cruz. "Before this study we knew that the dynamo had existed for around three and a half billion years. What this study has done is push back the age of the dynamo by another 800 million years."
Earth's magnetic field acts as a shield protecting the planet's atmosphere and water, which make life on Earth possible. Without the magnetic field Earth's atmosphere would have been eroded away by the solar wind, a stream of charged particles flowing from the Sun.
The magnetic field was particularly important in Earth's early history when solar winds were about 100 times stronger than they are now.
"The young Sun was very active, and so having a strong magnetic field early on allows you to hang on to your atmosphere," says Nimmo.
"Mars had a dynamo early on, but then that dynamo died," he says. "Part of the reason that Mars lost its atmosphere is not simply that it has less gravity, but also that it didn't have a magnetic field protecting the atmosphere from being blown away."
Docile ants become aggressive guard dogs after a secret signal from their caterpillar overlord. The idea turns on its head the assumption that the two species exchange favours in an even-handed relationship.
The caterpillars of the Japanese oakblue butterfly (Narathura japonica) grow up wrapped inside leaves on oak trees. To protect themselves against predators like spiders and wasps, they attract ant bodyguards, Pristomyrmex punctatus, with an offering of sugar droplets.
The relationship was thought to be a fair exchange of services in which both parties benefit. But Masaru Hojo from Kobe University in Japan noticed something peculiar: the caterpillars were always attended by the same ant individuals.
“It also seemed that the ants never moved away or returned to their nests,” he says. They seemed to abandon searching for food, and were just standing around guarding the caterpillar.
Cells contain an ocean of twisting and turning RNA molecules. Now researchers are working out the structures — and how important they could be.
When Philip Bevilacqua decided to work out the shapes of all the RNA molecules in a living plant cell, he faced two problems. First, he had not studied plant biology since high school. And second, biochemists had tended to examine single RNA molecules; tackling the multitudes that waft around in a cell was a much thornier challenge.
Bevilacqua, an RNA chemist at Pennsylvania State University in University Park, was undeterred. He knew that RNA molecules were vital regulators of cell biology and that their structures might offer broad lessons about how they work. He brushed up on plant anatomy in an undergraduate course and worked with molecular plant biologist Sarah Assmann to develop a technique that could cope with RNAs at scale.
In November 2013, they and their teams became the first to describe the shapes of thousands of RNAs in a living cell — revealing a veritable sculpture garden of different forms in the weedy thale cress, Arabidopsis thaliana1.
One month later, a group at the University of California, San Francisco, reported a comparable study of yeast and human cells2. The number of RNA structures they managed to resolve was “unprecedented”, says Alain Laederach, an RNA biologist at the University of North Carolina at Chapel Hill (UNC).
Scientists' view of RNA has transformed over the past few decades. Once, most RNAs were thought to be relatively uninteresting pieces of limp spaghetti that ferried information between the molecules that mattered, DNA and protein. Now, biologists know that RNAs serve many other essential functions: they help with protein synthesis, control gene activity and modify other RNAs. At least 85% of the human genome is transcribed into RNA, and there is vigorous debate about what, if anything, it does.
But a key mystery has remained: its convoluted structures. Unlike DNA, which forms a predictable double helix, RNA comprises a single strand that folds up into elaborate loops, bulges, pseudo-knots, hammerheads, hairpins and other 3D motifs. These structures flip and twist between different forms, and are thought to be central to the operation of RNA, albeit in ways that are not yet known. “It's a big missing piece of the puzzle of understanding how RNAs work,” says Jonathan Weissman, a biophysicist and leader of the yeast and human RNA study.
In the past few years, researchers have begun to get a toehold on the problem. Bevilacqua, Weissman and others have devised techniques that allow them to take snapshots of RNA configurations en masse inside cells — and found that the molecules often look nothing like what is seen when RNA folds under artificial conditions. The work is helping them to decipher some of the rules that govern RNA structure, which might be useful in understanding human variation and disease — and even in improving agricultural crops.
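These in-cell snapshot methods generally read out structure by comparing chemical-modification rates at each nucleotide. A minimal, generic sketch of that style of analysis (DMS/SHAPE-style background subtraction, with made-up counts) in which highly reactive positions are inferred to be unpaired:

```python
import numpy as np

# Generic structure-probing readout (illustrative counts, not real data):
# nucleotides that react strongly with the chemical probe are inferred to
# be unpaired; unreactive ones are likely paired. Reactivity is the
# treated signal minus the untreated background, floored at zero and
# normalized to the strongest position.
treated = np.array([120, 15, 10, 200, 180, 12])    # counts with probe
untreated = np.array([10, 8, 9, 12, 11, 10])       # background counts

reactivity = np.clip(treated - untreated, 0, None).astype(float)
reactivity /= reactivity.max()
print(np.round(reactivity, 2))   # high values -> likely unpaired positions
```

Comparing such profiles measured inside cells against profiles from RNA folded in a test tube is what revealed how different the in-cell structures can be.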
“It gets at the very basic problem of how do living things evolve and how do these molecular rules affect what we look like and how we function,” says Laederach. “And that, fundamentally as a biologist, is really exciting.”

The best-described RNA structures are what Kevin Weeks, a chemical biologist at UNC, calls “RNA rocks”: molecules that have changed little in their sequence or structure over evolutionary time. These include transfer RNAs and ribosomal RNAs (both involved in protein synthesis) as well as enzymatic RNAs known as ribozymes. “But in the world of RNAs,” Weeks says, “these are probably huge outliers.”
Superintelligence: Paths, Dangers, Strategies is an astonishing book with an alarming thesis: Intelligent machines are “quite possibly the most important and most daunting challenge humanity has ever faced.” In it, Oxford University philosopher Nick Bostrom, who has built his reputation on the study of “existential risk,” argues forcefully that artificial intelligence might be the most apocalyptic technology of all. With intellectual powers beyond human comprehension, he prognosticates, self-improving artificial intelligences could effortlessly enslave or destroy Homo sapiens if they so wished. While he expresses skepticism that such machines can be controlled, Bostrom claims that if we program the right “human-friendly” values into them, they will continue to uphold these virtues, no matter how powerful the machines become.
These views have found an eager audience. In August 2014, PayPal cofounder and electric car magnate Elon Musk tweeted “Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes.” Bill Gates declared, “I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.” More ominously, legendary astrophysicist Stephen Hawking concurred: “I think the development of full artificial intelligence could spell the end of the human race.” Proving his concern went beyond mere rhetoric, Musk donated $10 million to the Future of Life Institute “to support research aimed at keeping AI beneficial for humanity.”
Superintelligence is propounding a solution that will not work to a problem that probably does not exist, but Bostrom and Musk are right that now is the time to take the ethical and policy implications of artificial intelligence seriously. The extraordinary claim that machines can become so intelligent as to gain demonic powers requires extraordinary evidence, particularly since artificial intelligence (AI) researchers have struggled to create machines that show much evidence of intelligence at all. While these investigators’ ultimate goals have varied since the emergence of the discipline in the mid-1950s, the fundamental aim of AI has always been to create machines that demonstrate intelligent behavior, whether to better understand human cognition or to solve practical problems.
Some AI researchers even tried to create the self-improving reasoning machines Bostrom fears. Through decades of bitter experience, however, they learned not only that creating intelligence is more difficult than they initially expected, but also that it grows increasingly harder the smarter one tries to make the machine. Bostrom’s concept of “superintelligence,” which he defines as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest,” builds upon similar discredited assumptions about the nature of thought that the pioneers of AI held decades ago. A summary of Bostrom’s arguments, contextualized in the history of artificial intelligence, demonstrates how this is so.
A paper in Physics Letters B has raised the possibility that the Large Hadron Collider (LHC) could make a discovery that would put its previous triumph with the Higgs Boson in the shade. The authors suggest it could detect mini black holes. Such a finding would be a matter of huge significance on its own, but might be an indication of even more important things.
Few ideas from theoretical physics capture the public imagination as much as the “many-worlds hypothesis,” which proposes an infinite number of universes that differ from our own in ways large and small. The idea has provided great fodder for science fiction writers and comedians.
However, Professor Mir Faizal from the University of Waterloo draws a distinction. "Normally, when people think of the multiverse, they think of the many-worlds interpretation of quantum mechanics, where every possibility is actualized," he told Phys.org. "This cannot be tested and so it is philosophy and not science." Nonetheless, Faizal considers a test for a different sort of parallel universe almost within our grasp.
“What we mean is real universes in extra dimensions,” says Faizal. “As gravity can flow out of our universe into the extra dimensions, such a model can be tested by the detection of mini black holes at the LHC.”
The idea that the universe may be filled with minute black holes has been proposed to explain puzzles such as the nature of dark matter. However, the energy required to create such objects depends on the number of dimensions the universe has. In a conventional four-dimensional universe, these holes would require 10¹⁶ TeV, 15 orders of magnitude beyond the capacity of the LHC to produce.
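The size of that gap is easy to check with a one-line calculation, using the figures quoted in this article (the ~10¹⁶ TeV four-dimensional threshold and the LHC's 14 TeV design collision energy):

```python
import math

# Rough scale check using the figures quoted in the article:
# ~10^16 TeV needed to make a mini black hole in four dimensions,
# versus the LHC's 14 TeV design collision energy.
threshold_tev = 1e16
lhc_design_tev = 14.0

gap = math.log10(threshold_tev / lhc_design_tev)
print(f"Shortfall: roughly {gap:.0f} orders of magnitude")  # → roughly 15
```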
String theory, on the other hand, proposes 10 dimensions, six of which have been wrapped up so we can't experience them. Attempts to model such a universe suggest that the energy required to make these tiny black holes would be a great deal smaller, so much so that some scientists believe they should have been detected in experiments the LHC has already run.
So if no detection, no string theory? Not according to Faizal and his co-authors. They argue that the models used to predict the energy of the black holes in a 10-dimensional universe have left out quantum deformation of spacetime that changes gravity slightly.
Whether this deformation is real is a rapidly developing question, but if it is, the paper argues that the black holes would have energy levels much smaller than in a four-dimensional universe, yet about twice as high as any energy the LHC has probed so far. The LHC is designed to reach 14 TeV but has so far only gone to 5.3 TeV, while the paper suggests the holes might be lurking at 11.9 TeV. If so, once the LHC reaches its full capacity, we should find them.
Such a discovery would demonstrate the microscale deformation of spacetime, the existence of extra dimensions and of parallel universes within them, and string theory. If found at the right energy levels, the holes would also confirm the team's interpretation of a new theory of black hole behavior named "gravity's rainbow," after the influential novel. Such an astonishing quadruple revelation would transform physics, although the researchers are already considering the most likely flaws in their work should the holes prove elusive.
Rett syndrome is a very rare autism-like disorder that is not detectable at birth (in most cases, signs develop at 6 to 18 months of age) and occurs in about 1 in 10,000 to 1 in 15,000 females; around 90% of the children who have Rett syndrome are girls. Most boys diagnosed with the disorder die within the first 2 years of life from severe encephalopathy. The disorder is caused by mutations in a gene on the X chromosome called MECP2; girls are predominantly affected because their second X chromosome usually carries a working copy of the gene.
Rett syndrome is considered one of the most devastating and deadliest neurological disorders, and current therapy relies only on treating the symptoms. Children diagnosed with the disease typically have no communication skills, most affected individuals cannot walk, and other very common problems include growth problems, scoliosis, constipation, and cognitive decline.
Professor Nicholas Tonks at Cold Spring Harbor Laboratory and his team have developed the first drug, codenamed CPT157633, that shows promise at reversing Rett syndrome symptoms. Initial studies have been done in mice, and atomic-level X-ray crystallography shows that the drug binds to its target, the enzyme PTP1B, discovered after 25 years of research, which helps regulate key metabolic and signaling pathways.
PTP1B belongs to a family of enzymes involved in growth and development; Tonks' team has identified 105 such enzymes. PTP1B itself was found to be elevated in mice carrying the MECP2 gene defect that causes Rett syndrome, and Tonks' team developed several drugs that inhibit the function of this enzyme.
These drugs extended lifespan in male mice by as much as 90 days, and female mice showed a reversal of symptoms with at least 25% efficiency. This makes it the first drug therapy with the potential to reverse Rett syndrome at the molecular level, and the researchers hope that further testing of its effects will, in the near future, help children coping with this disorder to get the best possible treatment.
The findings have been published in The Journal of Clinical Investigation titled “PTP1B inhibition suggests a therapeutic strategy for Rett syndrome”.
The Internet of Things — the vision of a world brimming with communicating sensors and digital smarts — occupies the peak of Gartner’s most recent “hype cycle.” And a report released two months ago by the McKinsey Global Institute laid out the potential multitrillion-dollar payoff from the emerging technology.
At a two-day workshop last week in San Jose, Calif., hosted by the National Science Foundation and the National Consortium for Data Science, a few dozen academics, corporate technologists and government officials wrestled with the thorny technical and policy issues that must be addressed if the potential of the Internet of Things is to be realized. They were working to come up with a research agenda for practical progress on challenges like security, privacy and standards. A glimpse of the looming security concerns came two weeks ago, when Fiat Chrysler recalled 1.4 million vehicles after two researchers hacked into a Jeep Cherokee and showed they could remotely control its engine, brakes and steering.
But the Silicon Valley gathering also underlined the societal needs that Internet-of-Things technology could help address. Lance Donny, founder of an agricultural technology start-up, OnFarm Systems, gave a wide-ranging talk that laid out the history of farming and presented the case for its data-driven future. Inexpensive sensors, cloud computing and intelligent software, he suggested, hold the potential to transform agriculture and help feed the world’s growing population.
Venture capitalists seem to share some of Mr. Donny’s optimism. In the first half of this year, venture investment in so-called agtech start-ups reached $2.06 billion in 228 deals, according to a study published last week by AgFunder, an equity crowdfunding platform for agricultural technology. The half-year total was close to the $2.36 billion raised in all of 2014, which was a record year.
In his presentation, Mr. Donny divided the progression of farming into three stages. The first, preindustrial agriculture, dating from antiquity to about 1920, consisted of labor-intensive, essentially subsistence farming on small farms, where it took two acres to feed one person. In the second stage, industrial agriculture, from 1920 to about 2010, tractors and combine harvesters, chemical fertilizers and seed science opened the way to large commercial farms. One result has been big gains in productivity, with one acre feeding five people.
The third stage, which Mr. Donny calls Ag 3.0, is just getting underway and involves exploiting data from many sources — sensors on farm equipment and plants, satellite images and weather tracking. In the near future, the use of water and fertilizer will be measured and monitored in detail, sometimes on a plant-by-plant basis.
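A plant-by-plant irrigation decision of the kind Ag 3.0 envisions could look something like the following sketch. Everything here is hypothetical (the sensor IDs, readings, and the 0.30 moisture threshold are all invented for illustration):

```python
# Hypothetical sketch of plant-level irrigation logic. Sensor names,
# readings, and the threshold are invented for illustration only.
MOISTURE_THRESHOLD = 0.30  # volumetric water content below which we irrigate

readings = {               # soil-moisture readings, one sensor per plant
    "plant-001": 0.42,
    "plant-002": 0.27,
    "plant-003": 0.31,
    "plant-004": 0.18,
}

to_irrigate = [plant for plant, vwc in readings.items()
               if vwc < MOISTURE_THRESHOLD]
print(to_irrigate)  # → ['plant-002', 'plant-004']
```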
Programmed cell death occurs throughout life in all tissues of the body, and more than a billion cells die every day as part of normal processes. Thus, rapid and efficient clearance of cell corpses is a vital prerequisite for homeostatic maintenance of tissue health. Failure to clear dying cells can lead to the accumulation of auto-antigens in tissues that foster diseases, such as chronic inflammation, autoimmunity, and developmental abnormalities. In the normal immune system, phagocytic engulfment of apoptotic cells is accompanied by induction of a certain degree of immune tolerance in order to prevent self-antigen recognition. Over the past few decades, enormous efforts have been made toward understanding various mechanisms of tumor suppressor p53–mediated apoptosis. However, the involvement of p53 in post-apoptosis has yet to be addressed.
One of the most intriguing, yet enigmatic, questions in studying homeostatic control of efficient dead cell clearance and proper immune tolerance is how these two essential activities are interrelated: The complexity of these processes is demonstrated by the many receptors and signaling pathways involved in the engulfment of apoptotic cells and stringent discrimination of self-antigens from non-self antigens. Thus, there must be key connection(s) linking the balance between immune homeostasis and inflammation. In addition to the anti-tumor functions of p53, p53 has been implicated in immune responses and inflammatory diseases, with various roles in the immune system becoming apparent. We identified a post-apoptotic target gene of p53, Death Domain 1α (DD1α), that is responsive to genotoxic stresses and expressed in immune cells. DD1α appears to function as an immunoregulator of T cell tolerance. p53 controls signaling-mediated phagocytosis of apoptotic cells through its target, DD1α. A group of scientists now determined that DD1α functions as an engulfment ligand or receptor that is involved in homophilic intermolecular interaction at intercellular junctions of apoptotic cells and macrophages. They also investigated whether DD1α deficiency caused any defects in dead cell clearance in vivo.
The researchers found that DD1α has similarity with several members of the immunoglobulin superfamily with the extracellular immunoglobulin V (IgV) domain, such as TIM family proteins and an immune checkpoint regulator, PD-L1. They also found that p53 induction and maintenance of DD1α expression in apoptotic cells, and the subsequent functional intercellular homophilic interaction between apoptotic cells and macrophages, are required for engulfment of apoptotic cells. DD1α-deficient mice showed less reduction in organ size and cell number after ionizing radiation (IR), owing to defective dead cell clearance. DD1α-null mice are viable and indistinguishable in appearance from wild-type littermates at an early age. At a later age, however, DD1α deficiency resulted in the development of autoimmune phenotypes and prominent formation of immune infiltrates in the skin, lung, and kidney, indicating immune dysregulation and a breakdown of self-tolerance in DD1α-null mice. The team demonstrated that DD1α also plays an important role as an intercellular homophilic receptor on T cells, which suggests that DD1α is a key connecting molecule linking post-apoptotic processes to immune surveillance. DD1α deficiency in T cells impaired DD1α-mediated inhibitory activity on T cell proliferation. These data indicate that homophilic DD1α interactions are important for DD1α's T cell inhibitory role. The results therefore point to a role for p53 in regulating expression of immune checkpoint regulators, including PD-1, PD-L1, and DD1α.
The first ever genetic analysis of people with extremely high intelligence has revealed small but important genetic differences between some of the brightest people in the United States and the general population.
Published today in Molecular Psychiatry, the King's College London study selected 1,400 high-intelligence individuals from the Duke University Talent Identification Program. Representing the top 0.03 per cent of the ‘intelligence distribution’, these individuals have an IQ of 170 or more - substantially higher than that of Nobel Prize winners, who have an average IQ of around 145.
Genetic research on intelligence consistently indicates that around half of the differences between people can be explained by genetic factors. This study’s unique design, which focused on the positive end of the intelligence distribution and compared genotyping data against more than 3,000 people from the general population, greatly enhanced the study’s power to detect genes responsible for the heritability of intelligence.
Researchers analyzed single nucleotide polymorphisms (SNPs), which are DNA differences (polymorphisms) between individuals in the 3 billion nucleotide base pairs of DNA - steps in the spiral staircase of the double helix of DNA that make up the human genome. Each SNP represents a difference in a single nucleotide base pair, and these SNPs account for inherited differences between people, including intelligence. The study focused, for the first time, on rare, functional SNPs – rare because previous research had only considered common SNPs, and functional because these are SNPs that are likely to cause differences in the creation of proteins.
The researchers did not find any individual protein-altering SNPs that met strict criteria for differences between the high-intelligence group and the control group. However, for SNPs that showed some difference between the groups, the rare allele was less frequently observed in the high intelligence group. This observation is consistent with research indicating that rare functional alleles are more often detrimental than beneficial to intelligence.
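The shape of the case–control comparison described here can be illustrated with a toy calculation. All counts below are invented; this is a sketch of the kind of allele-frequency contrast the study reports, not its actual analysis pipeline:

```python
# Toy sketch of a case–control allele-frequency comparison. All counts
# are invented for illustration; this is not the study's real data.
def rare_allele_frequency(rare_count, total_alleles):
    """Fraction of sampled alleles carrying the rare variant."""
    return rare_count / total_alleles

# Hypothetical SNP, two alleles per person: 1,400 high-IQ individuals
# versus ~3,000 population controls, as in the study's design.
high_iq = rare_allele_frequency(rare_count=12, total_alleles=2 * 1400)
controls = rare_allele_frequency(rare_count=45, total_alleles=2 * 3000)

# Consistent with the reported pattern: the rare allele is observed
# less often in the high-intelligence group.
print(f"high-IQ group: {high_iq:.4f}, controls: {controls:.4f}")
```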
Professor Robert Plomin from the Institute of Psychiatry, Psychology & Neuroscience (IoPPN) at King’s College London, said: ‘Rare functional alleles do not account for much on their own but in combination, their impact is significant.
‘Our research shows that there are no genes for genius. However, to have super-high intelligence you need to have many of the positive alleles and importantly few of the negative rare effects, such as the rare functional alleles identified in our study.’ The researchers also analysed genome-wide similarity to explore the genetic architecture of intelligence.
Professor Plomin added: ‘Previous research suggests that common SNPs in total account for around 25 per cent of the variance in intelligence. The question we asked, for the first time, was - how much will these functional variants account for? We found that the functional SNPs in our study explain around 17 per cent of the differences between people in intelligence.’
The authors acknowledge that environmental influences also have an impact, often interacting with genetic factors. Professor Plomin said: ‘Clearly super-bright people such as those in our study are more likely to select environments conducive to their genetic propensity, so they might have grown up reading books that present intellectual problems or be more likely to attend a university.’
Professor Michael Simpson from the Division of Genetic and Molecular Medicine at King’s College London, said: ‘Our study demonstrates the challenges in identifying specific genetic variants that contribute to this complex trait, but provides potential insight into its genetic architecture that will inform future studies.’
Physicists say they have produced stanene - a 2D layer of tin (Sn) atoms. It forms a honeycomb structure 'buckled' on top of a bismuth telluride support (centre: top view; right: side view).
Two years after physicists predicted that tin should be able to form a mesh just one atom thick, researchers say that they have made it. The thin film, called stanene, is reported on 3 August in Nature Materials. But researchers have not been able to confirm whether the material has the predicted exotic electronic properties that have excited theorists, such as being able to conduct electricity without generating any waste heat.
Stanene (from the Latin stannum meaning tin, which also gives the element its chemical symbol, Sn), is the latest cousin of graphene, the honeycomb lattice of carbon atoms that has spurred thousands of studies into related 2D materials. Those include sheets of silicene, made from silicon atoms; phosphorene, made from phosphorus; germanene, from germanium; and thin stacks of sheets that combine different kinds of chemical elements (see ‘The super materials that could trump graphene’).
Many of these sheets are excellent conductors of electricity, but stanene is — in theory — extra-special. At room temperature, electrons should be able to travel along the edges of the mesh without colliding with other electrons and atoms as they do in most materials. This should allow the film to conduct electricity without losing energy as waste heat, according to predictions made in 2013 by Shou-Cheng Zhang, a physicist at Stanford University in California, who is a co-author of the latest study.
That means that a thin film of stanene might be the perfect highway along which to ferry current in electric circuits, says Peide Ye, a physicist and electrical engineer at Purdue University in West Lafayette, Indiana. “I'm always looking for something not only scientifically interesting but that has potential for applications in a device,” he says. “It’s very interesting work.”
Stanene is predicted to be an example of a topological insulator, in which charge carriers (such as electrons) cannot travel through a material’s centre but can move freely along its edge, with their direction of travel dependent on whether their spin — a quantum property — points ‘up’ or ‘down’. Electric current is not dissipated because most impurities do not affect the spin and cannot slow the electrons, says Zhang.
When it comes to vaccinating their babies, bees don't have a choice—they naturally immunize their offspring against specific diseases found in their environments. And now for the first time, scientists have discovered how they do it.
Researchers from Arizona State University, University of Helsinki, University of Jyväskylä and Norwegian University of Life Sciences made the discovery after studying a bee blood protein called vitellogenin. The scientists found that this protein plays a critical, but previously unknown role in providing bee babies protection against disease.
The findings appear in the journal PLOS Pathogens. "The process by which bees transfer immunity to their babies was a big mystery until now. What we found is that it's as simple as eating," said Gro Amdam, a professor with ASU's School of Life Sciences and co-author of the paper. "Our amazing discovery was made possible because of 15 years of basic research on vitellogenin. This exemplifies how long-term investments in basic research pay off."
Co-author Dalial Freitak, a postdoctoral researcher with University of Helsinki adds: "I have been working on bee immune priming since the start of my doctoral studies. Now almost 10 years later, I feel like I've solved an important part of the puzzle. It's a wonderful and very rewarding feeling!"
In a honey bee colony, the queen rarely leaves the nest, so worker bees must bring food to her. Forager bees can pick up pathogens in the environment while gathering pollen and nectar. Back in the hive, worker bees use this same pollen to create "royal jelly"—a food made just for the queen that incidentally contains bacteria from the outside environment.
After the queen eats this bacteria-laden jelly, the pathogens are digested in her gut and transferred to her body cavity; there they are stored in the queen's 'fat body'—an organ similar to a liver. Pieces of the bacteria are then bound to vitellogenin—a protein—and carried via the blood to the developing eggs. Because of this, bee babies are 'vaccinated': their immune systems are better prepared to fight diseases found in their environment once they hatch. Vitellogenin is the carrier of these immune-priming signals, something researchers did not know until now.
While bees vaccinate their babies against some diseases, many pathogens are deadly and the insects are unable to fight them.
But now that Amdam and Freitak understand how bees vaccinate their babies, this opens the door to creating the first edible and natural vaccine for insects.
"We are patenting a way to produce a harmless vaccine, as well as how to cultivate the vaccines and introduce them to bee hives through a cocktail the bees would eat. They would then be able to stave off disease," said Freitak.
One destructive disease that affects bees is American Foul Brood, which spreads quickly and destroys hives. The bacterium infects bee larvae as they ingest food contaminated with its spores. These spores get their nourishment from the larvae, eventually killing them. This disease is just one example where the researchers say a vaccine would be extremely beneficial.
Scientists at the Swiss Nanoscience Institute at the University of Basel have used resonators made from single-crystalline diamonds to develop a novel device in which a quantum system is integrated into a mechanical oscillating system. For the first time, the researchers were able to show that this mechanical system can be used to coherently manipulate an electron spin embedded in the resonator - without external antennas or complex microelectronic structures. The results of this experimental study will be published in Nature Physics.
In previous publications, the research team led by Georg H. Endress Professor Patrick Maletinsky described how resonators made from single-crystalline diamond with individually embedded electrons are highly suited to addressing the spin of these electrons. The resonators were modified so that, at select sites in the crystal lattice, a carbon atom was replaced with a nitrogen atom and the directly adjacent lattice site was left vacant. In these "nitrogen-vacancy centers," individual electrons are trapped. Their "spin," or intrinsic angular momentum, is what this research examines.
When the resonator oscillates, strain develops in the diamond's crystal structure. This, in turn, influences the spin of the electrons, which can point in one of two possible directions ("up" or "down") when measured. The direction of the spin can be detected with the aid of fluorescence spectroscopy.
In this latest publication, the scientists have shaken the resonators in a way that allows them to induce a coherent oscillation of the coupled spin for the first time. This means that the spin of the electrons switches from up to down and vice versa in a controlled and rapid rhythm and that the scientists can control the spin status at any time. This spin oscillation is fast compared with the frequency of the resonator. It also protects the spin against harmful decoherence mechanisms.
It is conceivable that this diamond resonator could be used in highly sensitive sensors, because the oscillation of the resonator can be read out via the altered spin. These new findings also allow the spin to be coherently rotated over a very long period of close to 100 microseconds, making the measurement more precise. Nitrogen-vacancy centers could potentially also be used to build a quantum computer, in which case the rapid manipulation of quantum states demonstrated in this work would be a decisive advantage.
After debuting the world’s first solar air battery last fall, researchers at The Ohio State University have now reached a new milestone.
In the Journal of the American Chemical Society, they report that their patent-pending design—which combines a solar cell and a battery into a single device—now achieves a 20 percent energy savings over traditional lithium-iodine batteries.
The 20 percent comes from sunlight, which is captured by a unique solar panel on top of the battery, explained Yiying Wu, professor of chemistry and biochemistry at Ohio State. The solar panel is now a solid sheet, rather than a mesh as in the previous design. Another key difference comes from the use of a water-based electrolyte inside the battery.
Because water circulates inside it, the new design belongs to an emerging class of batteries called aqueous flow batteries.
“The truly important innovation here is that we’ve successfully demonstrated aqueous flow inside our solar battery,” Wu said. As such, it is the first aqueous flow battery with solar capability. Or, as Wu and his team have dubbed it, the first “aqueous solar flow battery.”
“It’s also totally compatible with current battery technology, very easy to integrate with existing technology, environmentally friendly and easy to maintain,” he added.
Researchers around the world are working to develop aqueous flow batteries because they could theoretically provide affordable power grid-level energy storage someday.
In a research article by Dr Fred Goesmann from the Max Planck Institute for Solar System Research in Germany and his colleagues, the team analyzes the composition of Comet 67P/Churyumov-Gerasimenko using the COmetary SAmpling and Composition (COSAC) instrument, designed to identify organic compounds in the comet and thus contribute to a deeper understanding of the history of life on Earth.
The instrument collected molecules from 10 km (6.2 miles) above the comet surface, after the initial touchdown, and at the final site. 16 organic compounds were identified, divided into six classes of organic molecules (alcohols, carbonyls, amines, nitriles, amides and isocyanates). Of these, four organic compounds were detected for the first time on a comet (methyl isocyanate, acetone, propionaldehyde and acetamide).
Almost all the compounds detected are potential precursors, products, combinations or by-products of each other, which provides a glimpse of the chemical processes at work in a cometary nucleus, and even in the collapsing Solar Nebula in the very early Solar System.
COSAC identified a large number of nitrogen compounds but no sulfur compounds, contrary to what the ROSINA instrument on board Rosetta had observed. This suggests that the chemical composition varies depending on the area sampled.
A special issue of the journal Science highlights seven new studies that delve into the data that has been collected by ESA’s probe Philae on 67P/Churyumov-Gerasimenko.
A vaccine against Ebola has been shown to be 100% successful in trials conducted during the outbreak in Guinea and is likely to bring the west African epidemic to an end, experts say. The results of the trials involving 4,000 people are remarkable because of the unprecedented speed with which the development of the vaccine and the testing were carried out.
Scientists, doctors, donors and drug companies collaborated to race the vaccine through, in just 12 months, a process that usually takes more than a decade.
“Having seen the devastating effects of Ebola on communities and even whole countries with my own eyes, I am very encouraged by today’s news,” said Børge Brende, the foreign minister of Norway, which helped fund the trial.
A new technique for finding and characterizing microbes has boosted the number of known bacteria by almost 50 percent, revealing a hidden world all around us.
A team of microbiologists based at the University of California, Berkeley, recently devised one such new way of detecting life. At a stroke, their work expanded the number of known types — or phyla — of bacteria by nearly 50 percent, a dramatic change that indicates just how many forms of life on Earth have escaped our notice so far.
“Some of the branches in the tree of life had been noted before,” said Chris Brown, a student in the lab of Jill Banfield and lead author of the paper. “With this study we were able to fill in many gaps.”
As an organizational tool, the tree of life has been around for a long time. Lamarck had his version. Darwin had another. The basic structure of the current tree goes back 40 years to the microbiologist Carl Woese, who divided life into three domains: eukaryotes, which include all plants and animals; bacteria; and archaea, single-celled microorganisms with their own distinct features. After a point, discovery came to hinge on finding new ways of searching. “We used to think there were just plants and animals,” said Edward Rubin, director of the U.S. Department of Energy’s Joint Genome Institute. “Then we got microscopes, and got microbes. Then we got small levels of DNA sequencing.”
DNA sequencing is at the heart of this current study, though the researchers’ success also owes a debt to more basic technology. The team gathered water samples from a research site on the Colorado River near the town of Rifle, Colo. Before doing any sequencing, they passed the water through a pair of increasingly fine filters — with pores 0.2 and 0.1 microns wide — and then analyzed the cells captured by the filters. At this point they already had undiscovered life on their hands, for the simple reason that scientists had not thought to look on such a tiny scale before. “Most people assumed that bacteria were bigger, and most bacteria are bigger,” Rubin said. “Banfield has shown that there are whole populations that are very small.”
The researchers extracted the DNA from the cellular material and sent it to the Joint Genome Institute for sequencing. What they got back was a mess. Imagine being handed a box of pieces from thousands of different jigsaw puzzles and having to assemble them without knowing what any of the final images look like. That’s the challenge researchers face when performing metagenomic analysis — sequencing scrambled genetic material from many organisms at once.
The Berkeley team began the reassembly process with algorithms that assembled bits of the sequenced genetic code into slightly longer strings called contigs. “You no longer have tiny pieces of DNA, you have bigger pieces,” Brown said. “Then you figure out which of these larger pieces are part of a single genome.”
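The idea of growing short reads into longer contigs can be illustrated with a toy greedy overlap merger. This is only a sketch of the concept: real assemblers use far more sophisticated graph-based algorithms, and the sequences below are invented:

```python
def overlap(a, b, min_len=3):
    """Length of the longest suffix of `a` that is a prefix of `b`."""
    for k in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def greedy_assemble(reads, min_len=3):
    """Repeatedly merge the pair of reads with the longest overlap
    until no overlaps of at least `min_len` remain."""
    reads = list(reads)
    while True:
        best = (0, None, None)
        for i, a in enumerate(reads):
            for j, b in enumerate(reads):
                if i != j:
                    k = overlap(a, b, min_len)
                    if k > best[0]:
                        best = (k, i, j)
        k, i, j = best
        if k == 0:
            return reads  # no overlaps left: these are the contigs
        merged = reads[i] + reads[j][k:]
        reads = [r for n, r in enumerate(reads) if n not in (i, j)]
        reads.append(merged)

contigs = greedy_assemble(["ATTAGACCTG", "CTGCCGGAA", "AGACCTGCCG"])
print(contigs)  # → ['ATTAGACCTGCCGGAA']
```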
This part of the process, in which contigs are combined to reconstruct the genome sequence, is called genome binning. To execute it, the researchers relied on another set of algorithms, customized for the task by Itai Sharon, a co-author of the study. They also assembled some of the genomes manually, making decisions about what goes where based on the fact that some characteristics are consistent for a given genome. For example, the percentage of Gs and Cs will be similar on any part of an organism’s DNA.
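The GC-content criterion mentioned here can be sketched in a few lines. Real binning tools, including the customized algorithms used in the study, also weigh signals such as coverage depth and k-mer composition; this toy version, with invented sequences, uses GC fraction alone:

```python
def gc_fraction(seq):
    """Fraction of G and C bases in a sequence."""
    s = seq.upper()
    return (s.count("G") + s.count("C")) / len(s)

def bin_by_gc(contigs, tolerance=0.05):
    """Assign each contig to the first bin whose mean GC fraction is
    within `tolerance`; otherwise start a new bin."""
    bins = []  # each bin: {"gc": running mean GC, "contigs": [...]}
    for contig in contigs:
        gc = gc_fraction(contig)
        for b in bins:
            if abs(b["gc"] - gc) <= tolerance:
                b["contigs"].append(contig)
                # update the bin's running mean GC fraction
                b["gc"] = sum(map(gc_fraction, b["contigs"])) / len(b["contigs"])
                break
        else:
            bins.append({"gc": gc, "contigs": [contig]})
    return bins

bins = bin_by_gc(["ATATATAT", "ATTATAAT", "GCGCGGCC", "ATGCGCGC"])
print(len(bins))  # → 3 bins: AT-rich, GC-rich, and intermediate
```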
When the assembly was complete, the researchers had eight full bacterial genomes and 789 draft genomes that were roughly 90 percent complete. Some of the organisms had been glimpsed before; many others were completely new.