NOTE: To subscribe to the RSS feed of Amazing Science, copy http://www.scoop.it/t/amazing-science/rss.xml into the URL field of your browser and click "subscribe".
This newsletter is aggregated from over 1450 news sources:
NOTE: All articles in the amazing-science newsletter can also be sorted by topic. To do so, click the FIND button (symbolized by the FUNNEL at the top right of the screen) to display all the relevant postings SORTED by TOPICS.
You can also type your own query, e.g., searching for articles that involve "dna" as a keyword.
• 3D-printing • aging • AI • anthropology • art • astronomy • bigdata • bioinformatics • biology • biotech • chemistry • computers • cosmology • education • environment • evolution • future • genetics • genomics • geosciences • green-energy • history • language • map • material-science • math • med • medicine • microscopy • nanotech • neuroscience • paleontology • photography • photonics • physics • postings • robotics • science • technology • video
Physicists in Innsbruck have realized the first quantum simulation of lattice gauge theories, building a bridge between high-energy theory and atomic physics. In the journal Nature, Rainer Blatt's and Peter Zoller's research teams describe how they simulated the creation of elementary particle pairs out of the vacuum by using a quantum computer.
Elementary particles are the fundamental building blocks of matter, and their properties are described by the Standard Model of particle physics. The discovery of the Higgs boson at CERN in 2012 was a further step toward confirming the Standard Model. However, many aspects of this theory are still not understood because their complexity makes them hard to investigate with classical computers. Quantum computers may provide a way to overcome this obstacle, as they can simulate certain aspects of elementary particle physics in a well-controlled quantum system.
Physicists from the University of Innsbruck and the Institute for Quantum Optics and Quantum Information (IQOQI) at the Austrian Academy of Sciences have now done exactly that: In an international first, Rainer Blatt's and Peter Zoller's research teams have simulated lattice gauge theories in a quantum computer. They describe their work in the journal Nature.
Simulation of particle-antiparticle pairs using a quantum computer
Gauge theories describe the interaction between elementary particles, such as quarks and gluons, and they are the basis for our understanding of fundamental processes. "Dynamical processes, for example, the collision of elementary particles or the spontaneous creation of particle-antiparticle pairs, are extremely difficult to investigate," explains Christine Muschik, theoretical physicist at the IQOQI. "However, scientists quickly reach a limit when processing numerical calculations on classical computers. For this reason, it has been proposed to simulate these processes by using a programmable quantum system." In recent years, many interesting concepts have been proposed, but until now it was impossible to realize them. "We have now developed a new concept that allows us to simulate the spontaneous creation of electron-positron pairs out of the vacuum by using a quantum computer," says Muschik.
The quantum system consists of four electromagnetically trapped calcium ions that are controlled by laser pulses. "Each pair of ions represents a pair of a particle and an antiparticle," explains experimental physicist Esteban A. Martinez. "We use laser pulses to simulate the electromagnetic field in a vacuum. Then we are able to observe how particle pairs are created by quantum fluctuations from the energy of this field. By looking at the ions' fluorescence, we see whether particles and antiparticles were created. We are able to modify the parameters of the quantum system, which allows us to observe and study the dynamic process of pair creation."
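The flavor of this kind of simulation can be caricatured numerically on a classical computer. Below is a minimal sketch (not the authors' ion-trap protocol) of a Schwinger-model-like spin Hamiltonian on four qubits: a hopping term creates particle-antiparticle pairs out of the bare vacuum, while a staggered mass term penalizes them. The gauge-field interaction of the full model is omitted here, and the couplings `w` and `m_mass` are arbitrary illustrative values.

```python
import numpy as np
from scipy.linalg import expm

# Pauli operators in the basis |up> = [1, 0], sigma_z |up> = +|up>
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sp = np.array([[0, 1], [0, 0]], dtype=complex)  # sigma_plus
sm = sp.conj().T                                # sigma_minus
id2 = np.eye(2, dtype=complex)

N = 4  # four qubits, matching the four trapped ions

def op(single, site):
    """Embed a single-qubit operator at `site` in the N-qubit Hilbert space."""
    mats = [single if k == site else id2 for k in range(N)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

w, m_mass = 1.0, 0.5  # hopping strength and fermion mass (illustrative units)

# Hopping term: creates/annihilates adjacent particle-antiparticle pairs.
H = sum(w * (op(sp, n) @ op(sm, n + 1) + op(sm, n) @ op(sp, n + 1))
        for n in range(N - 1))
# Staggered mass term: energetically favors the empty (bare vacuum) state.
H += sum(0.5 * m_mass * (-1) ** n * op(sz, n) for n in range(N))

# Bare vacuum: staggered spin configuration with zero particles/antiparticles.
up, down = np.array([1, 0], complex), np.array([0, 1], complex)
psi0 = down
for n in range(1, N):
    psi0 = np.kron(psi0, up if n % 2 == 1 else down)

def particle_density(psi):
    """Mean particle/antiparticle number per site; zero in the bare vacuum."""
    return sum(((psi.conj() @ (op(sz, n) @ psi)).real * (-1) ** n + 1)
               for n in range(N)) / (2 * N)

psi_t = expm(-1j * H * 1.0) @ psi0  # evolve for time t = 1
nu0, nu1 = particle_density(psi0), particle_density(psi_t)
```

Starting from the vacuum, the particle density grows under time evolution, which is the pair-creation effect the experiment observes via the ions' fluorescence.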
Induced pluripotent stem cells were supposed to herald a medical revolution.
Shinya Yamanaka looked up in surprise at the postdoc who had spoken. “We have colonies,” Kazutoshi Takahashi said again. Yamanaka jumped from his desk and followed Takahashi to their tissue-culture room, at Kyoto University in Japan. Under a microscope, they saw tiny clusters of cells — the culmination of five years of work and an achievement that Yamanaka hadn't even been sure was possible.
Two weeks earlier, Takahashi had taken skin cells from adult mice and infected them with a virus designed to introduce 24 carefully chosen genes. Now, the cells had been transformed. They looked and behaved like embryonic stem (ES) cells — 'pluripotent' cells, with the ability to develop into skin, nerve, muscle or practically any other cell type. Yamanaka gazed at the cellular alchemy before him. “At that moment, I thought, 'This must be some kind of mistake',” he recalls. He asked Takahashi to perform the experiment again — and again. Each time, it worked.
Over the next two months, Takahashi narrowed down the genes to just four that were needed to wind back the developmental clock. In June 2006, Yamanaka presented the results to a stunned room of scientists at the annual meeting of the International Society for Stem Cell Research in Toronto, Canada. He called the cells 'ES-like cells', but would later refer to them as induced pluripotent stem cells, or iPS cells. “Many people just didn't believe it,” says Rudolf Jaenisch, a biologist at the Massachusetts Institute of Technology in Cambridge, who was in the room. But Jaenisch knew and trusted Yamanaka's work, and thought it was “ingenious”.
The cells promised to be a boon for regenerative medicine: researchers might take a person's skin, blood or other cells, reprogram them into iPS cells, and then use those to grow liver cells, neurons or whatever was needed to treat a disease. This personalized therapy would get around the risk of immune rejection, and sidestep the ethical concerns of using cells derived from embryos.
Ten years on, the goals have shifted — in part because those therapies have proved challenging to develop. The only clinical trial using iPS cells was halted in 2015 after just one person had received a treatment. But iPS cells have made their mark in a different way. They have become an important tool for modelling and investigating human diseases, as well as for screening drugs. Improved ways of making the cells, along with gene-editing technologies, have turned iPS cells into a lab workhorse — providing an unlimited supply of once-inaccessible human tissues for research. This has been especially valuable in the fields of human development and neurological diseases, says Guo-li Ming, a neuroscientist at Johns Hopkins University in Baltimore, Maryland, who has been using iPS cells since 2006.
The field is still experiencing growing pains. As more and more labs adopt iPS cells, researchers struggle with consistency. “The greatest challenge is to get everyone on the same page with quality control,” says Jeanne Loring, a stem-cell biologist at the Scripps Research Institute in La Jolla, California. “There are still papers coming out where people have done something remarkable with one cell line, and it turns out nobody else can do it,” she says. “We've got all the technology. We just need to have people use it right.”
DNA molecules don’t just code our genetic instructions. They can also conduct electricity and self-assemble into well-defined shapes, making them potential candidates for building low-cost nanoelectronic devices.
A team of researchers from Duke University and Arizona State University has shown how specific DNA sequences can turn these spiral-shaped molecules into electron “highways,” allowing electricity to more easily flow through the strand.
The results may provide a framework for engineering more stable, efficient and tunable DNA nanoscale devices, and for understanding how DNA conductivity might be used to identify gene damage. The study appears online June 20 in Nature Chemistry.
Scientists have long disagreed over exactly how electrons travel along strands of DNA, says David N. Beratan, professor of chemistry at Duke University and leader of the Duke team. Over longer distances, they believe electrons travel along DNA strands like particles, "hopping" from one molecular base, or "unit," to the next. Over shorter distances, the electrons use their wave character, being shared or "smeared out" over multiple bases at once. But recent experiments led by Nongjian Tao, professor of electrical engineering at Arizona State University and co-author on the study, provided hints that this wave-like behavior could be extended to longer distances.
This result was intriguing, says Duke graduate student and study lead author Chaoren Liu, because electrons that travel in waves are essentially entering the “fast lane,” moving with more efficiency than those that hop.
“In our studies, we first wanted to confirm that this wave-like behavior actually existed over these lengths,” Liu said. “And second, we wanted to understand the mechanism so that we could make this wave-like behavior stronger or extend it to even longer distances.”
DNA strands are built like chains, with each link comprising one of four molecular bases whose sequence codes the genetic instructions for our cells. Using computer simulations, Beratan’s team found that manipulating these same sequences could tune the degree of electron sharing between bases, leading to wave-like behavior over longer or shorter distances. In particular, they found that alternating blocks of five guanine (G) bases on opposite DNA strands created the best construct for long-range wave-like electronic motions.
The team theorizes that creating these blocks of G bases causes them to all “lock” together so the wave-like behavior of the electrons is less likely to be disrupted by random wiggling in the DNA strand.
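The intuition that uniform G blocks favor wave-like (delocalized) electrons while mixed sequences disrupt them can be illustrated with a toy tight-binding model. This is only a caricature, not the team's simulation: each base is treated as one orbital with an on-site energy, an idealized all-G block has identical energies, and a random sequence has disordered ones. The participation ratio measures over how many bases an eigenstate spreads.

```python
import numpy as np

def chain_hamiltonian(site_energies, t=1.0):
    """Tight-binding chain: one orbital per base, nearest-neighbour coupling t."""
    n = len(site_energies)
    return (np.diag(site_energies)
            + np.diag([-t] * (n - 1), 1)
            + np.diag([-t] * (n - 1), -1))

def mean_participation_ratio(H):
    """Average participation ratio: ~number of bases an eigenstate spreads over."""
    _, vecs = np.linalg.eigh(H)
    p4 = (np.abs(vecs) ** 4).sum(axis=0)  # inverse participation per eigenstate
    return float((1.0 / p4).mean())

rng = np.random.default_rng(0)
n_bases = 40
uniform = np.zeros(n_bases)               # idealized G block: identical energies
mixed = rng.uniform(-1.5, 1.5, n_bases)   # mixed sequence: disordered energies

pr_uniform = mean_participation_ratio(chain_hamiltonian(uniform))
pr_mixed = mean_participation_ratio(chain_hamiltonian(mixed))
# Electrons spread over many more bases in the uniform chain: wave-like transport.
```

In this picture, sequence disorder localizes the electron onto a few bases (hopping regime), while the uniform block lets it delocalize, which is the qualitative effect the article describes.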
Carsonella ruddii is a bacterium that lives symbiotically inside some insects. Its sheltered life has allowed it to reduce its genome to only about 160,000 base pairs. With fewer than 200 genes, it lacks some genes necessary for survival, but these are supplied by its insect host. In fact, Carsonella has such a small genome that biologists have conjectured it is losing its "bacterial" identity and turning into an organelle, with some of its essential genes transferred to the host's genome. This transition from bacterium to organelle has happened many times during evolutionary history; in fact, the mitochondrion responsible for energy production in human cells was once a free-living bacterium that our ancestors assimilated in the distant past.
At first, the researchers assumed that genes would shut down shortly after death, like the parts of a car that has run out of gas. What they found instead was that hundreds of genes ramped up. Although most of these genes upped their activity in the first 24 hours after the animals expired and then tapered off, in the fish some genes remained active 4 days after death.
Many of these postmortem genes are beneficial in emergencies; they perform tasks such as spurring inflammation, firing up the immune system, and counteracting stress. Other genes were more surprising. “What’s jaw-dropping is that developmental genes are turned on after death,” Noble says. These genes normally help sculpt the embryo, but they aren’t needed after birth. One possible explanation for their postmortem reawakening, the researchers say, is that cellular conditions in newly dead corpses resemble those in embryos.
The team also found that several genes that promote cancer became more active. That result could explain why people who receive transplants from the recently deceased have a higher risk of cancer, Noble says. He and his colleagues posted their results on the preprint server bioRxiv last week, and Noble says their paper is undergoing peer review at a journal.
“This is a rare study,” says molecular pharmacologist Ashim Malhotra of Pacific University, Hillsboro, in Oregon, who wasn’t connected to the research. “It is important to understand what happens to organs after a person dies, especially if we are going to transplant them.” The team’s approach for measuring gene activity could be “used as a diagnostic tool for predicting the quality of a transplant.”
In an accompanying paper on bioRxiv, Noble and two colleagues demonstrated another possible use for gene activity measurements, showing that they can provide accurate estimates of the time of death. Those results impress forensic scientist David Carter of Chaminade University of Honolulu. Although making a time of death estimate is crucial for many criminal investigations, “we are not very good at it,” he says. Such estimates often rely on evidence that isn’t directly connected to the body, such as the last calls or texts on the victim’s cellphone. Noble and his colleagues, Carter says, have “established a technique that has a great deal of potential to help death investigation.”
A mouse or zebrafish doesn’t benefit, no matter which genes turn on after its death. The patterns of gene activity that the researchers observed may represent what happens when the complex network of interacting genes that normally keeps an organism functioning unwinds. Some genes may turn on, for example, because other genes that normally help keep them silent have shut off. By following these changes, researchers might be able to learn more about how these networks evolved, Noble says. “The headline of this study is that we can probably get a lot of information about life by studying death.”
Chew on this: rice farming is a far older practice than we knew. In fact, the oldest evidence of domesticated rice has just been found in China, and it's about 9,000 years old.
The discovery, made by a team of archaeologists that includes University of Toronto Mississauga professor Gary Crawford, sheds new light on the origins of rice domestication and on the history of human agricultural practices.
"Today, rice is one of the most important grains in the world's economy, yet at one time, it was a wild plant... how did people bring rice into their world? This gives us another clue about how humans became farmers," says Crawford, an anthropological archaeologist who studies the relationships between people and plants in prehistory.
Working with three researchers from the Provincial Institute of Cultural Relics and Archaeology in Zhejiang Province, China, Crawford found the ancient domesticated rice fragments in a probable ditch in the lower Yangtze valley. They observed that about 30 per cent of the rice plant material - primarily bases, husks and leaf epidermis - was not wild but showed signs of having been purposely cultivated to produce rice plants that were durable and suitable for human consumption. Crawford says this finding indicates that the domestication of rice had been going on for much longer than originally thought. The rice plant remains also had characteristics of japonica rice, the short-grain rice used in sushi that today is cultivated in Japan and Korea. Crawford says this finding clarifies the lineage of this specific rice crop and confirms for the first time that it grew in this region of China.
Crawford and his colleagues spent about three years exploring the five-hectare archaeological dig site, called Huxi, which is situated in a flat basin about 100 meters above sea level. Their investigations were supported by other U of T Mississauga participants - anthropology professor David Smith and graduate students Danial Kwan and Nattha Cheunwattana. They worked primarily in early spring, fall and winter in order to avoid the late-spring wet season and excruciatingly hot summer months. Digging 1.5 meters below the ground, the team also unearthed artifacts such as sophisticated pottery and stone tools, as well as animal bones, charcoal and other plant seeds.
An international team of astronomers has reported the discovery of a new giant extrasolar planet orbiting a subgiant star so closely that it should be destroyed due to tidal interactions. However, against all odds, the planet exists.
The planet, designated K2-39b, was first spotted by NASA's prolonged Kepler mission, known as K2. To confirm its planetary status, the team of researchers, led by Vincent Van Eylen of Aarhus University in Denmark, employed the High Accuracy Radial velocity Planet Searcher (HARPS) spectrograph on the ESO 3.6 m telescope at La Silla, Chile, the Nordic Optical Telescope on La Palma in the Canary Islands, and the Magellan II telescope at the Las Campanas Observatory in Chile.
The ground-based follow-up measurements were crucial to confirm that the newly found object was indeed a genuine exoplanet. The scientists conducted so-called radial velocity measurements to detect the movement of the star caused by the planet. These measurements clearly confirmed that the planet is real and also allowed the team to determine its mass.
According to the study, K2-39b is 50 times more massive than our planet and has a radius of about eight Earth radii. However, what is most intriguing about the new findings is that the planet is orbiting its evolved subgiant host star every 4.6 days, and so closely that it should be tidally destroyed.
"K2-39b is a bit of a 'special beast,' because such short-period planets orbiting large, evolved stars, are quite rare. (…) This planet is special mostly because of the star it orbits: Its host star is an evolved star, a subgiant several times larger than the sun. Around such stars, very few short-period planets were known, and there is speculation this may be because they cannot survive so close to such large stars. However, the fact that we have now found this planet, very close to a subgiant star, proves that at least some planets can survive there," Van Eylen explained.
Currently, there are two main theories attempting to explain the lack of close-in planets orbiting evolved subgiant stars. One of the hypotheses is that planets might be tidally destroyed as the star evolves and grows larger. The other scenario suggests that this is due to the systematically higher masses of the observed evolved stars compared to the observed main-sequence stars.
In the study, the scientists also estimate how long K2-39b can survive orbiting its subgiant parent star. Taking into account the stellar mass of K2-39 and assuming that the planet remains in its current orbit, they suggest that the alien world will probably be destroyed in about 150 million years.
Furthermore, the team notes that it seems there may be a second planet in the system, at a much larger distance from the star. However, according to Van Eylen, the current data set has not been able to constrain this potential second planet. Further measurements may be able to do just that.
Hyperloop One demonstrated its rapidly developing technology earlier this year in the Nevada desert. There, electromagnets propelled a sled at 115 mph (185 km/h) along a special test track. The ultimate goal of the Hyperloop system is to create a tube in which a vacuum in front of a passenger or cargo capsule removes air friction and allows mass ground travel at unprecedented speeds. The company is currently developing a magnetic drive that could get a capsule up to 700 mph (1,126 km/h), for example.
According to a press release about today's announcement, Hyperloop One now has feasibility studies being conducted in the Netherlands, Switzerland, Dubai, Los Angeles, the UK, Finland, and Sweden.
Researchers have fabricated a silicon optical antenna that acts somewhat like an extremely small, special kind of prism: when red light shines on the optical antenna, the light turns right, but when the light is another color, such as orange, it turns left.
This unusual property, which is called "bidirectional color scattering," enables the optical antenna to function effectively as a passive wavelength router for visible light. The device could have applications for innovative light sensors, light-matter manipulation, and optical communication.
The new optical antenna was developed by a team of researchers, Jiaqi Li et al., at imec (Interuniversity MicroElectronics Center) and the University of Leuven (KU Leuven), both in Leuven, Belgium. Their work is published in a recent issue of Nano Letters.
Although optical antennas are a relatively new area of research, they are simply the optical version of the radio and microwave antennas that most people are familiar with, which are commonly used for receiving and transmitting signals in radios, cell phones, and Wi-Fi.
In general, an antenna's size corresponds to the wavelengths it was designed for. Since radio waves and microwaves have wavelengths on the scale of millimeters to kilometers, these antennas can be quite large. Since the wavelength of visible light is on the scale of a few hundred nanometers, tuning in to these signals requires nanosized antennas, which are much more difficult to fabricate.
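The size scaling is easy to make concrete. As a back-of-the-envelope sketch (using the classic rule of thumb that a resonant dipole is roughly half the wavelength it is tuned to, and ignoring material and geometry effects):

```python
# Rough antenna-size estimate: a half-wave dipole is ~lambda/2 long.
C = 3.0e8  # speed of light in vacuum, m/s

def half_wave_length(wavelength_m):
    """Approximate length of a half-wave dipole for a given wavelength."""
    return wavelength_m / 2.0

fm_radio = half_wave_length(C / 100e6)  # 100 MHz FM broadcast: ~1.5 m antenna
red_light = half_wave_length(660e-9)    # 660 nm red light: ~330 nm antenna
```

A meter-scale rod is easy to manufacture; a 330-nanometer structure requires nanofabrication, which is why optical antennas are a young field.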
Over the past few years, the imec and KU Leuven team has been exploring the possibilities of directional light manipulation at these length scales using an antenna consisting of just a single element. In 2013, using gold nanoantennas, they were able to demonstrate the world's smallest unidirectional optical antenna, in the shape of the letter V. These metallic antennas support so-called "plasmonic modes," which are fundamentally different from the optical modes supported by a dielectric antenna.
Now, by switching to a dielectric V-shaped antenna made from silicon, the researchers could achieve bidirectional scattering, in contrast to unidirectional scattering in the case of using gold. In bidirectional scattering, the scattering direction depends on the wavelength of the incoming (incident) light. The shift in direction is gradual. For example, as the wavelength decreases from 755 nm to 660 nm, the scattering direction gradually changes from the leftward to the rightward direction. The specific wavelengths can be tuned by engineering slight adjustments to the size and shape of the antenna.
A Chinese supercomputer has topped a list of the world's fastest computers again this year, and for the first time the winning system uses Chinese-designed processors instead of U.S. technology.
The announcement Monday is a new milestone for Chinese supercomputer development and a further erosion of past U.S. dominance of the field. Last year's Chinese winner in the TOP500 ranking, maintained by researchers in the United States and Germany, slipped to No. 2, followed by a computer at the U.S. government's Oak Ridge National Laboratory in Tennessee.
Also this year, China displaced the United States for the first time as the country with the most supercomputers in the top 500. China had 167 systems and the United States had 165. Japan was a distant No. 3 with 29 systems.
Supercomputers are one of a series of technologies targeted by China's ruling Communist Party for development and have received heavy financial support. Such systems are used for weather forecasting, designing nuclear weapons, analyzing oilfields and other specialized purposes.
"Considering that just 10 years ago, China claimed a mere 28 systems on the list, with none ranked in the top 30, the nation has come further and faster than any other country in the history of supercomputing," the TOP500 organizers said in a statement.
This year's champion is the Sunway TaihuLight at the National Supercomputing Center in Wuxi, west of Shanghai, according to TOP500. It was developed by China's National Research Center of Parallel Computer Engineering & Technology using entirely Chinese-designed processors.
The TaihuLight is capable of 93 petaflops (93 quadrillion calculations per second), according to TOP500. It is intended for use in engineering and research, including climate, weather, life sciences, advanced manufacturing and data analytics.
Its top speed is about five times that of Oak Ridge's Titan, which uses Cray, NVIDIA and Opteron technology. Other countries with computers in the Top 10 were Japan, Switzerland, Germany and Saudi Arabia.
A piece of paper is one of the most common, versatile daily items. Children use it to draw their favorite animals and practice writing the A-B-Cs, and adults print reports or scribble a hasty grocery list.
Now, connecting real-world items such as a paper airplane or a classroom survey form to the larger Internet of Things environment is possible using off-the-shelf technology and a pen, sticker or stencil pattern.
Researchers from the University of Washington, Disney Research and Carnegie Mellon University have created ways to give a piece of paper sensing capabilities that allow it to respond to gesture commands and connect to the digital world.
The method relies on small radio frequency identification (RFID) tags that are stuck on, printed or drawn onto the paper to create interactive, lightweight interfaces that can do anything from controlling music with a paper baton to live polling in a classroom.
"Paper is our inspiration for this technology," said lead author Hanchuan Li, a UW doctoral student in computer science and engineering. "A piece of paper is still by far one of the most ubiquitous mediums. If RFID tags can make interfaces as simple, flexible and cheap as paper, it makes good sense to deploy those tags anywhere."
The researchers presented their work May 12, 2016 at the Association for Computing Machinery's CHI 2016 conference in San Jose, California.
Northwestern University's Ken Forbus is closing the gap between humans and machines. Using cognitive science theories, Forbus and his collaborators have developed a model that could give computers the ability to reason more like humans and even make moral decisions. Called the structure-mapping engine (SME), the new model is capable of analogical problem solving, including capturing the way humans spontaneously use analogies between situations to solve moral dilemmas.
"In terms of thinking like humans, analogies are where it's at," said Forbus, Walter P. Murphy Professor of Electrical Engineering and Computer Science in Northwestern's McCormick School of Engineering. "Humans use relational statements fluidly to describe things, solve problems, indicate causality, and weigh moral dilemmas."
The theory underlying the model is psychologist Dedre Gentner's structure-mapping theory of analogy and similarity, which has been used to explain and predict many psychological phenomena. Structure-mapping argues that analogy and similarity involve comparisons between relational representations, which connect entities and ideas, for example, that a clock is above a door or that pressure differences cause water to flow.
Analogies can be complex (electricity flows like water) or simple (his new cell phone is very similar to his old phone). Previous models of analogy, including prior versions of SME, have not been able to scale to the size of representations that people tend to use. Forbus's new version of SME can handle the size and complexity of relational representations that are needed for visual reasoning, cracking textbook problems, and solving moral dilemmas.
"Relational ability is the key to higher-order cognition," said Gentner, Alice Gabrielle Twight Professor in Northwestern's Weinberg College of Arts and Sciences. "Although we share this ability with a few other species, humans greatly exceed other species in ability to represent and reason with relations."
Supported by the Office of Naval Research, Defense Advanced Research Projects Agency, and Air Force Office of Scientific Research, Forbus and Gentner's research is described in the June 20 issue of the journal Cognitive Science. Andrew Lovett, a postdoctoral fellow in Gentner's laboratory, and Ronald Ferguson, a PhD graduate from Forbus's laboratory, also authored the paper.
Many artificial intelligence systems -- like Google's AlphaGo -- rely on deep learning, a process in which a computer learns by examining massive amounts of data. By contrast, people -- and SME-based systems -- often learn successfully from far fewer examples. In moral decision-making, for example, a handful of stories suffices to enable an SME-based system to learn to make decisions as people do in psychological experiments.
"Given a new situation, the machine will try to retrieve one of its prior stories, looking for analogous sacred values, and decide accordingly," Forbus said.
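The core idea of structure mapping, scoring entity correspondences by how much relational structure they preserve, can be sketched with a deliberately tiny brute-force matcher. This is not SME itself (SME uses far more sophisticated, scalable algorithms and richer representations); the predicate and entity names below are invented for illustration, using the classic water/electricity analogy mentioned in the article.

```python
from itertools import permutations

def best_mapping(base, target):
    """Brute-force structure mapping: find the entity correspondence that
    preserves the most relational statements (same predicate, mapped args)."""
    base_entities = sorted({a for _, *args in base for a in args})
    target_entities = sorted({a for _, *args in target for a in args})
    best, best_score = {}, -1
    for perm in permutations(target_entities, len(base_entities)):
        mapping = dict(zip(base_entities, perm))
        score = sum(1 for pred, *args in base
                    if (pred, *[mapping[a] for a in args]) in target)
        if score > best_score:
            best, best_score = mapping, score
    return best, best_score

# Base domain: water flow. Target domain: electricity.
water = {("causes", "pressure", "water_flow"),
         ("through", "water_flow", "pipe")}
electricity = {("causes", "voltage", "current"),
               ("through", "current", "wire")}

mapping, score = best_mapping(water, electricity)
# The best correspondence maps pressure -> voltage, water_flow -> current,
# pipe -> wire, preserving both relational statements.
```

The mapping is chosen purely because it preserves relational structure, not because the entities themselves are similar, which is the essence of Gentner's theory; SME's contribution is doing this efficiently over representations far larger than exhaustive search could handle.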
A new study published in Nature presents one of the most complete models of matter in the universe and predicts hundreds of massive black hole mergers each year observable with the second generation of gravitational wave detectors.
The model anticipated the massive black holes observed by the Laser Interferometer Gravitational-wave Observatory. The two colliding masses created the first directly detected gravitational waves and confirmed Einstein's general theory of relativity.
"The universe isn't the same everywhere," said Richard O'Shaughnessy, assistant professor in RIT's School of Mathematical Sciences, and co-author of the study led by Krzysztof Belczynski from Warsaw University. "Some places produce many more binary black holes than others. Our study takes these differences into careful account."
Massive stars that collapse upon themselves and end their lives as black holes, like the pair LIGO detected, are extremely rare, O'Shaughnessy said. They are less-evolved, "more primitive" stars that occur in special configurations in the universe. These stars from the early universe are made of more pristine hydrogen, a gas that makes them "Titans among stars," at 40 to 100 solar masses. In contrast, younger generations of stars consumed the corpses of their predecessors, which contained heavy elements that stunted their growth.
"Because LIGO is so much more sensitive to these heavy black holes, these regions of pristine gas that make heavy black holes are extremely important," O'Shaughnessy said. "These rare regions act like factories for building identifiable pairs of black holes."
O'Shaughnessy and his colleagues predict that massive black holes like these spin in a stable way, with orbits that remain in the same plane. The model shows that the alignment of these massive black holes is impervious to the tiny kick that follows the stars' core collapse. The same kick can change the alignment of smaller black holes and rock their orbital plane.
The calculations reported in Nature are the most detailed of their kind ever performed, O'Shaughnessy said. He likens the model to a laboratory for assessing future prospects for gravitational wave astronomy. Other gravitational wave astronomers are now using the model in their own investigations as well.
"We've already seen that we can learn a lot about Einstein's theory and massive stars, just from this one event," said O'Shaughnessy, also a member of the LIGO Scientific Collaboration that helped make and interpret the first discovery of gravitational waves. "LIGO is not going to see 1,000 black holes like these each year, but many of them will be even better and more exciting because we will have a better instrument--better glasses to view them with and better techniques."
Every school kid learns the basic structure of the Earth: a thin outer crust, a thick mantle, and a Mars-sized core. But is this structure universal? Will rocky exoplanets orbiting other stars have the same three layers? New research suggests that the answer is yes - they will have interiors very similar to Earth. "We wanted to see how Earth-like these rocky planets are. It turns out they are very Earth-like," says lead author Li Zeng of the Harvard-Smithsonian Center for Astrophysics (CfA).
To reach this conclusion Zeng and his co-authors applied a computer model known as the Preliminary Reference Earth Model (PREM), which is the standard model for Earth's interior. They adjusted it to accommodate different masses and compositions, and applied it to six known rocky exoplanets with well-measured masses and physical sizes.
They found that the other planets, despite their differences from Earth, all should have a nickel/iron core containing about 30 percent of the planet's mass. In comparison, about a third of the Earth's mass is in its core. The remainder of each planet would be mantle and crust, just as with Earth.
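The core-fraction result above can be sanity-checked with a much cruder model than the one the team used. The sketch below treats a planet as just two constant-density layers (an iron core holding 30 percent of the mass, plus a silicate mantle); the densities are rough assumed averages, not the depth-dependent PREM profiles the study actually rescaled.

```python
import math

def two_layer_radius(m_planet_kg, core_frac=0.3,
                     rho_core=12000.0, rho_mantle=4500.0):
    """Estimate the radius of a rocky planet modeled as a constant-density
    iron core plus silicate mantle. Densities (kg/m^3) are rough assumed
    averages; the study itself rescaled the depth-dependent PREM profile."""
    m_core = core_frac * m_planet_kg
    m_mantle = m_planet_kg - m_core
    volume = m_core / rho_core + m_mantle / rho_mantle
    return (3.0 * volume / (4.0 * math.pi)) ** (1.0 / 3.0)

M_EARTH = 5.972e24  # kg
r = two_layer_radius(M_EARTH)
print(f"Estimated radius: {r / 1e3:.0f} km")  # Earth's actual mean radius is 6371 km
```

Even this toy version lands within about 1 percent of Earth's true radius, which illustrates why a 30-percent iron core is such a natural fit for Earth-like compositions.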
"We've only understood the Earth's structure for the past hundred years. Now we can calculate the structures of planets orbiting other stars, even though we can't visit them," adds Zeng.
The new code also can be applied to smaller, icier worlds like the moons and dwarf planets in the outer solar system. For example, by plugging in the mass and size of Pluto, the team finds that Pluto is about one-third ice, mostly water ice but also ammonia and methane ice varieties.
The model assumes that distant exoplanets have chemical compositions similar to Earth. This is reasonable based on the relative abundances of key chemical elements like iron, magnesium, silicon, and oxygen in nearby systems. However, planets forming in more or less metal-rich regions of the galaxy could show different interior structures. The team expects to explore these questions in future research.
The paper detailing this work, authored by Li Zeng, Dimitar Sasselov, and Stein Jacobsen (Harvard University), has been accepted for publication in The Astrophysical Journal and is available online.
First China conquered DNA sequencing. Now it wants to dominate precision medicine too.
Six years ago, China became the global leader in DNA sequencing — and it was all down to one company, BGI. The Shenzhen-based firm had just purchased 128 of the world's fastest sequencing machines and was said to have more than half the world's capacity for decoding DNA. It was assembling an army of upstart young bioinformaticians, collaborating with leading researchers worldwide and publishing the sequences of creatures ranging from ancient humans to the giant panda. The firm was quickly gaining a reputation as a brute-force genome factory — more brawn than brains, said some.
Six years later, the scene is quite different. BGI's most famous scientist and visionary leader, Jun Wang, left last July. The machine that had given the company its dominance is outdated, and the firm's attempt to develop its own industrial-scale whole-genome sequencer hit a roadblock last November, forcing it to lay off employees at its US subsidiary. Meanwhile, the competing system — Illumina's X series — has been selling briskly, raising the speed and dropping the price of sequencing worldwide.
Armed with the latest sequencers, rival companies to BGI have emerged. Most prominent of these is Novogene in Beijing, founded in 2011 by former BGI vice-president Ruiqiang Li. And although BGI might not have the uncontested dominance it once did, it still claims to have the world's largest sequencing capacity as well as major scientific ambitions — including to sequence the genomes of one million people, one million plants and animals and one million microbial ecosystems. Today, China is being reborn as a sequencing power with a broader base.
Fuelling the drive is a multibillion-dollar, 15-year precision-medicine initiative, which China announced in March and which rivals a similar initiative in the United States. If these efforts fulfil their goals, doctors envision being able to use a person's genome and physiology to pick the best treatments for his or her disease. The goal now for sequencing companies is to turn the bounty of genomic data into medical benefits. To do that, sequence data alone are not enough — so some Chinese companies are going beyond brute-force sequencing to work out how lifestyle factors such as diet are also important for understanding disease risk and for finding therapies. “The thing about China is the ambition they have for their precision-medicine programme is orders of magnitude larger than the United States',” says Hannes Smárason, chief operating officer and co-founder of WuXiNextCODE, a genomics company in Cambridge, Massachusetts, that is part of Shanghai-based WuXi AppTec. “They are dynamic and receptive. There, the idea of integrating genomics into health care is very real.”
The new energy behind sequencing is largely thanks to one machine: Illumina's HiSeq X Ten, so called because it is generally sold as sets of ten units. When the machine hit the market in 2014, one set was able to sequence a human genome for close to US$1,000, and power through some 18,000 human genomes per year. Companies that wanted to rival BGI saw an opportunity — and leapt.
Novogene was the first. Following a model similar to BGI's, Li has been building up a large staff of bioinformaticians to generate and interpret sequence data as part of collaborative basic-research projects on the snub-nosed monkey (Rhinopithecus roxellana), cotton (Gossypium hirsutum) and other plants and animals. Using the same machine, a handful of other companies — including WuXi PharmaTech and Cloud Health, both in Shanghai — focus more on offering sequencing as a service to pharmaceutical or personal-genomics companies.
The growth is accelerating. Novogene added a second X Ten set in April, and Cloud Health chief executive Jason Gang Jin says that the company will add another two sets this year. By the end of the year, China will probably have at least 70 units. (Illumina says that 300 units were sold worldwide by the end of last year.)
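The figures quoted above make for a quick back-of-envelope throughput estimate — assuming, as stated earlier, that one ten-unit HiSeq X Ten set handles roughly 18,000 human genomes per year:

```python
# Rough annual whole-genome throughput, using the article's figures:
# one HiSeq X Ten set (10 units) powers through ~18,000 genomes per year.
GENOMES_PER_SET_PER_YEAR = 18_000
UNITS_PER_SET = 10

def annual_genomes(n_units):
    """Estimated whole human genomes per year for n_units X-series machines."""
    return n_units / UNITS_PER_SET * GENOMES_PER_SET_PER_YEAR

print(annual_genomes(70))   # China's projected ~70 units -> 126000.0
print(annual_genomes(300))  # units sold worldwide by end of last year -> 540000.0
```

By this crude estimate, China's projected fleet alone could sequence well over 100,000 genomes a year.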
BGI has been trying to keep pace. In 2013, it purchased Complete Genomics in Mountain View, California, in a bid to create its own advanced sequencing machines for in-house use and for sale. The firm announced a system called Revolocity, its attempt to match the HiSeq X, last June. But in November, having taken just three orders, it suddenly suspended sales. BGI is now left with its ageing fleet of 128 Illumina HiSeq 2000 machines and a mélange of newer sequencers from various companies, including its own.
Estimates of China's share of the world's sequencing capacity range from 20% to 30% — still lower than when BGI was in its heyday, but expected to increase fast. “Sequencing capacity is rising rapidly everywhere, but it's rising more rapidly in China than anywhere else,” says Richard Daly, chief executive of DNAnexus in Mountain View, which supplies cloud platforms for large-scale genomics.
BGI has another machine up its sleeve. The BGISEQ-500 is designed as more of a desktop instrument for research labs. It is also based on the Complete Genomics technology and is set to begin shipping this year. Yiwu He, BGI's new global head of research, says that the system can sequence a human genome for $1,000, and by being smaller in scale and more flexible to use, it will meet China's emerging need for clinical sequencing. “There will be more sequencing done outside of research institutes, in the hospitals,” says He. The company will bring the price of one human genome sequence down to $200 in the next few years, he predicts boldly. “China is the most exciting place to do biomedical research.”
Histone proteins at the core of nucleosomes and their tails exert control over the exposure of genes for binding, as demonstrated in simulations by Rice researchers.
The protein complex that holds strands of DNA in compact spools partially disassembles itself to help genes reveal themselves to specialized proteins and enzymes for activation, according to Rice University researchers and their colleagues.
The team’s detailed computer models support the idea that DNA unwrapping and core protein unfolding are coupled, and that DNA unwrapping can happen asymmetrically to expose specific genes. The study of nucleosome disassembly by Rice theoretical biological physicist Peter Wolynes, former Rice postdoctoral researcher Bin Zhang, postdoctoral researcher Weihua Zheng and University of Maryland theoretical chemist Garegin Papoian appears in the Journal of the American Chemical Society. The research is part of a drive by Rice’s Center for Theoretical Biological Physics (CTBP) to understand the details of DNA’s structure, dynamics and function.
The spools at the center of nucleosomes, the fundamental unit of DNA organization, are histone protein core complexes. Nucleosomes are buried deep within a cell’s nucleus. About 147 DNA base pairs (from the more than 3 billion in the human genome) wrap around each histone core 1.7 times. The double helix moves on to spiral around the next core, and the next, with linker sections of 20 to 90 base pairs in between. The structure helps squeeze a 6-foot-long strand of DNA in each cell into as compact a form as possible while facilitating the controlled exposure of genes along the strand for protein expression.
The spools consist of two pairs of heterodimers, macromolecules that join to form the core. The core is stable until genes along the DNA are called upon by transcription factors or RNA polymerases; the researchers’ goal was to simulate what happens as the DNA unwinds from the core, making itself available to bind to outside proteins or make contact with other genes along the strand.
The researchers used their energy landscape models to simulate the nucleosome disassembly mechanism based on the energetic properties of its constituent DNA and proteins. The landscape maps the energies of all the possible forms a protein can take as it folds and functions. Conceptual insights from energy landscape theory have been implemented in an open-source biomolecular modeling framework called AWSEM Molecular Dynamics, which was jointly developed by the Papoian and Wolynes groups.
Wolynes said most studies elsewhere treated the histone core as if it were rigid and irreversibly disassociated when DNA unwrapped. But more recent experimental studies that involved gently pulling strands of DNA or used fluorescent resonance energy transfer, which measures energy moving between two molecules, showed the protein core is flexible and does not completely disassemble during unwrapping.
In their simulations, the researchers found the core changed its shape as the DNA unwound. Without DNA, they found the histone core was completely unstable in physiological conditions.
Their simulations showed that histone tails – the terminal regions of the core proteins – play a crucial role in nucleosome stability. The tails are highly charged and bind tightly with DNA, keeping its genomic content from being exposed until necessary. Their models predicted a faster unwrapping for tail-less nucleosomes, as seen in experiments.
The nucleosome study is part of a larger effort both by Papoian at Maryland and by Wolynes with his colleagues at CTBP to understand the mechanics of DNA, from how it functions to how it reproduces during mitosis. Wolynes said the new study and another new one by his lab on DNA during mitosis represent the opposite ends of the size scale.
“We can understand things at each end of the scale, but there’s a no-man’s land in between,” he said. “We’ll have to see whether the phenomena in the present-day no-man’s land can be understood. I don’t believe in magic; I believe they eventually will.”
Last year, biophysicist Moh El-Naggar and his graduate student Yamini Jangir plunged beneath South Dakota’s Black Hills into an old gold mine that is now more famous as a home to a dark matter detector. Unlike most scientists who make pilgrimages to the Black Hills these days, El-Naggar and Jangir weren’t there to hunt for subatomic particles. They came in search of life.
The electricity-eating microbes that the researchers were hunting for belong to a larger class of organisms that scientists are only beginning to understand. They inhabit largely uncharted worlds: the bubbling cauldrons of deep sea vents; mineral-rich veins deep beneath the planet’s surface; ocean sediments just a few inches below the deep seafloor. The microbes represent a segment of life that has been largely ignored, in part because their strange habitats make them incredibly difficult to grow in the lab.
Yet early surveys suggest a potential microbial bounty. A recent sampling of microbes collected from the seafloor near Catalina Island, off the coast of Southern California, uncovered a surprising variety of microbes that consume or shed electrons by eating or breathing minerals or metals. El-Naggar’s team is still analyzing their gold mine data, but he says that their initial results echo the Catalina findings. Thus far, whenever scientists search for these electron eaters in the right locations — places that have lots of minerals but not a lot of oxygen — they find them.
As the tally of electron eaters grows, scientists are beginning to figure out just how they work. How does a microbe consume electrons out of a piece of metal, or deposit them back into the environment when it is finished with them? A study published last year revealed the way that one of these microbes catches and consumes its electrical prey. And not-yet-published work suggests that some metal eaters transport electrons directly across their membranes — a feat once thought impossible.
Though eating electricity seems bizarre, the flow of current is central to life. All organisms require a source of electrons to make and store energy. They must also be able to shed electrons once their job is done. In describing this bare-bones view of life, Nobel Prize-winning physiologist Albert Szent-Györgyi once said, “Life is nothing but an electron looking for a place to rest.”
The microbes’ apparent ability to ingest electrons — known as direct electron transfer — is particularly intriguing because it seems to defy the basic rules of biophysics. The fatty membranes that enclose cells act as an insulator, creating an electrically neutral zone once thought impossible for an electron to cross. “No one wanted to believe that a bacterium would take an electron from inside of the cell and move it to the outside,” said Kenneth Nealson, a geobiologist at the University of Southern California, in a lecture to the Society for Applied Microbiology in London last year.
In the 1980s, Nealson and others discovered a surprising group of bacteria that can expel electrons directly onto solid minerals. It took until 2006 to discover the molecular mechanism behind this feat: A trio of specialized proteins sits in the cell membrane, forming a conductive bridge that transfers electrons to the outside of the cell.
Ord and his UNSW colleague Georgina Cooke analyzed the evolutionary relationships between fish species with out-of-water adaptations, and also looked at the ecological and evolutionary conditions that might inspire fish to move from water to land.
The researchers identified 33 fish families with at least one species that showcases amphibious tendencies. They published their findings in the journal Evolution. "These forays onto land have occurred in fish that live in different climates, eat different diets and live in a range of aquatic environments, from freshwater rivers to the ocean," added Ord. "While many species only spend a short time out of water, others, like mudskippers and some eels, can last for hours or days."
The new study also documents a unique group of intertidal fish called blennies, which includes several species that hop around on land full-time as adults, staying within the vicinity of crashing waves and hiding in the crevices of wet rocks at low tide. "In this one family of fish alone, an amphibious lifestyle appears to have evolved repeatedly, between three and seven times," added Ord.
MicroRNAs (miRNAs) are small cellular fragments of RNA that in some organisms prevent the production of certain proteins. They have been found to be expressed in tissues of some non-mammals and in some embryos prior to implantation. Little is known about their function in mammals, however, though prior research had found them to exist in oocytes (ovarian cells that lead to the development of an ovum) and early embryos. Prior efforts to deep sequence them in mammals have proved extremely challenging due to the quantities that must be processed, so scientists still do not know what role they may play in embryo development, if any. In this new effort, the researchers report that they have found a way to tweak the cDNA library construction method for small RNAs, resulting in a need for only 10 nanograms of RNA for a deep sequence, and because of that, they were able to profile samples of both mouse oocytes and early embryos.
To tweak the construction method, the researchers optimized the 5' and 3' adaptor ligation and PCR amplification steps, which allowed for drastically reducing the amount of RNA needed. To test their ideas they performed the tweaking on human embryonic kidney (HEK 293) cells. Once they had the technique developed, they switched to testing mouse oocytes and early embryos to learn more about the role of miRNA in mammal embryo development. They report that they were able to trace the processes surrounding miRNA as it moved from fertilization to early embryonic development—which was the first time that had ever been done. Furthermore, they found that the role miRNA played was suppressed as initial cell division was occurring—though it was not clear why that occurred—but later it was reactivated, perhaps as part of the process of regulating zygotic genetic growth factors.
Earth's magnetosphere, the region of space dominated by Earth's magnetic field, protects our planet from the harsh battering of the solar wind.
Announced recently in Nature Physics, a new discovery led by researchers at the University of Alberta shows for the first time how the puzzling third Van Allen radiation belt is created by a "space tsunami." Intense so-called ultra-low frequency (ULF) plasma waves, which are excited on the scale of the whole magnetosphere, transport the outer part of the belt radiation harmlessly into interplanetary space and create the previously unexplained feature of the third belt.
"Remarkably, we observed huge plasma waves," says Ian Mann, physics professor at the University of Alberta, lead author on the study and former Canada Research Chair in Space Physics. "Rather like a space tsunami, they slosh the radiation belts around and very rapidly wash away the outer part of the belt, explaining the structure of the enigmatic third radiation belt."
The research also points to the importance of these waves for reducing the space radiation threat to satellites during other space storms as well. "Space radiation poses a threat to the operation of the satellite infrastructure upon which our twenty-first century technological society relies," adds Mann. "Understanding how such radiation is energized and lost is one of the biggest challenges for space research."
For the last 50 years, and since the accidental discovery of the Van Allen belts at the beginning of the space age, forecasting this space radiation has become essential to the operation of satellites and human exploration in space.
The Van Allen belts, named after their discoverer, are regions within the magnetosphere where high-energy protons and electrons are trapped by Earth's magnetic field. Known since 1958, these regions were historically classified into two inner and outer belts. However, in 2013, NASA's Van Allen Probes reported an unexplained third Van Allen belt that had not previously been observed. This third Van Allen belt lasted only a few weeks before it vanished, and its cause remained inexplicable.
Mann is co-investigator on the NASA Van Allen Probes mission. One of his team's main objectives is to model the process by which plasma waves in the magnetosphere control the dynamics of the intense relativistic particles in the Van Allen belts—with one of the goals of the Van Allen Probes mission being to develop sufficient understanding to reach the point of predictability. The appearance of the third Van Allen belt, one of the first major discoveries of the Van Allen Probes era, had continued to puzzle scientists, with increasingly complex models being developed to explain it. However, the explanation announced today shows that once the effects of these huge ULF waves are included, everything falls into place.
3-D printing has become a powerful tool for engineers and designers, allowing them to do "rapid prototyping" by creating a physical copy of a proposed design.
But what if you decide to make changes? You may have to go back, change the design and print the whole thing again, perhaps more than once. So Cornell researchers have come up with an interactive prototyping system that prints what you are designing as you design it; the designer can pause anywhere in the process to test, measure and, if necessary, make changes that will be added to the physical model still in the printer.
"We are going from human-computer interaction to human-machine interaction," said graduate student Huaishu Peng, who described the On-the-Fly-Print system in a paper presented at the 2016 ACM Conference for Human Computer Interaction. Co-authors are François Guimbretière, associate professor of information science; Steve Marschner, professor of computer science; and doctoral student Rundong Wu.
Their system uses an improved version of an innovative "WirePrint" printer developed in a collaboration between Guimbretière's lab and the Hasso Plattner Institute in Potsdam, Germany.
In conventional 3-D printing, a nozzle scans across a stage depositing drops of plastic, rising slightly after each pass to build an object in a series of layers. With the WirePrint technique the nozzle extrudes a rope of quick-hardening plastic to create a wire frame that represents the surface of the solid object described in a computer-aided design (CAD) file. WirePrint aimed to speed prototyping by creating a model of the shape of an object instead of printing the entire solid. The On-the-Fly-Print system builds on that idea by allowing the designer to make refinements while printing is in progress.
Scientists can now detect magnetic behavior at the atomic level with a new electron microscopy technique developed by a team from the Department of Energy's Oak Ridge National Laboratory and Uppsala University, Sweden. The researchers took a counterintuitive approach by taking advantage of optical distortions that they typically try to eliminate.
"It's a new approach to measure magnetism at the atomic scale," ORNL's Juan Carlos Idrobo said. "We will be able to study materials in a new way. Hard drives, for instance, are made by magnetic domains, and those magnetic domains are about 10 nanometers apart." One nanometer is a billionth of a meter, and the researchers plan to refine their technique to collect magnetic signals from individual atoms that are ten times smaller than a nanometer.
"If we can understand the interaction of those domains with atomic resolution, perhaps in the future we will able to decrease the size of magnetic hard drives," Idrobo said. "We won't know without looking at it."
Researchers have traditionally used scanning transmission electron microscopes to determine where atoms are located within materials. This new technique allows scientists to collect more information about how the atoms behave.
"Magnetism has its origins at the atomic scale, but the techniques that we use to measure it usually have spatial resolutions that are way larger than one atom," Idrobo said. "With an electron microscope, you can make the electron probe as small as possible and if you know how to control the probe, you can pick up a magnetic signature."
The ORNL-Uppsala team developed the technique by rethinking a cornerstone of electron microscopy known as aberration correction. Researchers have spent decades working to eliminate different kinds of aberrations, which are distortions that arise in the electron-optical lens and blur the resulting images.
Ralph Merkle, Robert Freitas and others have a theoretical design for a molecular mechanical computer that would be 100 billion times more energy efficient than the most energy efficient conventional green supercomputer. Removing the need for gears, clutches, switches and springs makes the design easier to build.
Astronomers have discovered a vast cloud of high-energy particles called a wind nebula around a rare ultra-magnetic neutron star, or magnetar, for the first time. The find offers a unique window into the properties, environment and outburst history of magnetars, which are the strongest magnets in the universe.
A neutron star is the crushed core of a massive star that ran out of fuel, collapsed under its own weight, and exploded as a supernova. Each one compresses the equivalent mass of half a million Earths into a ball just 12 miles (20 kilometers) across, or about the length of New York’s Manhattan Island. Neutron stars are most commonly found as pulsars, which produce radio, visible light, X-rays and gamma rays at various locations in their surrounding magnetic fields. When a pulsar spins these regions in our direction, astronomers detect pulses of emission, hence the name.
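The figures quoted above imply an extraordinary density, which a quick calculation makes concrete (taking a 20-km-diameter sphere and half a million Earth masses, per the article):

```python
import math

# Density implied by the article's figures: the mass of half a million
# Earths compressed into a ball 20 km (12 miles) across.
M_EARTH = 5.972e24                 # kg
mass = 5e5 * M_EARTH               # neutron star mass in kg
radius = 10e3                      # metres (half of the 20 km diameter)
volume = 4.0 / 3.0 * math.pi * radius**3
density = mass / volume
print(f"{density:.1e} kg/m^3")     # vs ~5.5e3 kg/m^3 for Earth overall
```

That works out to roughly 7e17 kg/m^3 — about a hundred trillion times denser than Earth, consistent with matter packed to nuclear densities.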
Typical pulsar magnetic fields can be 100 billion to 10 trillion times stronger than Earth’s. Magnetar fields reach strengths a thousand times stronger still, and scientists don’t know the details of how they are created. Of about 2,600 neutron stars known, to date only 29 are classified as magnetars.
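Translating those multipliers into absolute field strengths (assuming a typical Earth surface field of about 0.5 gauss, or 5e-5 tesla, which the article does not state):

```python
# Convert the article's multipliers into tesla.
# Assumption: Earth's surface field is ~0.5 gauss = 5e-5 T.
EARTH_T = 5e-5

pulsar_lo = 1e11 * EARTH_T   # 100 billion times Earth's field
pulsar_hi = 1e13 * EARTH_T   # 10 trillion times Earth's field
magnetar_hi = 1000 * pulsar_hi

print(f"typical pulsar: {pulsar_lo:.0e} to {pulsar_hi:.0e} T")
print(f"magnetar: up to ~{magnetar_hi:.0e} T")
```

So pulsar fields span roughly 5e6 to 5e8 tesla, and magnetars reach on the order of 1e11 tesla — far beyond anything producible in a laboratory.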
The newly found nebula surrounds a magnetar known as Swift J1834.9-0846—J1834.9 for short—which was discovered by NASA’s Swift satellite on Aug. 7, 2011, during a brief X-ray outburst. Astronomers suspect the object is associated with the W41 supernova remnant, located about 13,000 light-years away in the constellation Scutum toward the central part of our galaxy.