Over 3,100 postings.
NOTE: To subscribe to the RSS feed of Amazing Science, copy http://www.scoop.it/t/amazing-science/rss.xml into the URL field of your browser and click "subscribe".
This newsletter is aggregated from over 1450 news sources:
All my Tweets and Scoop.It! posts sorted and searchable:
You can search through all the articles semantically on my
NOTE: All articles in the amazing-science newsletter can also be sorted by topic. To do so, click the FIND button (symbolized by the FUNNEL at the top right of the screen) to display all the relevant postings SORTED by TOPICS.
You can also type your own query:
e.g., you are looking for articles involving "dna" as a keyword
• 3D-printing • aging • AI • anthropology • art • astronomy • bigdata • bioinformatics • biology • biotech • chemistry • computers • cosmology • education • environment • evolution • future • genetics • genomics • geosciences • green-energy • history • language • map • material-science • math • med • medicine • microscopy • nanotech • neuroscience • paleontology • photography • photonics • physics • postings • robotics • science • technology • video
Back in October of 2015, astronomers shook the world when they reported how the Kepler mission had noticed a strange and sudden drop in brightness coming from KIC 8462852 (aka. Tabby's Star). This was followed by additional studies that showed how the star appeared to be consistently dimming over time. All of this led to a flurry of speculation, with possibilities ranging from large asteroids and a debris disc to an alien megastructure.
But in what may be the greatest explanation yet, a team of researchers from Columbia University and the University of California, Berkeley, has suggested that the star's strange flickering could be the result of a planet it consumed at some point in the past. This would have produced a large outburst of brightness from which the star is now recovering, and the remains of this planet could be transiting in front of the star, causing the periodic drops.
For the sake of their study – titled "Secular dimming of KIC 8462852 following its consumption of a planet", which is scheduled to appear in the Monthly Notices of the Royal Astronomical Society – the team took the initial Kepler findings, which showed sudden drops of 15% and 22% in brightness. They then considered subsequent studies that took a look at the long-term behavior of Tabby's Star (both of which were published in 2016).
The first study, conducted by Bradley Schaefer of Louisiana State University, showed a decrease of 14% between the years 1890 and 1989. The second study, conducted by Benjamin Montet and Joshua Simon (of Caltech and the Carnegie Institution of Washington, respectively), showed how the star faded by 3% over the course of the four years that Kepler continuously viewed it.
They then attempted to explain this behavior using the Kozai mechanism (aka. the Kozai effect, or Lidov-Kozai mechanism), a long-standing dynamical framework in astronomy describing how a perturbing body drives coupled oscillations in a planet's orbital eccentricity and inclination. Applied to KIC 8462852, they determined that the star likely consumed a planet (or planets) in the past, probably around 10,000 years ago. This process would have caused a temporary brightening from which the star is now returning to normal (thus explaining the long-term trend). They further determined that the periodic drops in brightness could be caused by the remnants of this planet passing in front of the star on high-eccentricity orbits, thus accounting for the sudden changes.
Their calculations also put mass constraints on the planet (or planets) consumed. By their estimates, it was either a single Jupiter-sized planet, or a large number of smaller objects – such as moon-mass bodies that were about 1 km in diameter. This latter possibility seems more inviting, since a large number of objects would have produced a field of debris that would be more consistent with the dimming rate observed by previous studies. These results are not only the best explanation of this star's strange behavior, they could have serious implications for the study of stellar evolution – in which stars gobble up some of their planets over time.
Researchers at EPFL and UNIL have discovered a faster and more efficient gait, never observed in nature, for six-legged robots walking on flat ground. The tripod gaits that inspired these robots are used by real insects because their adhesive pads let them walk in three dimensions; on flat ground, those same gaits turn out to be less efficient for robots. The results provide novel approaches for roboticists and new information for biologists.
When vertebrates run, their legs exhibit minimal contact with the ground. But insects are different. These six-legged creatures run fastest using a three-legged, or “tripod” gait where they have three legs on the ground at all times – two on one side of their body and one on the other. The tripod gait has long inspired engineers who design six-legged robots, but is it necessarily the fastest and most efficient way for bio-inspired robots to move on the ground?
Researchers at EPFL and UNIL revealed that there is in fact a faster way for robots to locomote on flat ground, provided they don’t have the adhesive pads used by insects to climb walls and ceilings. This suggests designers of insect-inspired robots should make a break with the tripod-gait paradigm and instead consider other possibilities including a new locomotor strategy denoted as the “bipod” gait. The researchers’ findings are published in Nature Communications.
The scientists carried out a host of computer simulations, tests on robots and experiments on Drosophila melanogaster – the most commonly studied insect in biology. “We wanted to determine why insects use a tripod gait and identify whether it is, indeed, the fastest way for six-legged animals and robots to walk,” said Pavan Ramdya, co-lead and corresponding author of the study. To test the various combinations, the researchers used an evolutionary-like algorithm to optimize the walking speed of a simulated insect model based on Drosophila. Step-by-step, this algorithm sifted through many different possible gaits, eliminating the slowest and shortlisting the fastest.
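The optimization loop the article describes can be sketched with a toy evolutionary algorithm. Everything below — the gait encoding, the stability rule, and the fitness function — is a simplified assumption for illustration, not the authors' actual simulation of the Drosophila model:

```python
import random

# Toy encoding: a gait is a binary matrix of 6 legs x 4 phases,
# where 1 means that leg touches the ground during that phase.
def fitness(gait):
    score = 0.0
    for phase in zip(*gait):          # iterate over the 4 phases
        down = sum(phase)
        if down < 2:                  # fewer than 2 stance legs: unstable
            return 0.0
        score += 6 - down             # fewer stance legs -> longer strides
    return score

def mutate(gait, rate=0.1):
    # flip each bit with probability `rate`
    return [[1 - g if random.random() < rate else g for g in leg]
            for leg in gait]

random.seed(42)
population = [[[random.randint(0, 1) for _ in range(4)] for _ in range(6)]
              for _ in range(30)]

for generation in range(200):
    # elitist selection: keep the fastest half, refill with mutated copies
    population.sort(key=fitness, reverse=True)
    survivors = population[:15]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(15)]

best = max(population, key=fitness)
print(fitness(best))   # fitness of the best gait found
```

As in the study's algorithm, each round eliminates the slowest candidates and shortlists the fastest; here the surviving gaits always keep at least two legs on the ground in every phase.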
Lars Chittka didn’t expect much when he decided to see if bumblebees could learn to pull a string for a reward. While animals from birds to apes can solve this puzzle, it seemed unlikely that bees could solve it too because they have such tiny brains. “I asked what may have seemed an entirely mad question,” says Chittka, a behavioral ecologist at Queen Mary University of London.
But it turned out not to be mad in the slightest. In new research reported in PLOS Biology, Chittka and his colleagues got a “big surprise”: they found that bumblebees could easily be trained to pull strings for sugar water.
First the researchers attached strings to blue discs with sugar water in the middle, and then let the bees learn that these fake flowers held a reward. The next step was putting the flowers under plexiglass – only the very tips of the strings were within reach. This was the first test of string pulling in an insect.
With this training, more than half of the bees solved the puzzle, vigorously pulling the string toward them until they could drink the sweet reward in the flower. Another experiment showed that while untrained bees rarely learned this skill on their own, a few actually did. “This was even more of a surprise,” Chittka says.
The researchers also found that this new skill spread socially and culturally from bee to bee. After watching trained bees demonstrate their string-pulling prowess, 60 percent of untrained bees solved the problem on their own. And adding a single trained bee to a colony of untrained bees was enough for many of them to acquire the skill.
“This was the final surprise – there is still a common perception that humans, and especially the cultural processes seen in humans, are unique in their cognitive performances,” Chittka says. “It’s tempting to assume that a large brain is a prerequisite for such phenomena.” But, as his work shows, problem solving and cultural transmission don’t necessarily take much brainpower.
Even those who follow science may be surprised by how quickly international collaboration in scientific studies is growing, according to new research.
The number of multiple-author scientific papers with collaborators from more than one country more than doubled from 1990 to 2015, from 10 to 25 percent, one study found. And 58 more countries participated in international research in 2015 than did so in 1990.
"Those are astonishing numbers," said Caroline Wagner, associate professor in the John Glenn College of Public Affairs at The Ohio State University, who helped conduct these studies.
"In the 20th century, we had national systems for conducting research. In this century, we increasingly have a global system."
Wagner presented her research Feb. 17 in Boston at the annual meeting of the American Association for the Advancement of Science. Even though Wagner has studied international collaboration in science for years, the way it has grown so quickly and widely has surprised even her.
One unexpected finding was that international collaboration has grown in all fields she has studied. One would expect more cooperation in fields like physics, where expensive equipment (think supercolliders) encourages support from many countries. But in mathematics?
"You would think that researchers in math wouldn't have a need to collaborate internationally - but I found they do work together, and at an increasing rate," Wagner said. "The methods of doing research don't determine patterns of collaboration. No matter how scientists do their work, they are collaborating more across borders."
In a study published online last month in the journal Scientometrics, Wagner and two co-authors (who are both from The Netherlands) examined the growth in international collaboration in six fields: astrophysics, mathematical logic, polymer science, seismology, soil science and virology.
Respiratory syncytial virus (RSV) is the most important cause of acute lower respiratory tract infection in very young children. Disease caused by RSV is very contagious, and almost everyone is infected with RSV by the age of two years. Infections also recur throughout life. In the very young (from birth until the age of two years), the virus can cause severe respiratory tract disease characterized by bronchiolitis (inflammation of the bronchioles), pneumonia, and apnea (temporary cessation of breathing). One percent of RSV-infected children below the age of six months require hospitalization. In the USA, 100,000 children are hospitalized each year due to RSV, and 4,500 children die from the infection. Worldwide, RSV causes 180,000 deaths each year. There is no vaccine against RSV and only one specific antiviral drug, so supportive treatment remains the main medical option for RSV-infected patients.
RSV annually causes nearly 34 million illnesses in children under 5 years of age and can result in serious illness in both very young children and elderly people leading to hospitalization in up to 2% of cases. Despite intensive research and the virus' status as a major pathogen, current methods of treatment rely almost exclusively on supportive care. With the goal of developing a new therapy to fight this disease, Prof. Xavier Saelens (VIB-UGent) and his team developed Nanobodies® that target the protein that the virus needs to enter lung cells. The researchers showed that these Nanobodies® neutralized the virus in laboratory assays as well as in animals.
The approach hinges on the use of single-domain antibodies, also known as Nanobodies®, which target and neutralize a vital protein in the virus, rendering it unable to enter lung cells. The research, published in the leading scientific journal Nature Communications, elucidates how these Nanobodies® interact with and neutralize the virus and demonstrates their ability to successfully protect mice from RSV infection and related inflammation.
A new research study has found that around 60 percent of the 500 known primate species are on the verge of extinction, and around 75 percent of all primate species are declining in numbers. The findings have alarmed the scientific community, which is now calling for greater global awareness of the plight of the world’s primates and of the costs of their loss to ecosystem health and human society.
Primates are essential to biodiversity in the tropics, contribute to natural regeneration and are important to many cultures and religions. In order to study the effect human activity has on primate survival, researchers combined data from the Red List of the International Union for the Conservation of Nature (IUCN) with data from the United Nations (UN) database.
Based on this data, the researchers established forecasts and development trends for the next 50 years. They found that primates’ natural habitats are concentrated in regions marked by high levels of poverty and limited access to education. As a result, locals tend to hunt primates for meat and to supply the illegal pet trade. The scientists predict that within the next fifty years we will see extinction events for many primate species, such as the ring-tailed lemur, the eastern gorilla, and baboons.
“Conservation is an ecological, cultural and social necessity. When our closest relatives, the non-human primates, become extinct, this will send a warning signal that the living conditions for humans will soon deteriorate dramatically,” says Eckhard W. Heymann, a scientist at the German Primate Center (DPZ) and a co-author of the study.
Insights from the mathematical genius Srinivasa Ramanujan give us a number of ways to explore the infinite.
The work that Ramanujan did in his brief professional life a century ago has spawned whole new areas of mathematical investigation, kept top mathematicians busy for their whole professional lives, and is finding applications in computer science, string theory, and the mathematical basis of black hole physics.
The mathematician Mark Kac divided all geniuses into two types: “ordinary” geniuses, who make you feel that you could have done what they did if you were, say, a hundred times smarter, and “magical geniuses,” the working of whose minds is, for all intents and purposes, incomprehensible. There is no doubt that Srinivasa Ramanujan was a magical genius, one of the greatest of all time. Just looking at any of his almost 4,000 original results can inspire a feeling of bewilderment and awe even in professional mathematicians: What kind of mind can dream up exotic gems like these?
Ramanujan indeed had preternatural insights into infinity: he was a consummate bridge builder between the finite and the infinite, finding ways to represent numbers in the form of infinite series, infinite sums and products, infinite integrals, and infinite continued fractions, an area in which, in the words of Hardy, his mastery was “beyond that of any mathematician in the world.” While most of Ramanujan’s results are far beyond the scope of this column, it turns out that we can get a flavor for some simple infinite forms using nothing more than middle-school algebra. Let’s embark on a journey to the infinite.
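One of the simple infinite forms the column alludes to really does yield to middle-school algebra: the infinite continued fraction x = 1 + 1/(1 + 1/(1 + ...)) must satisfy x = 1 + 1/x, i.e. x² − x − 1 = 0, whose positive root is the golden ratio (1 + √5)/2. A short numerical sketch confirms the convergence:

```python
from math import sqrt

# Iterate the recurrence x -> 1 + 1/x, which truncates the infinite
# continued fraction 1 + 1/(1 + 1/(1 + ...)) at ever greater depth.
x = 1.0
for _ in range(50):
    x = 1 + 1 / x

golden_ratio = (1 + sqrt(5)) / 2   # positive root of x^2 - x - 1 = 0
print(x, golden_ratio)             # the iterates converge to the golden ratio
```

The same self-referential trick — naming the infinite object, then solving the finite equation it satisfies — is the bridge between the finite and the infinite that Ramanujan crossed so effortlessly.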
The design of 2D or 3D geometries is involved in one way or another in most science, art and engineering activities. Modern design tools are powerful and boost designers' productivity, but they require considerable training, effort and time to understand and use efficiently.
LAI4D is an R+D project whose aim is to develop an artificial intelligence able to understand the user's ideas regarding spatial imagination. This technology will help improve the communication between designers and design tools by emulating human cognitive abilities for interpreting graphic representations of geometric ideas and other capabilities.
The implementation has been conceived as a dual web application that can work as a 3D rendering widget or as a free, lightweight 3D CAD tool, providing an adequate environment for the project. This CAD tool incorporates a special design assistant in charge of extracting conceptual geometries from pictures or sketches provided by the user as input. Additionally, LAI4D tries to reduce the inherent complexity of professional design tools, which, despite being suitable for experienced users, are almost unreachable for people who are not trained in the use of CAD systems and need only occasional access. The selected implementation approach not only gives users easy access to the tool, but is also an excellent means of building a community of designers who will provide the feedback the system needs to grow bigger and smarter. Follow this link to try the LAI4D designer.
Beginner's tutorial: This tutorial is the recommended introduction for all persons new to LAI4D. It is intended to teach the basics of design in 20 minutes. It shows step by step how to create a simple 3D geometry from the idea up to the publishing of the drawing on the Internet using the easiest tools. Thanks to this exercise the user will understand the working philosophy of LAI4D, will be able to generate polyhedral surfaces and polygonal lines with colors, and will learn to share designs through the free online sharing service of LAI4D. The created geometry can be inspected in the link: cubicle_sample.
Life was long thought to obey its own set of rules. But as simple systems show signs of lifelike behavior, scientists are arguing about whether this apparent complexity is all a consequence of thermodynamics.
Living organisms seem rather like Maxwell’s demon. Whereas a beaker full of reacting chemicals will eventually expend its energy and fall into boring stasis and equilibrium, living systems have collectively been avoiding the lifeless equilibrium state since the origin of life about three and a half billion years ago. They harvest energy from their surroundings to sustain this nonequilibrium state, and they do it with “intention.” Even simple bacteria move with “purpose” toward sources of heat and nutrition. In his 1944 book What is Life?, the physicist Erwin Schrödinger expressed this by saying that living organisms feed on “negative entropy.”
They achieve it, Schrödinger said, by capturing and storing information. Some of that information is encoded in their genes and passed on from one generation to the next: a set of instructions for reaping negative entropy. Schrödinger didn’t know where the information is kept or how it is encoded, but his intuition that it is written into what he called an “aperiodic crystal” inspired Francis Crick, himself trained as a physicist, and James Watson when in 1953 they figured out how genetic information can be encoded in the molecular structure of DNA.
A genome, then, is at least in part a record of the useful knowledge that has enabled an organism’s ancestors — right back to the distant past — to survive on our planet. According to David Wolpert, a mathematician and physicist at the Santa Fe Institute who convened the recent workshop, and his colleague Artemy Kolchinsky, the key point is that well-adapted organisms are correlated with their environment. If a bacterium swims dependably toward the left or the right when there is a food source in that direction, it is better adapted, and will flourish more, than one that swims in random directions and so only finds the food by chance. A correlation between the state of the organism and that of its environment implies that they share information in common. Wolpert and Kolchinsky say that it’s this information that helps the organism stay out of equilibrium — because, like Maxwell’s demon, it can then tailor its behavior to extract work from fluctuations in its surroundings. If it did not acquire this information, the organism would gradually revert to equilibrium: It would die.
Looked at this way, life can be considered as a computation that aims to optimize the storage and use of meaningful information. And life turns out to be extremely good at it. Landauer’s resolution of the conundrum of Maxwell’s demon set an absolute lower limit on the amount of energy a finite-memory computation requires: namely, the energetic cost of forgetting. The best computers today are far, far more wasteful of energy than that, typically consuming and dissipating more than a million times more. But according to Wolpert, “a very conservative estimate of the thermodynamic efficiency of the total computation done by a cell is that it is only 10 or so times more than the Landauer limit.”
The implication, he said, is that “natural selection has been hugely concerned with minimizing the thermodynamic cost of computation. It will do all it can to reduce the total amount of computation a cell must perform.” In other words, biology (possibly excepting ourselves) seems to take great care not to overthink the problem of survival. This issue of the costs and benefits of computing one’s way through life, he said, has been largely overlooked in biology so far.
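Wolpert's comparison can be made concrete with a back-of-the-envelope calculation. The temperature and the "million times" multiplier below are rough assumptions drawn from the text, not precise measurements:

```python
from math import log

# Landauer's principle: erasing one bit of memory costs at least
# k_B * T * ln(2) joules of dissipated energy.
k_B = 1.380649e-23          # Boltzmann constant, J/K
T = 300.0                   # assumed ambient temperature, K

landauer_limit = k_B * T * log(2)   # minimum energy per erased bit, J
print(landauer_limit)               # on the order of 3e-21 J

# Per the article: a cell computes within ~10x of this bound, while
# today's computers dissipate over a million times more than it.
per_bit_cell = 10 * landauer_limit
per_bit_cpu = 1e6 * landauer_limit
print(per_bit_cpu / per_bit_cell)
```

The gap between the two ratios is what makes Wolpert's point striking: by this estimate a cell sits five orders of magnitude closer to the thermodynamic floor than our best hardware.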
If you take 10 communities and run a simulation, it’s easy to see why we need as many members of the ‘herd’ as possible to get vaccines – before it’s too late.
Measles is back in the US – and spreading. More than 100 cases across 14 states and Washington DC have been confirmed by US health officials since an outbreak began at Disneyland last December. With a majority of those infections in unvaccinated people, widespread blame – from Washington to the rest of the world – has fallen on parents who chose not to vaccinate their children.
Part of the problem, according to Dr Elizabeth Edwards, professor of pediatrics and director of the Vanderbilt Vaccine Research Program, is just that: vaccination is understood by many as an individual choice, when science makes clear that the choice – to vaccinate or not to vaccinate – can affect an entire community.
“When you immunize your child, you’re not only immunizing your child. That child’s immunization is contributing to the control of the disease in the population,” Edwards explained. That sheltering effect is called herd immunity: a population that is highly immunized makes for a virus that can’t spread easily, providing protection to the community – or the herd – as a whole.
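The "herd" arithmetic can be sketched with the standard threshold formula from a simple SIR-style model. The measles R0 value below is a commonly cited range, not a figure from this article:

```python
# Herd-immunity threshold: once more than 1 - 1/R0 of the population is
# immune, each case infects fewer than one new person on average, so
# outbreaks die out on their own.
def herd_immunity_threshold(r0):
    return 1 - 1 / r0

def effective_r(r0, immunized_fraction):
    # each case can only infect the susceptible share of its contacts
    return r0 * (1 - immunized_fraction)

r0_measles = 15   # assumed; measles R0 is often quoted as 12-18
print(herd_immunity_threshold(r0_measles))   # need roughly 93% coverage
print(effective_r(r0_measles, 0.95))         # below 1: outbreak fizzles
print(effective_r(r0_measles, 0.80))         # well above 1: outbreak grows
```

This is why pockets of under-immunization matter: a community at 80% coverage still lets each measles case seed several new ones, even when the national average looks safe.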
Despite the high overall measles vaccination rate in the US, vaccine skeptics – and their unimmunized kids – often congregate in like-minded communities, creating pockets of under-immunization.
California, where the bulk of current measles cases can still be found, is a prime example. It's one of 20 states that allow parents to skip vaccination based on their personal, philosophical beliefs – though legislators have now introduced a bill that would ban such an opt-out provision.
It's only a matter of time before we're able to control things with our minds using brain-computer interfaces – and even communicate with each other using only our minds. Think telepathy is just fiction? It's not.
It was when we tried virtual reality (VR) for the first time that we realized our method of controlling computers is likely to change. When you realize your VR headset is acting as a pointer in virtual reality, and you begin using your head like a mouse without even thinking about it, you start to grasp that there are much easier ways to control computers. Take foveated rendering (eye tracking) as an example. Soon computers will know exactly where you are looking – blink twice for double click!
In November 2016, we posted an article, “Brain Implants that Augment the Human Brain Using AI,” in which we talked about exploring the hippocampus to solve brain disorders associated with memory loss.
A Brain Computer Interface (BCI) or Brain Machine Interface (BMI) has numerous definitions, but the common elements found among all the definitions are as follows:
If the initiatives are to be classified according to purpose (whether for research, clinical trials or investments), brain interface technology development can be grouped into the 3 following areas:
Much work has been done by Dr. Theodore W. Berger on neurobiological issues related to the hippocampus, primarily around implants to explore the signal processing of hippocampal neurons. As it turns out, the hippocampus stores our short-term memory, working much like the RAM in a computer. This is the motivation behind Bryan Johnson’s $100 million investment in his brain augmentation startup Kernel.
Microscopic machines that swim through the bloodstream to deliver drugs or perform minor surgeries have been a dream of scientists for decades. In the past 15 years researchers have created micro-engine variants that rely on chemical reactions, magnetism or vibration for thrust—but they often motor around erratically. The main challenge is guiding them to where they are needed, says University of Hong Kong chemist Jinyao Tang. Tang and his team have made progress on that front with a micro swimmer that can be smoothly and precisely steered with the help of light.
As reported in the December 2016 Nature Nanotechnology, the researchers built bottlebrush-shaped microparticles with silicon stems and titanium dioxide “bristle” heads. Both materials absorb photons, so when light is shined on the microparticle, the stem generates negative hydroxide charges and the bristles produce positive hydrogen ions. As the ions move to balance the uneven distribution of charge, they pull fluid with them, causing the micro swimmer to move toward the light—stem-first, like a dart.
As a test, researchers placed a swimmer in liquid on a glass slide and guided it with ultraviolet light to spell out the word “nano.” The 11-micron-long motor could cover about a millimeter in two minutes—slow for medical applications—but Tang says they are now designing new geometries to speed up the swimmers. “This unique way of precisely controlling speed and direction is amazing,” says Sámuel Sánchez, a nanoroboticist at the Max Planck Institute for Intelligent Systems in Stuttgart, who was not involved in the research.
This work is an early glimpse at medical robots that doctors could navigate through a patient's body from the outside with a focused beam of light, Tang says. The devices currently run on ultraviolet light—but the researchers are now working on micro swimmers that respond to a near-infrared wavelength, which can penetrate a few centimeters of tissue. For applications deeper in the body, surgeons could control the bots with optical fibers.
Engineers at the University of California San Diego have developed a material that could reduce signal losses in photonic devices. The advance has the potential to boost the efficiency of various light-based technologies including fiber optic communication systems, lasers and photovoltaics.
The discovery addresses one of the biggest challenges in the field of photonics: minimizing loss of optical (light-based) signals in devices known as plasmonic metamaterials. Plasmonic metamaterials are materials engineered at the nanoscale to control light in unusual ways. They can be used to develop exotic devices ranging from invisibility cloaks to quantum computers. But a problem with metamaterials is that they typically contain metals that absorb energy from light and convert it into heat. As a result, part of the optical signal gets wasted, lowering the efficiency.
In a recent study published in Nature Communications, a team of photonics researchers led by electrical engineering professor Shaya Fainman at the UC San Diego Jacobs School of Engineering demonstrated a way to make up for these losses by incorporating into the metamaterial something that emits light—a semiconductor. "We're offsetting the loss introduced by the metal with gain from the semiconductor. This combination theoretically could result in zero net absorption of the signal—a 'lossless' metamaterial," said Joseph Smalley, an electrical engineering postdoctoral scholar in Fainman's group and the first author of the study.
In their experiments, the researchers shined light from an infrared laser onto the metamaterial. They found that depending on which way the light is polarized—which plane or direction (up and down, side to side) all the light waves are set to vibrate—the metamaterial either reflects or emits light.
"This is the first material that behaves simultaneously as a metal and a semiconductor. If light is polarized one way, the metamaterial reflects light like a metal, and when light is polarized the other way, the metamaterial absorbs and emits light of a different 'color' like a semiconductor," Smalley said.
The Earth's geomagnetic field increased in intensity around the Levant during the late eighth century B.C. before rapidly weakening.
The Earth is surrounded by a magnetic field that arises from the motion of iron in the liquid outer core. Direct observation of the field has been possible for only about 180 years, Ben-Yosef told Live Science. In that time, the field has weakened by about 10 percent, he said. Some researchers think the field might be in the process of flipping, so that magnetic north becomes magnetic south and vice versa.
The new study reveals much faster changes in intensity. There was a spike in intensity during the late eighth century B.C., culminating in a rapid decline after about 732 B.C., Ben-Yosef and his colleagues reported today (Feb. 13) in the journal Proceedings of the National Academy of Sciences. In a mere 31 years beginning in 732 B.C., there was a 27 percent decrease in the strength of the magnetic field, the researchers found. From the sixth century B.C. to the second century B.C., the field was generally stable, with a slight gradual decline.
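The two decline rates can be put on a common footing with a back-of-the-envelope annualized calculation, assuming (purely for illustration) a constant compound rate over each interval:

```python
# Fraction of field strength lost per year, assuming geometric decay
# over the whole interval.
def annual_rate(total_decline, years):
    return 1 - (1 - total_decline) ** (1 / years)

ancient = annual_rate(0.27, 31)    # 27% drop in the 31 years after 732 B.C.
modern = annual_rate(0.10, 180)    # ~10% drop over ~180 years of records
print(ancient, modern, ancient / modern)
```

Under this rough assumption, the eighth-century B.C. decline was roughly one percent per year, well over an order of magnitude faster than the modern decline, which is the sense in which the current trend looks unremarkable.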
"Our research shows that the field is very fluctuating," Ben-Yosef said. "It fluctuates quite rapidly, so there is nothing to worry about," as far as the current decline goes, he said. This doesn't mean that the magnetic field isn't going to flip in the near future; the new study looked only at the strength of the field, not its direction. The findings do suggest that there's no reason to think a 10 percent decline in field strength over more than a century is abnormal, Ben-Yosef said.
At least in the Levant, that is. All of the pottery in the study came from this region, which encompasses what is now Syria, Jordan, Israel, Palestine, Lebanon and nearby areas. That means researchers can't be sure whether the same fluctuations were happening elsewhere. Because the scientists also don't know for sure the precise locations within the Levant where the pottery was fired, they can't say anything about the direction of the geomagnetic field at the time, only its strength.
Our close cousins definitely ate each other, but no one knows why.
Neandertals ate each other—at least once in a while—according to a new analysis of bones unearthed in a Belgian cave. The remains were excavated near Goyet beginning in the 19th century and now sit in museums in Brussels. The outdated excavation techniques make it impossible to reconstruct how these Neandertals lived, but when researchers examined the bones, it was unmistakably clear what happened to them after they died. Many of the bones were covered in cut marks and dents caused by pounding, indicating that the meat and marrow had been removed. The researchers also spotted what appear to be bite marks running up and down finger bones. The marks were identical to those found on reindeer and horse bones also uncovered at the site, suggesting all three species were prepared and eaten, the researchers report this week in Scientific Reports.
A few of the Neandertal bones also showed additional wear and tear, suggesting they were later used to shape stone tools. The bones are between 40,500 and 45,500 years old, which is before Homo sapiens arrived in the region, so the only possible culprits are the Neandertals themselves. Although scientists knew that Neandertals had practiced cannibalism in Croatia, this is the first evidence of it in northern Europe. No one yet knows if Neandertal cannibalism was a ritual practice, reserved for special occasions and imbued with special meaning, or if they were just really, really hungry.
Long-dormant microbes are found inside giant crystals of the Naica mountain caves - and revived.
Scientists have extracted long-dormant microbes from inside the famous giant crystals of the Naica mountain caves in Mexico - and revived them. The organisms were likely to have been encased in the striking shafts of gypsum at least 10,000 years ago, and possibly up to 50,000 years ago.
It is another demonstration of the ability of life to adapt and cope in the most hostile of environments. "Other people have made longer-term claims for the antiquity of organisms that were still alive, but in this case these organisms are all very extraordinary - they are not very closely related to anything in the known genetic databases," said Dr Penelope Boston.
The new director of Nasa's Astrobiology Institute in Moffett Field, California, described her findings here at the annual meeting of the American Association for the Advancement of Science (AAAS). First opened a hundred years ago by miners looking for silver and other metals, the deeply buried Naica caves are of key interest to scientists fascinated by extremophiles - microbes that can thrive in seemingly impossible conditions.
The environment is hot (40-60°C), humid and acidic. With no light at depth, any lifeform must chemosynthesise to survive. That is, it must derive the energy needed to sustain itself by processing rock minerals.
Researchers had identified microbes living in the walls of the caves, but isolating them from inside the metres-long crystals is a surprise. These outsized needles of gypsum have grown over millions of years. They are not perfect. In places they have defects - small voids where fluids have collected and become encased.
Via Kathy Bosiak
Leaders in the United Arab Emirates aim to build a human settlement on Mars by 2117, a research project the government promises will bring benefits to generations of people.
UAE Vice President and Prime Minister Sheikh Mohammed bin Rashid Al Maktoum and Abu Dhabi Crown Prince Sheikh Mohamed bin Zayed Al Nahyan started the Mars 2117 Project on Tuesday. The goal of the century-long effort is not just to inhabit the Red Planet, but to further space science research.
"We expect future generations to reap the benefits, driven by its passion to learn to unveil a new knowledge," Sheikh Mohammed said. "The landing of people on other planets has been a longtime dream for humans. Our aim is that the UAE will spearhead international efforts to make this dream a reality."
The government said UAE scientists will start the project and build a coalition of international scientists to "speed" it up. One of the aims of the plan is to make the research available to institutions across the globe, an effort to make the research beneficial to those on Earth as it relates to transportation, energy and food.
The first step of the project is to "achieve scientific breakthroughs to facilitate the arrival of humans to the Red Planet in the next decades." Part of the government's goal also is to expedite travel to and from Mars, as well as to establish what the settlement will look like and how people will move and eat once they arrive.
Also at the summit, a team of UAE engineers presented a concept for a Mars city built entirely by robots. The showcase featured "the expected lifestyle on Mars in terms of transport, power production and providing food, as well as infrastructure works and materials used for the construction of the city."
The UAE isn't new to Mars exploration. The country announced in 2015 it would send a spacecraft to Mars, which would land in 2021. UAE stated the spacecraft would be the first from the Arab world to go to Mars. "Human ambitions have no limits," Sheikh Mohammed said, "and whoever looks into the scientific breakthroughs in the current century believes that human abilities can realize the most important human dream."
Astronomers today released the largest-ever compilation of exoplanet-detecting observations made with a technique called radial velocity. They also demonstrated how these observations can be used to hunt for exoplanets by detecting 114 potential exoplanets, including one orbiting a star 8.1 light-years away.
The radial velocity method is one of the most successful techniques for finding and confirming exoplanets.
It takes advantage of the fact that in addition to an exoplanet being influenced by the gravity of the star it orbits, the exoplanet’s gravity also affects the star. Astronomers are able to use sophisticated tools to detect the tiny wobble the exoplanet induces as its gravity tugs on the parent star. The newly available observations were taken by the High Resolution Echelle Spectrometer (HIRES), an instrument mounted on the Keck Observatory’s 10-m telescope on Mauna Kea in Hawaii.
HIRES is designed to split a star’s incoming light into a rainbow of color components. Astronomers can then measure the precise intensity of thousands of color channels, or wavelengths, to determine characteristics of the starlight.
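The size of that stellar wobble can be estimated from Kepler's laws. As a minimal sketch (not the HIRES analysis pipeline; just the standard radial-velocity semi-amplitude formula with illustrative values for a Jupiter-like planet orbiting a Sun-like star):

```python
import math

# Radial-velocity semi-amplitude K (m/s): the line-of-sight velocity
# wobble a planet induces on its host star.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
M_JUP = 1.898e27   # Jupiter mass, kg
YEAR = 3.156e7     # seconds per year

def rv_semi_amplitude(m_planet, m_star, period_s, e=0.0, sin_i=1.0):
    """K = (2*pi*G/P)^(1/3) * m_p*sin(i) / (M_star + m_p)^(2/3) / sqrt(1-e^2)."""
    return ((2 * math.pi * G / period_s) ** (1 / 3)
            * m_planet * sin_i
            / (m_star + m_planet) ** (2 / 3)
            / math.sqrt(1 - e ** 2))

# A Jupiter analog (11.86-year orbit) tugs a Sun-like star by roughly
# 12-13 m/s, so an instrument like HIRES must resolve Doppler shifts
# at the metre-per-second level.
k = rv_semi_amplitude(M_JUP, M_SUN, 11.86 * YEAR)
print(f"{k:.1f} m/s")
```

This is why the method favors massive planets on short orbits: K grows with planet mass and shrinks with orbital period.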
For two decades, they have pointed HIRES at 1,624 nearby stars, all within a relatively close 325 light-years from Earth.
The instrument has recorded 60,949 observations, each lasting anywhere from 30 sec to 20 min, depending on how precise the measurements needed to be. With all these data compiled, any given star in the dataset can have several days’, years’, or even more than a decade’s worth of observations.
“This dataset will slowly grow, and you’ll be able to go on and search for whatever star you’re interested in and download all the data we’ve ever taken on it,” said Dr. Jennifer Burt, an astronomer at MIT’s Kavli Institute for Astrophysics and Space Research and co-author of a paper accepted for publication in the Astronomical Journal.
Accurately modeling climate change and interactive human factors -- including inequality, consumption, and population -- is essential for the effective science-based policies and measures needed to benefit and sustain current and future generations. A recent study presents extensive evidence of the need for a new paradigm of modeling that fully incorporates the feedbacks between Earth systems and human systems.
A new scientific paper by a University of Maryland-led international team of distinguished scientists, including five members of the National Academies, argues that there are critical two-way feedbacks missing from current climate models that are used to inform environmental, climate, and economic policies. The most important inadequately modeled variables are inequality, consumption, and population.
In this research, the authors present extensive evidence of the need for a new paradigm of modeling that incorporates the feedbacks that the Earth System has on humans, and propose a framework for future modeling that would serve as a more realistic guide for policymaking and sustainable development.
The study explains that the Earth System (e.g., atmosphere, ocean, land, and biosphere) provides the Human System (e.g., humans and their production, distribution, and consumption) not only the sources of its inputs (e.g., water, energy, biomass, and materials) but also the sinks (e.g., atmosphere, oceans, rivers, lakes, and lands) that absorb and process its outputs (e.g., emissions, pollution, and other wastes).
Titled "Modeling Sustainability: Population, Inequality, Consumption, and Bidirectional Coupling of the Earth and Human Systems", the paper describes how the rapid growth in resource use, land-use change, emissions, and pollution has made humanity the dominant driver of change in most of the Earth's natural systems, and how these changes, in turn, have critical feedback effects on humans with costly and serious consequences, including on human health and well-being, economic growth and development, and even human migration and societal conflict. However, the paper argues that these two-way interactions ("bidirectional coupling") are not included in the current models.
The Oxford University Press's multidisciplinary journal National Science Review, which published the paper, has highlighted the work in its current issue, pointing out that "the rate of change of atmospheric concentrations of CO2, CH4, and N2O [the primary greenhouse gases] increased by over 700, 1000, and 300 times (respectively) in the period after the Green Revolution when compared to pre-industrial rates."
"Many datasets, for example, the data for the total concentration of atmospheric greenhouse gases, show that human population has been a strong driver of the total impact of humans on our planet Earth. This is seen particularly after the two major accelerating regime shifts: Industrial Revolution (~1750) and Green Revolution (~1950)" said Safa Motesharrei, UMD systems scientist and lead author of the paper. "For the most recent time, we show that the total impact has grown on average ~4 percent between 1950 and 2010, with almost equal contributions from population growth (~1.7 percent) and GDP per capita growth (~2.2 percent). This corresponds to a doubling of the total impact every ~17 years. This doubling of the impact is shockingly rapid."
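Those growth figures are internally consistent and easy to check. A quick sketch (assuming, as the ~17-year doubling implies, that the ~4 percent figure is an average annual growth rate, and that total impact scales as population times GDP per capita):

```python
import math

# Average annual growth rates for 1950-2010, as reported in the study
population_growth = 0.017       # ~1.7% per year
gdp_per_capita_growth = 0.022   # ~2.2% per year

# If total impact ~ population x GDP per capita, the growth rates add
total_growth = population_growth + gdp_per_capita_growth  # ~3.9%, i.e. ~4%

# Doubling time under compound growth at rate r: ln(2) / ln(1 + r)
doubling_time = math.log(2) / math.log(1 + total_growth)

# ~18 years, close to the paper's "~17 years" figure
print(f"total growth ~{total_growth:.1%}/yr, doubling every ~{doubling_time:.0f} years")
```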
Researchers achieve 100% protection in small clinical study.
The findings from two new studies have just been released describing the efficacy of a malaria vaccine, provided by the biotech company Sanaria. In the small, controlled clinical trials, the vaccine proved to be extremely efficacious and sustained effectiveness over a number of weeks. The new vaccine—called Sanaria® PfSPZ-CVac—is composed of live, attenuated, and purified malaria sporozoites and the antimalarial drug chloroquine. Results from the two studies were published in Nature in an article entitled “Sterile Protection against Human Malaria by Chemoattenuated PfSPZ Vaccine” and in The Lancet Infectious Diseases in an article entitled “Progress with the PfSPZ Vaccine for Malaria.”
In the Nature study, PfSPZ-CVac was administered to nine subjects three times over 8 weeks; the research also demonstrated the three doses were safe and effective when administered over just 10 days. Remarkably, the researchers reported that all nine immunized subjects (100%) were protected against the malaria parasite Plasmodium falciparum (the most pathogenic of all human malaria species) when exposed to the disease 10 weeks after their last vaccine dose.
Scientists verified that assays of the subjects’ cellular immunity correlated with vaccine-induced protection. Moreover, a team from Antigen Discovery Inc. that studied the antibody responses of the nine protected individuals identified 22 malaria parasite proteins that could be the targets of protective immune responses.
“We are extremely encouraged by these findings,” remarked co-senior study investigator Stephen Hoffman, M.D., founder and CEO of Sanaria. “Clinical trials of PfSPZ-CVac underway in Germany, the U.S., and Equatorial Guinea, and soon to start in Mali, Ghana, the U.S., and Gabon will lead to an optimized vaccination regimen that we expect to move rapidly into Phase III clinical trials and licensure. The ability to complete an immunization regimen in 10 days will facilitate the use of PfSPZ-CVac in mass vaccination programs to eliminate the malaria parasite and to prevent malaria in travelers.”
Each year, the American Cancer Society estimates the numbers of new cancer cases and deaths that will occur in the United States in the current year and compiles the most recent data on cancer incidence, mortality, and survival. Incidence data were collected by the Surveillance, Epidemiology, and End Results Program; the National Program of Cancer Registries; and the North American Association of Central Cancer Registries.
Mortality data were collected by the National Center for Health Statistics. In 2017, 1,688,780 new cancer cases and 600,920 cancer deaths are projected to occur in the United States. For all sites combined, the cancer incidence rate is 20% higher in men than in women, while the cancer death rate is 40% higher.
However, sex disparities vary by cancer type. For example, thyroid cancer incidence rates are 3-fold higher in women than in men (21 vs 7 per 100,000 population), despite equivalent death rates (0.5 per 100,000 population), largely reflecting sex differences in the “epidemic of diagnosis.” Over the past decade of available data, the overall cancer incidence rate (2004-2013) was stable in women and declined by approximately 2% annually in men, while the cancer death rate (2005-2014) declined by about 1.5% annually in both men and women. From 1991 to 2014, the overall cancer death rate dropped 25%, translating to approximately 2,143,200 fewer cancer deaths than would have been expected if death rates had remained at their peak.
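The cumulative and annual figures can be cross-checked with simple compound-decline arithmetic (a rough sketch; the real age-adjusted rates are computed differently):

```python
# The overall cancer death rate fell 25% between 1991 and 2014 (23 years).
# Under steady compound decline, (1 - r)^23 = 0.75 gives the implied
# average annual rate r.
years = 2014 - 1991
annual_decline = 1 - 0.75 ** (1 / years)  # ~1.2% per year

# The most recent decade (2005-2014) declined faster, at ~1.5% annually,
# implying the drop was slower in the early 1990s and has accelerated.
print(f"average annual decline 1991-2014: {annual_decline:.2%}")
```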
Although the cancer death rate was 15% higher in blacks than in whites in 2014, increasing access to care as a result of the Patient Protection and Affordable Care Act may expedite the narrowing racial gap; from 2010 to 2015, the proportion of blacks who were uninsured halved, from 21% to 11%, as it did for Hispanics (31% to 16%). Gains in coverage for traditionally underserved Americans will facilitate the broader application of existing cancer control knowledge across every segment of the population.
The world's top AI researchers met to consider the threats posed by their research. The global economy could be the first casualty.
In the US, the number of manufacturing jobs peaked in 1979 and has steadily decreased ever since. At the same time, manufacturing output has steadily increased, with the US now producing more goods than any other country but China. Machines aren’t just taking the place of humans on the assembly line. They’re doing a better job. And all this before the coming wave of AI upends so many other sectors of the economy. “I am less concerned with Terminator scenarios,” MIT economist Andrew McAfee said on the first day at Asilomar. “If current trends continue, people are going to rise up well before the machines do.”
McAfee pointed to newly collected data that shows a sharp decline in middle class job creation since the 1980s. Now, most new jobs are either at the very low end of the pay scale or the very high end. He also argued that these trends are reversible, that improved education and a greater emphasis on entrepreneurship and research can help feed new engines of growth, and that economies have overcome the rise of new technologies before. But after his talk, in the hallways at Asilomar, many of the researchers warned him that the coming revolution in AI would eliminate far more jobs far more quickly than he expected.
Indeed, the rise of driverless cars and trucks is just a start. New AI techniques are poised to reinvent everything from manufacturing to healthcare to Wall Street. In other words, it’s not just blue-collar jobs that AI endangers. “Several of the rock stars in this field came up to me and said: ‘I think you’re low-balling this one. I think you are underestimating the rate of change,'” McAfee says.
That threat has many thinkers entertaining the idea of a universal basic income, a guaranteed living wage paid by the government to anyone left out of the workforce. But McAfee believes this would only make the problem worse, because it would eliminate the incentive for entrepreneurship and other activity that could create new jobs as the old ones fade away. Others question the psychological effects of the idea. “A universal basic income doesn’t give people dignity or protect them from boredom and vice,” Etzioni says.
At a time when the Trump administration is promising to make America great again by restoring old-school manufacturing jobs, AI researchers aren’t taking those promises too seriously. They know that these jobs are never coming back, thanks in no small part to their own research, which will eliminate so many other kinds of jobs in the years to come, as well. At Asilomar, they looked at the real US economy, the real reasons for the “hollowing out” of the middle class. The problem isn’t immigration—far from it. The problem isn’t offshoring or taxes or regulation. It’s technology itself.
Viruses are obligate intracellular parasites that probably infect all cellular lifeforms. Although virologists have traditionally focused on viruses that cause disease in humans, domestic animals and crops, recent advances in metagenomic sequencing, in particular high-throughput sequencing of environmental samples, have revealed a staggeringly large virome everywhere in the biosphere. At least 10^31 virus particles exist globally at any given time, and in most environments, including marine and freshwater habitats and metazoan gastrointestinal tracts, the number of detectable virus particles exceeds the number of cells by 10-100-fold [1-5].
To help conceptualize the sheer number of viruses in existence, their current biomass has been estimated to equal that of 75 million blue whales (approximately 200 million tons) and, if placed end to end, the collective length of their virions would span 65 galaxies [6]. In addition to their remarkable abundance, viruses are spectacularly diverse in the nature and organization of their genetic material, gene sequences and encoded proteins, replication mechanisms, and interactions with their cellular hosts, whether they are antagonistic, commensal or mutualistic [7].
Aquatic environments contain particularly diverse forms of viruses, including single-stranded (ss) and double-stranded (ds) DNA and RNA viruses with genomes that range in size from less than 2,000 bases to more than 2 million bases [4]. Although dsDNA viruses that infect bacteria (bacteriophages) are the best studied viruses to date, recent work suggests that around 50% of marine viruses have ssDNA or RNA genomes [8].
Metagenomic data are changing our views on virus diversity and are therefore challenging the way in which we recognize and classify viruses [9]. Historically, the description and classification of a new virus by the International Committee on Taxonomy of Viruses (ICTV) have required substantial information on host range, replication cycle, and the structure and properties of virus particles, which were then used to define groups of viruses. However, high-throughput sequencing and metagenomic approaches have radically changed virology, with many more viruses now known solely from sequence data than have been characterized experimentally. For example, the family Genomoviridae currently comprises a single classified virus, whereas more than 120 possible members have been sequenced from diverse environments. However, these sequenced viruses lack information about their hosts and other biological properties that would guide their assignment into species and genera in the family [10].
Indeed, vast numbers of complete, or nearly complete, genome sequences have been assembled and characterized from metagenomic data for viruses with small [11-14], medium [15-18] and even large [19,20] genomes. The identification of entirely new groups of viruses from such analyses emphasizes the power of metagenomic approaches in discovering viruses, some of which could have key functions in the regulation of ecosystems, whereas others could coexist with their hosts without causing recognizable disease or may even be mutualists [7]. However, realistically, few of these viruses are ever likely to receive the same level of experimental characterization as pathogens that cause human disease or influence the global economy.
The question of whether viruses that are identified by metagenomics can, and should, be incorporated into the official ICTV taxonomy scheme on the basis of sequence data alone is pressing. In response to this question, a workshop of invited experts in the field of virus discovery and environmental surveillance, and members of the ICTV Executive Committee, took place in June 2016 to discuss this possibility and to develop a framework for appropriate approaches to virus classification.
Via Niklaus Grunwald
Sequencing messenger RNA molecules from individual cells offers a glimpse into the lives of those cells, revealing what they’re doing at a particular time. However, the equipment required to do this kind of analysis is cumbersome and not widely available.
MIT researchers have now developed a portable technology that can rapidly prepare the RNA of many cells for sequencing simultaneously, which they believe will enable more widespread use of this approach. The new technology, known as Seq-Well, could allow scientists to more easily identify different cell types found in tissue samples, helping them to study how immune cells fight infection and how cancer cells respond to treatment, among other applications.
“Rather than trying to pick one marker that defines a cell type, using single-cell RNA sequencing we can go in and look at everything a cell is expressing at a given moment. By finding common patterns across cells, we can figure out who those cells are,” says Alex K. Shalek, the Hermann L.F. von Helmholtz Career Development Assistant Professor of Health Sciences and Technology, an assistant professor of chemistry, and a member of MIT’s Institute for Medical Engineering and Science.
Shalek and his colleagues have spent the past several years developing single-cell RNA sequencing strategies. In the new study, he teamed up with J. Christopher Love, an associate professor of chemical engineering at MIT’s Koch Institute for Integrative Cancer Research, to create a new version of the technology that can rapidly analyze large numbers of cells, with very simple equipment.
“We’ve combined his technologies with some of ours in a way that makes it really accessible for researchers who want to do this type of sequencing on a range of different clinical samples and settings,” Love says. “It overcomes some of the barriers that are facing the adoption of these techniques more broadly.”
Love and Shalek are the senior authors of a paper describing the new technique in the Feb. 13 issue of Nature Methods. The paper’s lead authors are Research Associate Todd Gierahn and graduate students Marc H. Wadsworth II and Travis K. Hughes.
Via Integrated DNA Technologies
A mysterious cell process named "anastasis" (Greek for "rising to life") challenges our idea of life being a linear march towards death, and suggests that cell death can actually be reversed under certain conditions—essentially allowing cells to un-die.
Even as the cell is shrivelling up in response to radiation, toxins, or other stresses, it can in some cases undo the dying process and repair itself if the stress is taken away before the cell is completely gone, said cell biologist Denise Montell of the University of California, Santa Barbara.
"In the field of people studying apoptosis—this programmed cell suicide mechanism—it has been a tenet in that field that once cells trigger this death process, it's irreversible," Montell told me over the phone. Her research, beginning with a paper published by the journal Molecular Biology of the Cell in 2012, shows otherwise.
Montell's lab wants to see if they can use anastasis to salvage hard-to-replace cells in the human body, which could be important in treating ischemia or heart attacks. But it could also provide an accidental, chilling glimpse into the hows and whys of cancer.
Every day, the billions of cells in our bodies actively decide whether they should continue to live, or die. Damaged cells must die—otherwise we might get cancer or other diseases—through programmed cell death processes, the most famous of which is apoptosis (from the Greek for "falling off").
"There are many cells that we don't want to die. This is particularly true for the neurons in our brain, which have to last our whole life, or the cells for our heart," Montell explained. A careful balance must be struck: if too many cells die we'd develop diseases like Alzheimer's or Parkinson's, a hallmark of which is neuronal cell death.
Once apoptosis begins, a critical molecule called executioner caspase is activated within the cell. It does exactly what it sounds like. The caspase goes around the cell, dicing up cell parts, and the cell starts to shrivel up. "Eventually it will break into little pieces, and then other cells come and gobble up the little pieces," she said.
Montell began her research on anastasis years ago at Johns Hopkins University, where a student applied to join her lab as a postdoc after discovering the death-reversal process during his graduate work in Hong Kong. Montell credits this student—Hogan Tang—with discovering anastasis, which was aptly named with the help of classics scholars, she said.
When Tang visited her in 2008 and brought up what he'd observed, Montell was skeptical; the next step, she told him, was to test it on mammalian and fruit fly cells. But after several months of questioning, Tang joined her lab and they began to seriously pursue the discovery.
So far, they've seen anastasis in 12 different types of mammalian cells. The fact that it can be observed in both human and fruit fly cells, Montell said, suggests that anastasis is an ancient process inherited from a common ancestor that lived hundreds of millions of years ago.