Amazing Science
759.5K views | +196 today
Amazing science facts - 3D_printing • aging • AI • anthropology • art • astronomy • bigdata • bioinformatics • biology • biotech • chemistry • computers • cosmology • education • environment • evolution • future • genetics • genomics • geosciences • green_energy • history • language • map • material_science • math • med • medicine • microscopy • nanotech • neuroscience • paleontology • photography • photonics • physics • postings • robotics • science • technology • video
Scooped by Dr. Stefan Gruenwald!

Scientists implant false memories into the brains of sleeping mice


Manipulating memories by tinkering with brain cells is becoming routine in neuroscience labs. Last year, one team of researchers used a technique called optogenetics to label the cells encoding fearful memories in the mouse brain and to switch the memories on and off, and another used it to identify the cells encoding positive and negative emotional memories, so that they could convert positive memories into negative ones, and vice versa.

The new work, published today in the journal Nature Neuroscience, shows for the first time that artificial memories can be implanted into the brains of sleeping animals. It also provides more details about how populations of nerve cells encode spatial memories, and about the important role that sleep plays in making such memories stronger.

Karim Benchenane of the French National Centre for Scientific Research (CNRS) in Paris and his colleagues implanted electrodes into the brains of 40 mice, targeting the medial forebrain bundle (MFB), a component of the reward circuitry, and the CA1 region of the hippocampus, which contains at least three different cell types that encode the memories needed for spatial navigation.

They then left the mice to explore their surroundings, and monitored the responses of their hippocampal neurons to identify place cells, each of which fired when one of the animals was in a specific location, or ‘place field’, within its environment. In one experiment, performed on five awake animals, they timed electrical stimulation of the MFB to coincide with the firing of a given place cell.

This paired stimulation protocol created a false associative memory in the animals’ brains. The mice linked MFB stimulation with the place field encoded by the cell, and subsequently spent four to five times more time in that specific location than two control mice whose MFB stimulation did not coincide with place cell firing.
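As a rough illustration of the closed-loop idea described above (a purely hypothetical sketch, not the authors' recording software), each spike of the target place cell triggers an MFB stimulation a few milliseconds later, while the control condition delivers the same number of stimulations at times unrelated to the spikes:

```python
import random

random.seed(1)

SESSION_MS = 60_000   # one-minute session (illustrative value)
LATENCY_MS = 10       # assumed trigger latency, not taken from the paper

def paired_stimulation(spike_times):
    """Closed loop: stimulate shortly after every detected spike."""
    return [t + LATENCY_MS for t in spike_times]

def control_stimulation(spike_times):
    """Open loop: same number of stimulations at random, uncoupled times."""
    return sorted(random.uniform(0, SESSION_MS) for _ in spike_times)

# Simulated spike times of one place cell as the mouse crosses its place field
spikes = sorted(random.uniform(0, SESSION_MS) for _ in range(20))
print(len(paired_stimulation(spikes)), "paired stimulations delivered")
```

The key design point is that only the paired condition preserves the temporal link between place-cell firing and reward, which is what builds the false association.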

Place cells are known to ‘replay’ their activity patterns during sleep, and this is believed to strengthen newly formed memories, possibly by promoting the formation of new synaptic connections. Nevertheless, we still don’t know what place cells are doing during replay, or exactly how replayed activity is related to their navigational functions.

To investigate further, the researchers repeated their experiments in five sleeping mice. Having previously identified place cells while they explored their surroundings, the researchers allowed the animals to doze off, and then paired the firing of a selected place cell in each one with stimulation of the MFB. Later on, these animals, too, showed a strong preference for that given place field, heading directly for it when they woke up and spending far more time there than in other locations.

Alanna Myers's curator insight, March 14, 2015 9:21 PM

But why? The article notes that the procedure is highly invasive, so it is unlikely ever to be used in humans, except in very 'special' circumstances. It makes my skin crawl; there don't seem to be any good outcomes from this except to understand the role of sleep in embedding new memories. Even more creepily, the article says psychologists have already conducted experiments with humans that made the test subjects recall strong, vivid memories of crimes they never committed. Why?!


Genome Editing Keeps HIV at Bay Long-Term with Artificially Created CCR5-delta32 Mutation


Geneticists have been able to modify the immune system to confer resistance to HIV infection. The technique involves harvesting a patient's T-cells, using genome-editing techniques to disrupt the gene that controls the receptor used by HIV to infect those cells, and returning the modified cells to the patient, Fyodor Urnov, PhD, from Sangamo BioSciences in Richmond, California, explained here at the Future of Genomic Medicine VIII. The hope is that the edited cells will establish a permanent reservoir of HIV-resistant immune cells, he said.

In effect, the therapy mimics the natural mutation that confers HIV resistance in some people. The mutation came to light when a man named Timothy Brown, known as "the Berlin patient," was apparently cured of HIV infection after a bone marrow transplant from a donor who had the mutation.

"You start with a naturally occurring variation, and then you aim to recapitulate it to create a disease-protective genotype and then a phenotype in a clinical setting," Dr Urnov said. He presented updated data from a phase 2 trial, the early results of which were published in the New England Journal of Medicine (2014;370:901-910). In the study, the researchers edited T-cells to modify the gene that encodes CCR5, the coreceptor exploited by HIV to infect immune system cells.

With an established genome-editing technique, the team used DNA-snipping enzymes called zinc-finger nucleases to mimic the naturally occurring CCR5-delta32 mutation, which causes the expression of a truncated, nonfunctioning form of the CCR5 protein. The section of DNA cleaved by the zinc-finger nucleases then undergoes an error-prone self-repair process, nonhomologous end joining, leaving behind a T-cell with a disrupted CCR5 gene but an otherwise healthy genome. The modified autologous cells are then reinfused into the patient.

"I'm thrilled to report that we have done this in more than 70 individuals, and the treatment has been well tolerated so far. I'm also delighted to report that the genome-edited cells persist over time," Dr Urnov said. "We have observed persistence of the cells in our subjects out to 4 years."


Giant methane storms on Uranus


In August 2014 a group led by Imke de Pater pointed the Keck telescope at Uranus and was a little surprised to see storms raging. It wasn't as though clouds hadn't been seen before, but the clouds spotted last year were much brighter than any seen previously. The fact that the storms are bright in the methane spectrum isn't a surprise: Uranus, and its neighbour Neptune, are pretty much just big balls of methane, water and ammonia.

The storms are described in a paper recently published in Icarus, with the pre-print available here. After the first observations, the group put out a call to amateur astronomers to see if they could observe this unusual activity too. They did, and with this information the group built a case to point the Hubble Space Telescope at Uranus, which happened in October. Again, they saw large storms, showing that what they had seen in August hadn't been a one-off event: the weather report on Uranus is looking rather unsettled.

Uranus was the first planet to be discovered in the 'recent' era of science. All the planets up to Saturn were observed as 'wandering' stars by many ancient cultures, so we'll never know who first spotted them. But Uranus was first observed in 1690 by John Flamsteed. He plotted it six times, but didn't realise it was different from any other star (he catalogued it as the star 34 Tauri). The French astronomer Pierre Lemonnier also observed Uranus, but didn't distinguish it from the other stars he was watching. It was William Herschel who realised in 1781, after initially thinking it was a comet, that he'd seen a planet orbiting farther from the Sun than Saturn.


MUSE goes beyond Hubble: Looking deeply into the universe in 3-D

The MUSE instrument on ESO's Very Large Telescope has given astronomers the best ever three-dimensional view of the deep universe. After staring at the Hubble Deep Field South region for only 27 hours, the new observations reveal the distances, motions and other properties of far more galaxies than ever before in this tiny piece of the sky. They also go beyond Hubble and reveal previously invisible objects.

By taking very long exposure pictures of regions of the sky, astronomers have created many deep fields that have revealed much about the early Universe. The most famous of these was the original Hubble Deep Field, taken by the NASA/ESA Hubble Space Telescope over several days in late 1995. This spectacular and iconic picture rapidly transformed our understanding of the content of the Universe when it was young. It was followed two years later by a similar view in the southern sky -- the Hubble Deep Field South.

But these images did not hold all the answers -- to find out more about the galaxies in the deep field images, astronomers had to carefully look at each one with other instruments, a difficult and time-consuming job. But now, for the first time, the new MUSE instrument can do both jobs at once -- and far more quickly.

One of the first observations using MUSE after it was commissioned on the VLT in 2014 was a long hard look at the Hubble Deep Field South (HDF-S). The results exceeded expectations.

"After just a few hours of observations at the telescope, we had a quick look at the data and found many galaxies -- it was very encouraging. And when we got back to Europe we started exploring the data in more detail. It was like fishing in deep water and each new catch generated a lot of excitement and discussion of the species we were finding," explained Roland Bacon (Centre de Recherche Astrophysique de Lyon, France, CNRS) principal investigator of the MUSE instrument and leader of the commissioning team.

For every part of the MUSE view of HDF-S there is not just a pixel in an image, but also a spectrum revealing the intensity of the light's different component colours at that point -- about 90,000 spectra in total. These can reveal the distance, composition and internal motions of hundreds of distant galaxies -- as well as catching a small number of very faint stars in the Milky Way.
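The "pixel plus spectrum" idea is simply a three-dimensional data cube: two spatial axes and one wavelength axis. A minimal NumPy sketch (toy dimensions and random values, not real MUSE data) shows how a spectrum and a monochromatic image are just two slices of the same array:

```python
import numpy as np

# Toy integral-field datacube with MUSE-like axis order (wavelength, y, x).
# Real MUSE cubes have thousands of wavelength planes over ~300x300 spatial
# pixels ("spaxels"); the tiny shape here is purely illustrative.
n_wave, ny, nx = 100, 30, 30
cube = np.random.default_rng(0).random((n_wave, ny, nx))

spectrum = cube[:, 12, 7]   # full spectrum at one spaxel
image = cube[42, :, :]      # monochromatic image at one wavelength plane

print(spectrum.shape, image.shape)  # (100,) (30, 30)
```

This is why a single MUSE exposure can replace imaging plus galaxy-by-galaxy follow-up spectroscopy: every spatial pixel already carries its own spectrum.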

Even though the total exposure time was much shorter than for the Hubble images, the HDF-S MUSE data revealed more than twenty very faint objects in this small patch of the sky that Hubble did not record at all. "The greatest excitement came when we found very distant galaxies that were not even visible in the deepest Hubble image. After so many years of hard work on the instrument, it was a powerful experience for me to see our dreams becoming reality," adds Roland Bacon.


Understanding why our most Earth-like neighbor, Venus, is so different


In our solar system, there are only two large rocky worlds, Venus and Earth. Mercury and Mars are small enough that both lost most of their internal heat billions of years ago, and they have largely ceased to evolve further. (The ancient, preserved surface of Mars is what makes it so attractive to explore for the types of habitable environments that were long ago erased from the Earth’s surface.) Both Venus and the Earth, however, retain substantial heat in their cores. That heat drives plate tectonics on our world and appears to have caused the near-global resurfacing of Venus in the last few hundred million years, which counts as recent when compared to the age of the solar system.

While Venus and Earth have similar sizes and are solar system neighbors, they have evolved very differently. Venus today lacks oceans, appears to lack plate tectonics, and has a massive carbon dioxide atmosphere that creates a greenhouse effect that makes the surface extremely hot.

Understanding why Venus and Earth became so different will help us understand why Earth evolved as it has and what the range of conditions for similarly sized worlds around other stars may be. Venus provides the contrast to the Earth that can help us both better understand the origins of our world’s characteristics and the range of possibilities for similar sized planets orbiting other stars.

Today, our knowledge of Venus’ surface and its interior is similar to our knowledge of Mars in the 1970s following the Viking missions. The Soviet Union placed several probes on the surface that made simple measurements in the hour or so before the surface heat fried their electronics. NASA’s Magellan spacecraft mapped the surface with radar in the early 1990s at about 120 m resolution globally. We know, however, from our experience mapping the surfaces of the Moon and Mars that teasing out the details of geologic processes requires mapping at resolutions better than 50 m, with smaller areas mapped at a few meters’ resolution.

Mapping Venus’ surface (with one exception we’ll return to later) requires using imaging radars that can penetrate its thick cloud cover. The technology in the early 1990s when Magellan flew was relatively new and crude by today’s standards. Now imaging radars are widely used to study the earth both from airplanes and from satellites. The technology is mature and relatively low cost. 

As a result, something of a cottage industry has grown up proposing new missions to map Venus either through the European Space Agency’s Medium Class program or through NASA’s Discovery program. The different accounting rules applied by the two agencies make direct cost comparisons difficult, but these missions cost in the neighborhood of $500M to $600M. A Venus radar mapping mission has been proposed for the current ESA Medium Class competition, and I hear that up to three missions are in competition for selection through the NASA program.

The European selection process tends to be more open than the U.S. process, and the EnVision team led by Dr. Richard Ghail at Imperial College London shared a copy of their proposal to ESA with me. The EnVision mission would address several key questions:

  • The average age of Venus’ surface is just a few hundred million years, a tiny fraction of the age of the surfaces of most rocky bodies and icy moons in the solar system. What processes resurfaced the planet? Did they occur in the same time period or have they been spread over time?
  • Is Venus currently geologically active and therefore continuing to remake its surface and release new gases into the atmosphere?
  • What processes modify rocks once they are delivered to the surface? Venus’ atmosphere is so thick that its surface in many ways is similar in terms of pressure to what is found at the bottom of our oceans. This should lead to complex weathering and erosion, which is consistent with what we saw from the pictures taken on the surface by the Soviet Union’s Venera landers.
  • What is the internal structure of Venus like? This is the part of a planet we can never see, but scientists can study it indirectly through the combination of Venus’s gravity field and surface topography. Both were mapped by Magellan, but at too coarse a resolution to answer key questions.

Global Resurfacing and Tectonic Activity

Volcanic Activity

Weathering and Surface Processes


NASA discovers Mars once had more water than the Earth’s Arctic Ocean


Billions of years ago, a huge primitive ocean covered one-fifth of the red planet’s surface, making it warm, wet and ideal for alien life to gain a foothold, Nasa scientists say.

The huge body of water spread over a fifth of the planet’s surface, as great a portion as the Atlantic covers the Earth, and was a mile deep in places. In total, the ocean held 20 million cubic kilometers of water, or more than is currently found in the Arctic Ocean on Earth, the researchers found.
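The two figures quoted above can be sanity-checked against each other with a quick back-of-the-envelope calculation (illustrative only, using the standard mean radius of Mars):

```python
import math

# Implied mean depth of the ancient ocean from the quoted numbers.
R_MARS_KM = 3389.5                        # mean radius of Mars
surface_km2 = 4 * math.pi * R_MARS_KM**2  # total Martian surface area
ocean_area_km2 = surface_km2 / 5          # "one-fifth of the planet's surface"
ocean_volume_km3 = 20e6                   # "20 million cubic kilometers"

mean_depth_km = ocean_volume_km3 / ocean_area_km2
print(f"implied mean depth: {mean_depth_km * 1000:.0f} m")
```

The result comes out to roughly 700 m on average, which is consistent with the article's statement that the ocean was "a mile deep in places".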

Unveiled by Nasa on Thursday, the compelling evidence for the primitive ocean adds to an emerging picture of Mars as a warm and wet world in its youth, which trickled with streams, winding river deltas, and long-standing lakes, soon after it formed 4.5 billion years ago.

The view of the planet’s ancient history radically re-writes what many scientists believed only a decade ago. Back then, flowing water was widely considered to have been a more erratic presence on Mars, gushing forth only rarely, and never forming long-standing seas and oceans.

“A major question has been how much water did Mars actually have when it was young and how did it lose that water?” said Michael Mumma, a senior scientist at Nasa Goddard Space Flight Center in Maryland.

Writing in the journal Science, the Nasa team and others at the European Southern Observatory (ESO) in Munich provide an answer after studying Mars with three of the most powerful infrared telescopes in the world.

The scientists used the Keck II telescope and Nasa’s Infrared Telescope Facility, both in Hawaii, and the ESO’s Very Large Telescope in Chile, to make maps of the Martian atmosphere over six years. They looked specifically at how different forms of water molecules in the Martian air varied from place to place over the changing seasons.


Astronomers Watch the Same Supernova Exploding Over and Over Again


It’s “Groundhog Day” in the cosmos. This is the first time astronomers have been able to see the same explosion over and over again, and its unique properties may help them better understand not only the nature of these spectacular phenomena but also cosmological mysteries like dark matter and how fast the universe is expanding. Astronomers using the Hubble Space Telescope say they have been watching the same star blow itself to smithereens in a supernova explosion over and over again, thanks to a trick of Einsteinian optics.

The star exploded more than nine billion years ago on the other side of the universe, too far for even the Hubble to see without special help from the cosmos. In this case, however, light rays from the star have been bent and magnified by the gravity of an intervening cluster of galaxies so that multiple images of it appear.

Four of them are arranged in a tight formation known as an Einstein Cross surrounding one of the galaxies in the cluster. Since each light ray follows a different path from the star to here, each image in the cross represents a slightly different moment in the supernova explosion.

“I was sort of astounded,” said Patrick Kelly of the University of California, Berkeley, who discovered the supernova images in data recorded by the space telescope in November. “I was not expecting anything like that at all.” Dr. Kelly is lead author of a report describing the supernova published recently in Science.

Robert Kirshner, a supernova expert at the Harvard-Smithsonian Center for Astrophysics who was not involved in the work, said: “We’ve seen gravitational lenses before, and we’ve seen supernovae before. We’ve even seen lensed supernovae before. But this multiple image is what we have all been hoping to see.”


NASA Wants To Use A Submarine To Explore Titan's Kraken Ocean



Saturn is orbited by 62 official moons, the largest of which is Titan. However, Titan is not your average satellite: larger than the planet Mercury, it has a thick nitrogen atmosphere and large liquid hydrocarbon lakes on its surface. Unfortunately, it has been difficult to obtain much information about the lakes’ depth or composition from orbital missions. NASA has recently revealed what a conceptual submarine mission to Kraken Mare, the largest sea on Titan, would look like. Kraken Mare contains enough liquid methane to fill Lake Michigan three times over. Conditions are presumed to be rough, with changing tides and massive waves.

The hypothetical submarine would travel about 2,000 kilometers (1,250 miles) over the course of a 90-day mission. While the craft wouldn’t have a problem staying under the sea and diving during that time, it would need to surface in order to transmit data back to Earth. Once dropped into the sea, it would be powered by a radioisotope thermoelectric generator, which has no moving parts, making it a good choice for a craft on such a long journey. Most of the power would be used to propel the submarine while under the surface, but the craft would be capable of performing science operations as well.

During the mission, the submarine would make a number of observations and collect data using a variety of instruments. Some of the main objectives would be to analyze the chemical composition of the liquid, but also other oceanographic features such as currents and tidal patterns. The craft would also be equipped with cameras in order to image Titan’s shoreline and landscape. The science goals are pretty vague at such an early juncture, but would be more refined and detailed if the mission planning continues.

There are a number of technological and logistical obstacles to address before any proposed launch dates are developed, including the timing of Titan’s seasons: Saturn, and Titan with it, takes nearly 30 Earth years to orbit the Sun, which will influence when such a mission could take place.


The Rise and Fall of the World's Nuclear Arsenal Over 70 Years


Since 1987, the Bulletin of the Atomic Scientists has been counting up each country's nuclear arsenal in its Nuclear Notebook, peeling back the veil of secrecy that often surrounds these numbers. The Bulletin has now gone and made its Nuclear Notebook into a neat interactive graphic.

There are nine nuclear states: the U.S., Russia, the United Kingdom, France, China, India, Pakistan, Israel, and North Korea. The 70 years’ worth of data isn't necessarily surprising, but it really drives home how the world's nuclear arsenal is completely dominated by the U.S. and Russia. The other countries barely register as a blip. The full interactive graphic lets you sift through the data country by country and year by year. Check it out at the Bulletin's website.


Deer and other herbivores are supplementing their diet with meat by eating other animals


Deer aren't the slim, graceful vegans we thought they were. Scientists using field cameras have caught deer preying on nestling song birds. And it's not just deer. Herbivores the world over may be supplementing their diets.

When researchers in North Dakota set up "nest cams" over the nests of song birds, they expected to see a lot of nestlings and eggs get taken by ground squirrels, foxes, and badgers. Squirrels hit thirteen nests, but other meat-eaters made a poor showing. Foxes and weasels only took one nest each. Know what fearsome animal out-did either of those two sleek, resourceful predators?

White-tailed deer. These supposed herbivores placidly ate living nestlings right out of the nest. And if you're thinking that it must be a mistake, that the deer were chewing their way through some vegetation and happened to get a mouthful of bird, think again. Up in Canada, a group of ornithologists were studying adult birds. In order to examine them closely, the researchers used "mist-nets." These nets, usually draped between trees, are designed to trap birds or bats gently so they can be collected, studied, and released. When a herd of deer came by, the deer walked up to the struggling birds and ate them alive, right out of the nets.

This behavior is not limited to one species or one continent. Last year, a farmer in India made a video of a cow eating a recently-hatched chick. Some scientists speculate that herbivores turn to meat when they're not getting enough nutrients in their diet. It's possible. A biologist in Scotland documented red deer eating seabird chicks, and concluded it was how they got the dietary boost necessary to grow their antlers. The same researcher also documented sheep eating the heads and legs off of seabird chicks. And then there's another cow in India, which reportedly ate fifty chickens. There may be a specific need that drives herbivores to occasionally eat meat. It's also possible, experts say, that eating meat, when it can't run away from them, is just something supposed "herbivores" do, and we're finally getting wise to it.

Dorothy M Neddermeyer, PhD's comment, March 5, 2015 6:00 PM
When herbivores eat 'meat' they are seeking minerals and salt. If a salt/mineral lick is strategically placed where the herbivores 'turned meat eaters' congregate, the propensity to eat meat will disappear.

A Pair of Sunglasses Promises a Miracle: A Cure for Colorblindness


The California company EnChroma is creating lenses that allow some to see colors for the first time. Colorblindness is just the latest problem that scientists have tried to solve with a technical fix. They’ve modified the DNA of plants such as corn to resist pests and fight disease, and now are building electronic bees to pollinate them. Drugs let antsy children concentrate in class and help depressed adults feel balanced. Cochlear implants help the deaf hear, and mechanical limbs help athletes win Olympic medals.

It is no surprise, then, that scientists have made breakthroughs with colorblindness, which is the most common congenital disorder in humans: More than 15 million people in the U.S. and over 300 million worldwide don’t see normal colors. Most are men who inherit it from their mothers’ fathers.

Despite how common this condition is, most people don’t understand it. The colorblind are almost all actually red-green colorblind, but that doesn’t mean they can’t see red and green. The colorblind can see the colors when they’re vivid, but make mistakes when they’re faint. And because so many colors such as pink or purple contain just a little bit of red or green, mistakes are common.

It’s treated as a joke, even among the celebrity colorblind. Didn’t you know Mark Zuckerberg made Facebook blue because it’s the easiest color for him to see? If Van Gogh had normal color vision, would his paintings have looked more or less intense? Is defective vision the reason why Bill Clinton has trouble seeing stains? Colorblind men wear clashing ties, buy unripe bananas for breakfast, and mix up subway lines on their way to work. They get confused by line graphs during meetings, and try to push through the red “occupied” signs on bathroom doors. To a colorblind man, the red lipstick you’re wearing might not be that impressive, but neither will your blemishes.

Based in Berkeley, California, Don McPherson, who has a PhD in glass science from Alfred University, originally specialized in creating eyewear for doctors to use as protection during laser surgery. Rare-earth ions embedded in the glass absorbed a significant amount of light, enabling surgeons not only to stay safe, but also to clearly differentiate between blood and tissue during procedures.

In fact, surgeons loved the glasses so much, they began disappearing from operating rooms. This was the first indication that they could be used outside the hospital. McPherson, too, began casually wearing them, as sunglasses. “Wearing them makes all colors look incredibly saturated,” he says. “It makes the world look really bright.”

It wasn’t until Angell borrowed his sunglasses at the Frisbee game, however, that McPherson realized they could serve a broader purpose and help those who are colorblind. After making this discovery, he spent time researching colorblindness, a condition he knew very little about, and ultimately applied for a grant from the National Institutes of Health to begin conducting clinical trials.

McPherson and two colleagues, Tony Dykes and Andrew Schmeder, have since founded EnChroma Labs, a company dedicated to developing everyday sunglasses for the 300 million people in the world with color vision deficiency. They've been selling glasses with sporty, trendy, Ray-Ban-like frames since December 2012, at prices ranging from $325 to $450. The EnChroma team has refined the product significantly, most recently changing the lenses from glass to a much more consumer-friendly polycarbonate in December 2014.

The company’s eyewear is able to help up to 80 percent of the customers who come to them. The remaining 20 percent, including the writer of a recent Atlantic article who tested the glasses, are missing an entire class of photopigments, either green or red: a condition EnChroma is not currently able to address.

Rescooped by Dr. Stefan Gruenwald from DNA and RNA research!

UCLA researchers devise new method to identify disease markers based on RNA editing


UCLA life scientists have created an accurate new method to identify genetic markers for many diseases — a significant step toward a new era of personalized medicine, tailored to each person’s DNA and RNA. This powerful new method, called GIREMI (pronounced Gir-REMY), will help scientists to inexpensively identify RNA editing sites, genetic mutations and single nucleotide polymorphisms — tiny variations in a genetic sequence — and can be used to diagnose and predict the risk of a wide range of diseases from cancers to schizophrenia, said Xinshu (Grace) Xiao, senior author of the research and a UCLA associate professor of integrative biology and physiology in the UCLA College.

Details about GIREMI were published March 2 in the advance online edition of the journal Nature Methods. The research was funded by the National Institutes of Health and the National Science Foundation. Xiao is making the software available on her website as a free download, enabling scientists worldwide to use this potent method in their own research on any number of diseases. President Obama’s budget encourages doctors to design individually tailored treatments based on genetic and molecular differences. This approach, which is called personalized medicine or precision medicine, holds the potential of “delivering the right treatment at the right time, every time, to the right person,” Obama said.

Many genes contain RNA editing sites, which are not yet well understood, but appear to hold clues to many diseases. One might think that whatever is in the DNA we inherited from our parents would eventually be expressed in our proteins, but it turns out there is a modification process, called RNA editing, that can contribute to different types of cancer, autism, Alzheimer’s disease, Parkinson’s and many others, Xiao said.

RNA editing modifies nucleotides in our genetic material; the patterns of these nucleotides carry the data required for constructing proteins, which in turn provide the components of cells and tissues. If you had an “A” nucleotide in your DNA, for example, it may be modified into a “G.”

RNA editing is different from mutations. A mutation is written incorrectly in our genes. In RNA editing, our genetic material is normal, but modifications occur later when a gene is expressed.
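The A-to-G example above can be sketched in code. This toy snippet is not part of GIREMI and the sequences are invented; it simply flags positions where a DNA reference reads “A” but the matched RNA reads “G”, the signature of the most common class of editing:

```python
# Toy sketch: flag candidate A-to-G RNA editing sites by comparing a DNA
# reference to the sequence observed in RNA from the same sample.
# Real tools such as GIREMI use statistical tests to separate true editing
# from SNPs and sequencing errors; this only reports raw mismatches.

def candidate_editing_sites(dna, rna):
    """Return 0-based positions where DNA has 'A' but RNA shows 'G'."""
    assert len(dna) == len(rna)
    return [i for i, (d, r) in enumerate(zip(dna, rna)) if d == "A" and r == "G"]

dna = "GATTACAGGA"   # invented reference sequence
rna = "GGTTACGGGA"   # invented matched RNA read
print(candidate_editing_sites(dna, rna))   # -> [1, 6]
```

In practice the hard part, which GIREMI addresses statistically, is deciding whether such a mismatch is an editing site, an inherited SNP, or a sequencing error.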

GIREMI was researched and designed during the past two years by Xiao and Qing Zhang, a postdoctoral scholar in her laboratory. It is the most accurate and sensitive method for identifying RNA editing sites, as well as SNPs and mutations in RNA. Differentiating SNPs, most of which appear not to be harmful, from RNA editing sites has been very difficult and previously required sequencing a person’s entire genome.

“We can predict RNA editing sites and SNPs without sequencing the whole genome,” said Xiao, a member of UCLA’s Institute for Quantitative and Computational Biosciences, the Molecular Biology Institute and also the Jonsson Comprehensive Cancer Center. “Now you don’t have to spend thousands of dollars sequencing the DNA; you can sequence only the RNA. Our method will be easily applicable to all the existing RNA data sets, and will help to identify SNPs and mutations at a large cost reduction from current methods.”

Research on RNA editing is at an early stage. “We are trying to discover as many editing sites as possible,” said Xiao, whose research group is working to apply GIREMI to many diseases. “This method can be easily applied to any RNA sequencing data sets to discover new RNA editing sites that are specific to a certain disease.”

Many RNA editing sites are specific to the brain, Xiao and Zhang found, indicating RNA editing is involved in brain function and neurological disorders. There are more than 10,000 known RNA editing sites in the brain and probably many more, she said.

People have “abundant differences” in RNA editing sites. Studying 93 people whose RNA has been sequenced, Xiao and Zhang found that each person has unique RNA editing sites in their immune system’s lymphoblast cells, which are precursors of white blood cells that protect us from infectious diseases and foreign invaders.

RNA has long been known as the cellular messenger that carries DNA’s instructions for making proteins to other parts of the cell, but it is now understood to perform sophisticated chemical reactions of its own and is believed to carry out an extraordinary number of other functions, at least some of which remain unknown.

Via Integrated DNA Technologies
No comment yet.
Scooped by Dr. Stefan Gruenwald!

Genome analysis reveals that herders moved en masse from Russia to Central Europe around 4,500 years ago

Genome analysis reveals that herders moved en masse from Russia to Central Europe around 4,500 years ago | Amazing Science |

Analysis of the genomes of 69 ancient Europeans has revealed that herders moved en masse from the continent’s eastern periphery in Russia into Central Europe around 4,500 years ago. These migrants may be responsible for the expansion of Indo-European languages, which make up the majority of spoken tongues in Europe today.

An international team has published the research in the journal Nature. Prof David Reich and colleagues extracted DNA from remains found at archaeological sites around the continent. They used a new DNA-enrichment technique that greatly reduces the amount of sequencing needed to obtain genome-wide data.

Their analyses show that 7,000-8,000 years ago, a closely related group of early farmers moved into Europe from the Near East, confirming the findings of previous studies. The farmers were distinct from the indigenous hunter-gatherers they encountered as they spread around the continent. Eventually, the two groups mixed, so that by 5,000-6,000 years ago, the farmers' genetic signature had become melded with that of the indigenous Europeans.

But previous studies show that a two-way amalgam of farmers and hunters is not sufficient to capture the genetic complexity of modern Europeans. A third ancestral group must have been added to the melting pot more recently.

Prof Reich and colleagues have now identified a likely source area for this later diaspora. The Bronze Age Yamnaya pastoralists of southern Russia are a good fit for the missing third genetic component in Europeans. The team analysed nine genomes from individuals belonging to this nomadic group, which buried their dead in mounds known as kurgans.

The scientists contend that a group similar to the Yamnaya moved into the European heartland after the invention of wheeled vehicles, contributing up to 50% of ancestry in some modern north Europeans. Southern Europeans on the whole appear to have been less affected by the expansion.
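The three-ancestry picture can be illustrated with a toy mixture model. Here modern allele frequencies are built as a known blend of three invented source populations (hunter-gatherer, early farmer, Yamnaya-like) and the blend is recovered by least squares; real analyses such as those in the Reich lab work genome-wide with far more careful statistics:

```python
import numpy as np

# Toy sketch of the three-way ancestry model: modern allele frequencies
# modelled as a mixture of hunter-gatherer (HG), early-farmer (EF) and
# Yamnaya-like (YAM) source frequencies. All frequencies are invented.

sources = np.array([   # rows: SNPs; columns: HG, EF, YAM
    [0.10, 0.80, 0.40],
    [0.70, 0.20, 0.50],
    [0.30, 0.60, 0.90],
    [0.20, 0.40, 0.70],
])

# A hypothetical modern genome built as 20% HG + 30% EF + 50% YAM:
true_w = np.array([0.2, 0.3, 0.5])
modern = sources @ true_w

# Recover the mixture weights by least squares, then renormalise to sum to 1.
w, *_ = np.linalg.lstsq(sources, modern, rcond=None)
w = np.clip(w, 0, None)
w = w / w.sum()
print(np.round(w, 3))   # -> [0.2 0.3 0.5]
```

With noiseless, perfectly known source frequencies the weights are recovered exactly; real data add noise, drift, and uncertainty about the sources themselves.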

Scooped by Dr. Stefan Gruenwald!

The rise and fall of cognitive skills – different parts of the brain work best at different ages

The rise and fall of cognitive skills – different parts of the brain work best at different ages | Amazing Science |

Scientists have long known that our ability to think quickly and recall information, also known as fluid intelligence, peaks around age 20 and then begins a slow decline. However, more recent findings, including a new study from neuroscientists at MIT and Massachusetts General Hospital (MGH), suggest that the real picture is much more complex.

The study, which appears in the journal Psychological Science, finds that different components of fluid intelligence peak at different ages, some as late as age 40.

“At any given age, you’re getting better at some things, you’re getting worse at some other things, and you’re at a plateau at some other things. There’s probably not one age at which you’re peak on most things, much less all of them,” says Joshua Hartshorne, a postdoc in MIT’s Department of Brain and Cognitive Sciences and one of the paper’s authors.

“It paints a different picture of the way we change over the lifespan than psychology and neuroscience have traditionally painted,” adds Laura Germine, a postdoc in psychiatric and neurodevelopmental genetics at MGH and the paper’s other author.

Through websites including testmybrain.org, Hartshorne and Germine were able to harness the power of the Internet to run a large-scale study with participants across a broad age range. They examined four different cognitive tasks, as well as a task that measured participants’ ability to perceive others’ emotional state.

Together, test data from nearly 50,000 subjects provided a very clear picture that showed each cognitive skill peaking at a different age. For example, the speed with which participants processed information appeared to peak early, around age 18 or 19, and then immediately started to decline. Short-term memory seemed to improve until around age 25, level off for several years, and then begin to drop around age 35. The ability to evaluate other people’s emotional states, on the other hand, peaked much later, when participants were in their 40s or 50s.
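A “peak age” like those above is typically read off a fitted curve rather than from raw scores. A minimal sketch, with invented data: fit a quadratic to mean score versus age and take the vertex of the parabola:

```python
import numpy as np

# Sketch of how a peak age can be estimated from cross-sectional test
# scores: fit a quadratic to mean score vs. age and take its vertex.
# The scores below are invented for illustration, not the study's data.

ages = np.array([15, 20, 25, 30, 35, 40, 45, 50], dtype=float)
scores = np.array([60, 68, 73, 75, 74, 71, 66, 60], dtype=float)

a, b, c = np.polyfit(ages, scores, 2)   # score ~ a*age^2 + b*age + c
peak_age = -b / (2 * a)                 # vertex of the fitted parabola
print(round(peak_age, 1))               # peaks near age 32 for this toy data
```

The study's actual curves are estimated from tens of thousands of participants per task, which is what makes the different peak ages statistically distinguishable.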

It’s not yet clear why these skills tend to peak at different ages, but previous research suggests that it may have to do with changes in gene expression or brain structure as we age.

The researchers also included a vocabulary test, which serves as a measure of what is known as crystallized intelligence — the accumulation of facts and knowledge. While the results confirmed that crystallized intelligence peaks later in life, the new data indicated that the peak occurred when participants were in their late 60s or early 70s, even later than previously thought.

The researchers believe this could be explained by today’s adults having higher levels of education, jobs that require a lot of reading, and more opportunities for intellectual stimulation in comparison to previous generations.

Scooped by Dr. Stefan Gruenwald!

Two quantum properties teleported together for first time

Two quantum properties teleported together for first time | Amazing Science |

The values of two inherent properties of one photon – its spin and its orbital angular momentum – have been transferred via quantum teleportation onto another photon for the first time by physicists in China. Previous experiments have managed to teleport a single property, but scaling that up to two properties proved to be a difficult task, which has only now been achieved. The team's work is a crucial step forward in improving our understanding of the fundamentals of quantum mechanics and the result could also play an important role in the development of quantum communications and quantum computers.

Quantum teleportation first appeared in the early 1990s after four researchers, including Charles Bennett of IBM in New York, developed a basic quantum teleportation protocol. To successfully teleport a quantum state, you must make a precise initial measurement of a system, transmit the measurement information to a receiving destination and then reconstruct a perfect copy of the original state. The "no-cloning" theorem of quantum mechanics dictates that it is impossible to make a perfect copy of a quantum particle. But researchers found a way around this via teleportation, which allows a flawless copy of a property of a particle to be made. This occurs thanks to what is ultimately a complete transfer (rather than an actual copy) of the property onto another particle such that the first particle loses all of the properties that are teleported.
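The basic single-property protocol can be checked in a few lines of linear algebra. This sketch (plain numpy, no quantum library) teleports one qubit state through a shared Bell pair and verifies that Bob recovers it for every measurement outcome, while Alice's copy is consumed by her measurement:

```python
import numpy as np

# Minimal statevector sketch of single-qubit teleportation: Alice sends an
# unknown state to Bob using one shared Bell pair, two classical bits, and
# a Pauli correction. Nothing is cloned: Alice's copy is destroyed by the
# measurement, consistent with the no-cloning theorem.

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

psi = np.array([0.6, 0.8j])                  # unknown state to teleport
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # shared pair (|00> + |11>)/sqrt(2)
state = np.kron(psi, bell)                   # qubits: 0, 1 Alice; 2 Bob

# Alice entangles her qubit with her half of the pair, then measures.
state = np.kron(CNOT, I2) @ state            # CNOT: qubit 0 controls qubit 1
state = np.kron(H, np.eye(4)) @ state        # Hadamard on qubit 0

# Each outcome (m0, m1) occurs with probability 1/4; after Bob applies
# Z^m0 X^m1, his qubit matches psi exactly.
fidelities = []
for m0 in (0, 1):
    for m1 in (0, 1):
        bob = state.reshape(2, 2, 2)[m0, m1, :]   # project onto the outcome
        bob = bob / np.linalg.norm(bob)
        fixed = np.linalg.matrix_power(Z, m0) @ np.linalg.matrix_power(X, m1) @ bob
        fidelities.append(abs(np.vdot(psi, fixed)))

print([round(f, 6) for f in fidelities])   # -> [1.0, 1.0, 1.0, 1.0]
```

The two-property experiment described below is conceptually the same but needs a hyper-entangled channel, which is what made it so much harder to realise.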

Teleporting more than one state simultaneously is essential to fully describe a quantum particle and achieving this would be a tentative step towards teleporting something larger than a quantum particle, which could be very useful in the exchange of quantum information. Now, Chaoyang Lu and Jian-Wei Pan, along with colleagues at the University of Science and Technology of China in Hefei, have taken the first step in simultaneously teleporting multiple properties of a single photon.

In the experiment, the team teleports the composite quantum states of a single photon encoded in both its spin and OAM. To transfer the two properties requires not only an extra entangled set of particles (the quantum channel), but a "hyper-entangled" set – where the two particles are simultaneously entangled in both their spin and their OAM. The researchers shine a strong ultraviolet pulsed laser on three nonlinear crystals to generate three entangled pairs of photons – one pair is hyper-entangled and is used as the "quantum channel", a second entangled pair is used to carry out an intermediate "non-destructive" measurement, while the third pair is used to prepare the two-property state of a single photon that will eventually be teleported.

Scooped by Dr. Stefan Gruenwald!

Imaging the 3D structure of a single virus using the world's most powerful x-ray free-electron laser

Imaging the 3D structure of a single virus using the world's most powerful x-ray free-electron laser | Amazing Science |

By measuring a series of diffraction patterns from a virus injected into an XFEL beam, researchers at Stanford’s Linac Coherent Light Source (LCLS) have determined the first three-dimensional structure of a single virus, using a mimivirus.

X-ray crystallography has solved the vast majority of the structures of proteins and other biomolecules. The success of the method relies on growing large crystals of the molecules, which isn’t possible for all molecules.

“Free-electron lasers provide femtosecond X-ray pulses with a peak brilliance ten billion times higher than any previously available X-ray source,” the researchers note in a paper in Physical Review Letters. “Such a large jump in one physical quantity is very rare, and can have far reaching implications for several areas of science. It has been suggested that such pulses could outrun key damage processes and allow structure determination without the need for crystallization.”

The current resolution of the technique (about 100 nanometers) would be sufficient to image important pathogenic viruses like HIV, influenza and herpes, and further improvements may soon allow researchers to tackle the study of single proteins, the scientists say.

Mimivirus is one of the largest known viruses. The viral capsid is about 450 nanometers in diameter and is covered by a layer of thin fibres. A 3D structure of the viral capsid exists, but the 3D structure of the inside was previously unknown.

Scooped by Dr. Stefan Gruenwald!

NOAA Announces Arrival Of El Niño, 2015 Poised To Beat 2014 For Hottest Year Ever Recorded

NOAA Announces Arrival Of El Niño, 2015 Poised To Beat 2014 For Hottest Year Ever Recorded | Amazing Science |

The National Oceanic and Atmospheric Administration (NOAA) has announced that the long-awaited El Niño has arrived. NOAA’s Climate Prediction Center says we now have “borderline, weak El Niño conditions,” and there is a “50-60% chance that El Niño conditions will continue” through the summer.

An El Niño is “characterized by unusually warm ocean temperatures in the Equatorial Pacific,” as NOAA has explained. That contrasts with the unusually cold temperatures in the Equatorial Pacific during a La Niña. Both are associated with extreme weather around the globe (though a weak El Niño like this will tend to have a muted effect). El Niños tend to set the record for the hottest years, since the regional warming adds to the underlying global warming trend. La Niña years tend to be below the global warming trend line.
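NOAA's operational rule can be written out directly: El Niño conditions are declared when the Oceanic Niño Index (the 3-month running mean of Niño-3.4 sea-surface temperature anomalies) stays at or above +0.5 °C for five consecutive overlapping seasons. A sketch with invented anomaly values:

```python
# Sketch of NOAA's operational criterion for an El Niño: the Oceanic Niño
# Index (3-month running mean of Niño-3.4 sea-surface temperature
# anomalies) at or above +0.5 deg C for five consecutive overlapping
# seasons. The monthly anomaly values below are invented.

def oni(monthly_anomalies):
    """3-month running means of monthly SST anomalies (deg C)."""
    a = monthly_anomalies
    return [sum(a[i:i + 3]) / 3 for i in range(len(a) - 2)]

def el_nino_declared(monthly_anomalies, threshold=0.5, run=5):
    streak = 0
    for season in oni(monthly_anomalies):
        streak = streak + 1 if season >= threshold else 0
        if streak >= run:
            return True
    return False

weak_event = [0.3, 0.5, 0.6, 0.7, 0.6, 0.6, 0.5, 0.6, 0.4]
print(el_nino_declared(weak_event))   # -> True: a borderline, weak event
print(el_nino_declared([0.4] * 12))   # -> False: never reaches +0.5 deg C
```

Anomalies that hover just above the threshold, as in the invented series here, are exactly the "borderline, weak" conditions NOAA describes.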

If even a weak El Niño does persist through summer, 2015 will almost certainly top 2014 as the hottest year on record. But there is a good chance it will do so in any case (unless a La Niña forms). After all, 2014 was the hottest year on record even though there was no official El Niño during the year. It’s just hard to stop the march of human-caused global warming — without actually sharply cutting greenhouse gas emissions.

Significantly, because 1998 was an unusually strong “super El Niño,” and because we haven’t had an El Niño since 2010, it appeared for a while (to some) as if global warming had slowed — if you cherry-picked a relatively recent start year (and ignored the rapid warming in the oceans, where 90 percent of human-caused planetary warming goes). In fact, however, several recent studies confirmed that planetary warming continues apace everywhere you look.

Scooped by Dr. Stefan Gruenwald!

Google collaborates with UCSB to build a quantum device that detects and corrects its own errors

Google collaborates with UCSB to build a quantum device that detects and corrects its own errors | Amazing Science |

Google launches an effort to build its own quantum computer that has the potential to change computing forever. Google is about to begin designing and building hardware for a quantum computer, a type of machine that can exploit quantum physics to solve problems that would take a conventional computer millions of years. Since 2009, Google has been working with controversial startup D-Wave Systems, which claims to make “the first commercial quantum computer.” Last year, Google purchased one of D-Wave’s machines to be able to test the machine thoroughly. But independent tests published earlier this year found no evidence that D-Wave’s computer uses quantum physics at all to solve problems more efficiently than a conventional machine.

Now, John Martinis, a professor at University of California, Santa Barbara, has joined Google to establish a new quantum hardware lab near the university. He will try to make his own versions of the kind of chip inside a D-Wave machine. Martinis has spent more than a decade working on a more proven approach to quantum computing, and built some of the largest, most error-free systems of qubits, the basic building blocks that encode information in a quantum computer.

“We would like to rethink the design and make the qubits in a different way,” says Martinis of his effort to improve on D-Wave’s hardware. “We think there’s an opportunity in the way we build our qubits to improve the machine.” Martinis has taken a joint position with Google and UCSB that will allow him to continue his own research at the university.

Quantum computers could be immensely faster than any existing computer at certain problems. That’s because qubits working together can use the quirks of quantum mechanics to quickly discard incorrect paths to a solution and home in on the correct one. However, qubits are tricky to operate because quantum states are so delicate.

Chris Monroe, a professor who leads a quantum computing lab at the University of Maryland, welcomed the news that one of the leading lights in the field was going to work on the question of whether designs like D-Wave’s can be useful. “I think this is a great development to have legitimate researchers give it a try,” he says.

Since showing off its first machine in 2007, D-Wave has irritated academic researchers by making claims for its computers without providing the evidence its critics say is needed to back them up. However, the company has attracted over $140 million in funding and sold several of its machines (see “The CIA and Jeff Bezos Bet on Quantum Computing”).

There is no question that D-Wave’s machine can perform certain calculations. And research published in 2011 showed that the machine’s chip harbors the right kind of quantum physics needed for quantum computing. But evidence is lacking that it uses that physics in the way needed to unlock the huge speedups promised by a quantum computer. It could be solving problems using only ordinary physics.

Martinis’s previous work has been focused on the conventional approach to quantum computing. He set a new milestone in the field this April, when his lab announced that it could operate five qubits together with relatively low error rates. Larger systems of such qubits could be configured to run just about any kind of algorithm depending on the problem at hand, much like a conventional computer. To be useful, a quantum computer would probably need to be built with tens of thousands of qubits or more.

Martinis was a coauthor on a paper published in Science earlier this year that took the most rigorous independent look at a D-Wave machine yet. It concluded that in the tests run on the computer, there was “no evidence of quantum speedup.” Without that, critics say, D-Wave is nothing more than an overhyped, and rather weird, conventional computer. The company counters that the tests of its machine involved the wrong kind of problems to demonstrate its benefits.

Martinis’s work on D-Wave’s machine led him into talks with Google, and to his new position. Theory and simulation suggest that it might be possible for annealers to deliver quantum speedups, and he considers it an open question. “There’s some really interesting science that people are trying to figure out,” he says.

Benjamin Chiong's curator insight, March 23, 2015 7:23 PM

Looking at Amdahl's law, it is not only the data storage that matters but every component of the computer. As each piece of hardware advances, the rest of the parts should be able to keep up as well. Quantum computing forges a world that allows massive processing power to analyze big data. This gives us an idea of what the future might look like.

Scooped by Dr. Stefan Gruenwald!

Potential Ebola Vaccination Even After Potentially High-Risk Exposure

Potential Ebola Vaccination Even After Potentially High-Risk Exposure | Amazing Science |

Safe and effective vaccines and drugs are needed for the prevention and treatment of Ebola virus disease, including after a potentially high-risk exposure such as a needlestick. This report assesses the response to postexposure vaccination in a health care worker who was exposed to the Ebola virus.

The paper is a case report of a physician who experienced a needlestick while working in an Ebola treatment unit in Sierra Leone on September 26, 2014. Medical evacuation to the United States was rapidly initiated. Given the concern about potentially lethal Ebola virus disease, the patient was offered, and provided his consent for, postexposure vaccination with an experimental vaccine available through an emergency Investigational New Drug application. He was vaccinated on September 28, 2014. The vaccine used was VSVΔG-ZEBOV, a replicating, attenuated, recombinant vesicular stomatitis virus (serotype Indiana) whose surface glycoprotein gene was replaced by the Zaire Ebola virus glycoprotein gene. This vaccine has entered a clinical trial for the prevention of Ebola in West Africa.

The vaccine was administered 43 hours after the needlestick occurred. Fever and moderate to severe symptoms developed 12 hours after vaccination and diminished over 3 to 4 days. The real-time reverse transcription polymerase chain reaction results were transiently positive for vesicular stomatitis virus nucleoprotein gene and Ebola virus glycoprotein gene (both included in the vaccine) but consistently negative for Ebola virus nucleoprotein gene (not in the vaccine). Early postvaccination cytokine secretion and T lymphocyte and plasmablast activation were detected. Subsequently, Ebola virus glycoprotein-specific antibodies and T cells became detectable, but antibodies against Ebola viral matrix protein 40 (not in the vaccine) were not detected.

It is currently unknown if VSVΔG-ZEBOV is safe or effective for post-exposure vaccination in humans who have experienced a high-risk occupational exposure to the Ebola virus, such as a needlestick. In this patient, postexposure vaccination with VSVΔG-ZEBOV induced a self-limited febrile syndrome that was associated with transient detection of the recombinant vesicular stomatitis vaccine virus in blood. Strong innate and Ebola-specific adaptive immune responses were detected after vaccination. The clinical syndrome and laboratory evidence were consistent with vaccination response, and no evidence of Ebola virus infection was detected.

Scooped by Dr. Stefan Gruenwald!

Study indicates e-cigarette vapor "not significant" in terms of airborne pollutants

Study indicates e-cigarette vapor "not significant" in terms of airborne pollutants | Amazing Science |

Leading commercial electronic cigarettes were tested to determine bulk composition. The e-cigarettes and conventional cigarettes were evaluated using machine-puffing to compare nicotine delivery and relative yields of chemical constituents. The e-liquids tested were found to contain humectants (glycerin and/or propylene glycol; ⩾75% content), water (<20%), nicotine (approximately 2%) and flavorings (<10%). The aerosol collected mass (ACM) of the e-cigarette samples was similar in composition to the e-liquids. Aerosol nicotine for the e-cigarette samples was 85% lower than nicotine yield for the conventional cigarettes.

Analysis of the smoke from conventional cigarettes showed that the mainstream cigarette smoke delivered approximately 1,500 times more harmful and potentially harmful constituents (HPHCs) tested when compared to e-cigarette aerosol or to puffing room air. The deliveries of HPHCs tested for these e-cigarette products were similar to the study air blanks rather than to deliveries from conventional cigarettes; no significant contribution of cigarette smoke HPHCs from any of the compound classes tested was found for the e-cigarettes. Thus, the results of this study support previous researchers’ discussion of e-cigarette products’ potential for reduced exposure compared to cigarette smoke.

Scooped by Dr. Stefan Gruenwald!

A paralyzed woman flew an F-35 fighter jet in a simulator — using only her mind

A paralyzed woman flew an F-35 fighter jet in a simulator — using only her mind | Amazing Science |

Over at the Defense Advanced Research Projects Agency, also known as DARPA, there are some pretty amazing (and often top-secret) things going on. But one notable component of a DARPA project was revealed by a Defense Department official at a recent forum, and it is the stuff of science fiction movies. According to DARPA Director Arati Prabhakar, a paralyzed woman was successfully able use her thoughts to control an F-35 and a single-engine Cessna in a flight simulator.

It's just the latest advance for one woman, 55-year-old Jan Scheuermann, who has been the subject of two years of groundbreaking neurosignaling research.  First, Scheuermann began by controlling a robotic arm and accomplishing tasks such as feeding herself a bar of chocolate and giving high fives and thumbs ups. Then, researchers learned that -- surprisingly -- Scheuermann was able to control both right-hand and left-hand prosthetic arms with just the left motor cortex, which is typically responsible for controlling the right-hand side. After that, Scheuermann decided she was up for a new challenge, according to Prabhakar.

"Jan decided that she wanted to try flying a Joint Strike Fighter simulator," Prabhakar said, prompting laughter from the crowd at the New America Foundation's Future of War forum. "So Jan got to fly in the simulator."

Unlike pilots who use the simulator technology for training, Scheuermann wasn't thinking about controlling the plane with a joystick. She thought about flying the plane itself -- and it worked. "In fact," Prabhakar noted, "for someone who's never flown -- she's not a pilot in real life -- she's in there flying a simulator directly from neurosignaling."

Scheuermann has been paralyzed since 2003 because of a neurodegenerative condition. In 2012, she agreed to be fitted with two probes on the surface of her brain in the motor cortex area responsible for right hand and arm movements. In the last two years, she has tolerated those probes better than expected; as a result, she's been the subject of increasingly sophisticated experiments in conjunction with the University of Pittsburgh Medical Center and DARPA's Revolutionizing Prosthetics program, to determine just how much she can do simply by thinking about it.
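At the heart of such neurosignaling control is a decoder that maps recorded firing rates to intended movement. A minimal sketch with synthetic data, not the Pittsburgh team's actual pipeline: calibrate linear weights by least squares, then decode 2-D velocity from the rates:

```python
import numpy as np

# Sketch of the decoding idea behind this kind of brain-computer control:
# a linear map from motor-cortex firing rates to intended 2-D velocity,
# calibrated by least squares. Data are synthetic; real systems add spike
# sorting, filtering (e.g. Kalman), and continual recalibration.

rng = np.random.default_rng(0)
n_cells, n_samples = 20, 200

true_W = rng.normal(size=(2, n_cells))          # "ground truth" tuning weights
rates = rng.normal(size=(n_cells, n_samples))   # recorded firing rates
velocity = true_W @ rates                       # intended velocity (noiseless)

# Calibration: solve rates.T @ W.T ~= velocity.T for the decoder weights.
W_hat, *_ = np.linalg.lstsq(rates.T, velocity.T, rcond=None)
W_hat = W_hat.T

decoded = W_hat @ rates
print(np.allclose(decoded, velocity, atol=1e-8))   # -> True
```

Flying a simulator "directly from neurosignaling" amounts to routing such decoded commands to the aircraft controls instead of to a robotic arm.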

Scooped by Dr. Stefan Gruenwald!

Greenhouse Effect: Here’s why gas really costs Americans $6.25 a gallon

Greenhouse Effect: Here’s why gas really costs Americans $6.25 a gallon | Amazing Science |

It’s almost April 15, and you may be worrying about how much taxes will hurt this year. But a new study published today suggests there’s a whole world of economic losses in the air around us that few of us know anything about.

The study, published in the journal Climatic Change, is the first to pull together a proper accounting of the hidden costs of greenhouse gas emissions. It shows the true (and much higher) cost that we pay in dollars at the pump and light switch—or in human lives at the emergency room.

Drew Shindell, a professor at Duke University, has attempted to play CPA to our industrialized, emitting world. He has tabulated what he calls “climate damages” for a whole range of climate pollutants: greenhouse gases like CO2 and methane, aerosols, and more persistent ones like nitrous oxide.

If these damages are added in like the gas tax, a gallon of regular in the United States would really cost $6.25. The price of diesel would be a whopping $7.72 a gallon.

Shindell also estimated the yearly damages from power plants in the U.S. Using coal costs us the most, with climate damages adding an almost 30 cents per kilowatt hour to the current price of 10 cents we now pay. The gas-fired power price rises to 17 cents from 7 cents per kilowatt hour.

For the average homeowner who uses natural gas, the real bill after climate damages is double. And for those of us who get our electricity from coal-fired power plants, our energy bills are really four times what we see in our monthly statements.
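The electricity figures are simple arithmetic on the quoted damage estimates, which is worth making explicit:

```python
# Back-of-envelope check of the per-kWh figures quoted above: Shindell's
# estimated climate damages (cents/kWh) added to current retail prices.

def true_price_cents(current, damage):
    return current + damage

coal = true_price_cents(10, 30)   # coal: 10 cents now + ~30 cents damages
gas = true_price_cents(7, 10)     # gas: 7 cents now, rising to 17

print(coal, gas, coal / 10)   # -> 40 17 4.0
```

So coal electricity at roughly 40 cents/kWh is about four times its nominal price, matching the "four times" figure above.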

Shindell calculates the total yearly emissions price tag—between transportation, electricity, and industrial combustion—at between $330-970 billion. That wide spread depends on the choice of a discount rate, which reflects the relative value of money over the years and decades of climate change to come.

Scooped by Dr. Stefan Gruenwald!

2.8 million-year-old jawbone found in Ethiopia

2.8 million-year-old jawbone found in Ethiopia | Amazing Science |

A 2.8 million-year-old jawbone may be the oldest human fossil in existence, according to two papers published simultaneously in Science. Researchers now suspect that Homo (the genus that includes modern humans) dates back at least 400,000 years earlier than previously thought.

For decades scientists have been scouring Africa for ancient human remains. Archaeologists think Homo habilis, the first truly “human-like” primate, lived about 2.5 million years ago, and Lucy, the human-ape mashup who is perhaps our most famous ancestor, lived about 3.2 million years ago.

But this particular fossil, found in the Afar region of Ethiopia and temporarily named LD 350-1, appears to be a new type of Homo that falls right between Lucy and Homo habilis. The fossil’s slim molars and proportionate jaw are hallmarks of habilis, for instance, but its primitive chin looks a lot more like Lucy’s. For now, the researchers are calling their discovery “Homo species indeterminate,” as they still aren’t exactly sure what it is.

Most 2.8 million-year-old fossils are too ancient to date by conventional means, so the researchers sampled volcanic ash above and below the jawbone and then used argon-40 dating to determine the age of the eruption that formed each sample. The results give us the youngest and oldest dates that the hominin who owned LD 350-1 could have lived—2.5 and 2.8 million years ago, respectively.
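Argon dating rests on the decay of potassium-40, a small fraction of which decays to argon-40 that stays trapped in the ash. A sketch of the standard K-Ar age equation, using the commonly quoted decay constants; the measured ratio below is invented to land near the ash ages in the paper:

```python
import math

# Sketch of the K-Ar age equation behind the argon dating of the ash
# layers. Decay constants for potassium-40 (per year): the total rate and
# the electron-capture branch that yields argon-40. The 40Ar*/40K ratio
# below is invented for illustration.

LAMBDA_TOTAL = 5.543e-10   # 1/yr, total decay constant of 40K
LAMBDA_AR = 0.581e-10      # 1/yr, branch producing 40Ar

def k_ar_age_years(ar40_over_k40):
    """Age in years from the measured radiogenic 40Ar/40K ratio."""
    return math.log(1 + (LAMBDA_TOTAL / LAMBDA_AR) * ar40_over_k40) / LAMBDA_TOTAL

age = k_ar_age_years(1.63e-4)   # a tiny ratio: geologically young ash
print(round(age / 1e6, 2))      # -> ~2.8 (million years)
```

Dating the ash above and below the fossil then brackets the age of the sediment, and hence the hominin, between the two eruption ages.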

JebaQpt's comment, November 24, 2015 6:33 AM
Today Google Doodle for the Discovery of Lucy
Scooped by Dr. Stefan Gruenwald!

Scientists reveal the body weight of the world's most complete Stegosaurus

Scientists reveal the body weight of the world's most complete Stegosaurus | Amazing Science |

Scientists have discovered that a 150 million year old Stegosaurus stenops specimen would have been similar in weight to a small rhino when it died.

Calculating body mass in animals that have been dead for many millions of years has been difficult for scientists. There are two methods for calculating body mass. One relies on researchers taking measurements of limb bones and extrapolating body mass from a large dataset of living animals, while the other produces a 3D model of the animal and applies densities to body segments to calculate mass. However, both often have varying results.

The researchers from Imperial College London and the Natural History Museum are the first to combine both methods to calculate the body mass of an extinct creature to get an accurate measurement. They used this approach on a Stegosaurus skeleton nicknamed Sophie, which was found in Wyoming in the USA in 2003. They have calculated that Sophie would have weighed around 1,600 kg, similar in weight to a small rhino.
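Both methods reduce to short formulas. The sketch below illustrates them with a limb-circumference regression of the kind used for quadrupeds (the Campione & Evans 2012 equation, quoted here from the literature) and a simple volumetric estimate; the input values are invented for illustration, not Sophie's actual measurements:

```python
import math

# Two independent mass-estimation methods, as described above. The
# regression is the Campione & Evans (2012) quadruped equation relating
# body mass to combined humerus + femur shaft circumference; the
# circumference and volume values below are invented, not Sophie's data.

def mass_from_limbs_kg(circumference_mm):
    """Body mass (kg) from combined humerus+femur circumference (mm)."""
    log_mass_g = 2.754 * math.log10(circumference_mm) - 1.097
    return 10 ** log_mass_g / 1000.0

def mass_from_volume_kg(volume_m3, density_kg_m3=1000.0):
    """Body mass (kg) from a 3D-model volume and an assumed tissue density."""
    return volume_m3 * density_kg_m3

print(round(mass_from_limbs_kg(448)))   # about 1600 kg, a small-rhino mass
print(mass_from_volume_kg(1.6))         # 1.6 m^3 at roughly water density
```

Agreement between the two very different estimators is what gives the combined approach its claimed accuracy.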

Dr Susannah Maidment, Junior Research Fellow from the Department of Earth Science and Engineering at Imperial College London, said: “Although the Stegosaurus is something of an iconic dinosaur, scientists know very little about its biology because its fossils are surprisingly rare.  We don't actually know whether Sophie was female or male, despite its nickname. When it died, Sophie was a young adult - equivalent to a human teenager. Although there is no evidence for why it died, it seems that the carcass fell into a shallow pond, where it was quickly buried, preventing other animals from scavenging it, and explaining why it is so well preserved.”

Scooped by Dr. Stefan Gruenwald!

Ultra-cold mirrors could reveal gravity's quantum side

Ultra-cold mirrors could reveal gravity's quantum side | Amazing Science |

An experiment not much bigger than a tabletop, using ultra-cold metal plates, could serve up a cosmic feast. It could give us a glimpse of quantum gravity and so lead to a "theory of everything": one that unites the laws of quantum mechanics, governing the very small, and those of general relativity, concerning the monstrously huge.

Such theories are difficult to test in the lab because they probe such extreme scales. But quantum effects have a way of showing up unexpectedly. In a strange quantum phenomenon known as the Casimir effect, two sheets of metal held very close together in a vacuum will attract each other.

The effect occurs because, even in empty space, there is an electromagnetic field that fluctuates slightly all the time. Placing two metal sheets very close to one another limits the fluctuations between them, because the sheets reflect electromagnetic waves. But elsewhere the fluctuations are unrestricted, and this pushes the plates together.
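For a sense of scale, the electromagnetic Casimir pressure between ideal parallel mirrors has a closed form, P = π²ħc/(240 d⁴) for plate separation d:

```python
import math

# Scale of the (electromagnetic) Casimir effect between ideal mirrors:
# attractive pressure P = pi^2 * hbar * c / (240 * d^4) at separation d.

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
C = 2.99792458e8         # speed of light, m/s

def casimir_pressure_pa(d_m):
    """Casimir pressure (Pa) between perfect parallel plates d metres apart."""
    return math.pi ** 2 * HBAR * C / (240 * d_m ** 4)

print(casimir_pressure_pa(1e-6))   # ~1.3e-3 Pa at a 1 micron gap
print(casimir_pressure_pa(1e-8))   # ~1.3e5 Pa (about 1 atm) at 10 nm
```

The steep 1/d⁴ dependence is why the effect is only measurable at sub-micron separations, and why a gravitational analogue would be so delicate to detect.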

James Quach at the University of Tokyo suggests that we might be able to observe the equivalent effect for gravity. That would, in turn, be direct evidence of the quantum nature of gravity: the Casimir effect depends on vacuum fluctuations, which are only predicted by quantum physics.

But in order to detect it, you would need something that reflects gravitational waves – the ripples in space-time predicted by general relativity. Earlier research suggested that superconductors (materials cooled to close to absolute zero so that they lose all electrical resistance) might act as mirrors in this way.

"The quantum properties of superconductors may reflect gravitational waves. If this is correct, then the gravitational Casimir effect for superconductors should be large," says Quach. "The experiment I propose is feasible with current technology."

It's still unclear if superconductors actually reflect gravitational waves, however. "The exciting part of this paper has to do with a speculative idea about gravitational waves and superconductors," says Dimitra Karabali at Lehman College in New York. "But if it's right, it's wonderful."
