Amazing Science
Amazing science facts - 3D_printing • aging • AI • anthropology • art • astronomy • bigdata • bioinformatics • biology • biotech • chemistry • computers • cosmology • education • environment • evolution • future • genetics • genomics • geosciences • green_energy • history • language • map • material_science • math • med • medicine • microscopy • nanotech • neuroscience • paleontology • photography • photonics • physics • postings • robotics • science • technology • video
Scooped by Dr. Stefan Gruenwald

Evolving Soft Robots with Multiple Materials (muscle, bone, etc.)

Here we evolve the bodies of soft robots made of multiple materials (muscle, bone, & support tissue) to move quickly. Evolution produces a diverse array of fun, wacky, interesting, but ultimately functional soft robots. Enjoy!
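The evolutionary loop described above can be sketched in a few lines. This is a toy illustration only, not the paper's CPPN-based generative encoding or the VoxCad physics simulation: a "body" is a flat list of voxel materials, and the fitness function is a hypothetical stand-in that rewards muscle braced by bone rather than simulated locomotion speed.

```python
import random

# Toy sketch of evolving multi-material voxel bodies. Materials:
# 0 = empty, 1 = bone, 2 = muscle, 3 = soft support tissue.

MATERIALS = (0, 1, 2, 3)
GRID = 4 * 4 * 4  # 4x4x4 voxel workspace

def fitness(body):
    # Hypothetical proxy for locomotion speed: muscle drives movement,
    # but only insofar as there is bone to brace against.
    muscle, bone = body.count(2), body.count(1)
    return min(muscle, 2 * bone) + 0.1 * muscle

def mutate(body, rate=0.05):
    return [random.choice(MATERIALS) if random.random() < rate else v
            for v in body]

def evolve(generations=50, pop_size=20, seed=0):
    random.seed(seed)
    pop = [[random.choice(MATERIALS) for _ in range(GRID)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]            # truncation selection
        pop = survivors + [mutate(s) for s in survivors]
    return max(pop, key=fitness)

best = evolve()
print(f"best toy fitness: {fitness(best):.1f}")
```

In the actual work, the material grid is generated indirectly by a CPPN evolved with NEAT, and fitness comes from simulating each body in VoxCad.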

This video accompanies the following paper: Unshackling Evolution: Evolving Soft Robots with Multiple Materials and a Powerful Generative Encoding. Cheney, MacCurdy, Clune, & Lipson. Proceedings of the Genetic and Evolutionary Computation Conference. 2013. 

PDF: http://jeffclune.com/publications/201...

The work was performed by members of the Cornell Creative Machines Lab: http://creativemachines.cornell.edu and the Evolving Artificial Intelligence Lab at the University of Wyoming: http://JeffClune.com

The simulator we used is called VoxCad, by Jon Hiller: http://www.VoxCad.com

 

More videos: http://jeffclune.com/videos.html


6 year project to map how nerve connections develop in babies' brains while still in the womb and after birth

Scientists in the UK have launched a six-year project to create the first map of babies' brains during a critical period of growth in the womb and after birth.

 

By the time a baby takes its first breath many of the key pathways between nerves have already been made. And some of these will help determine how a baby thinks or sees the world, and may have a role to play in the development of conditions such as autism, scientists say.

 

But how this rich neural network assembles in the baby before birth is relatively uncharted territory.

 

Researchers from Guy's and St Thomas' Hospital, King's College London, Imperial College and Oxford University aim to produce a dynamic wiring diagram of how the brain grows, at a level of detail that they say has been impossible until now.

 

They hope that by charting the journeys of bundles of nerves in the final three months of pregnancy, doctors will be able to understand more about how they can help in situations when this process goes wrong.

 

Prof David Edwards, director of the Centre for the Developing Brain, who is leading the research, says: "There is a distressing number of children in our society who grow up with problems because of things that happen to them around the time of birth or just before birth.

 

"It is very important to be able to scan babies before they are born, because we can capture a period when an awful lot is changing inside the brain, and it is a time when a great many of the things that might be going wrong do seem to be going wrong."

 

The study - known as the Developing Human Connectome Project - hopes to look at more than 1,500 babies, studying many aspects of their neurological development.

 

By examining the brains of babies while they are still growing in the womb, as well as those born prematurely and at full term, the scientists will try to define baselines of normal development and investigate how these may be affected by problems around birth.

 

Researchers aim to understand more about how the brain is affected by prematurity and they plan to share their map with the wider research community.


Oldest dinosaur embryo fossils discovered in China

Nesting site yields earliest known organic remains of a terrestrial vertebrate.

 

Palaeontologists working in China have unearthed the earliest collection of fossilized dinosaur embryos to date. The trove includes remains from many individuals at different developmental stages, providing a unique opportunity to investigate the embryonic development of a prehistoric species.

 

Robert Reisz, a palaeontologist at the University of Toronto in Mississauga, Canada, and his colleagues discovered the sauropodomorph fossils in a bone bed in Lufeng County that dates to the Early Jurassic period, 197 million to 190 million years ago. The site contained eggshells and more than 200 disarticulated bones — the oldest known traces of budding dinosaurs, the researchers report in Nature.

 

“Most of our record of dinosaur embryos is concentrated in the Late Cretaceous period,” says David Evans, curator of vertebrate palaeontology at the Royal Ontario Museum in Toronto. “This [study] takes a detailed record of dinosaur embryology and pushes it back over 100 million years.”

But it is not just the age of the fossils that is notable, the researchers say. Spectroscopic analysis of bone-tissue samples from the Chinese nesting site revealed the oldest organic material ever seen in a terrestrial vertebrate. That was surprising because the fossilized femur bones were delicate and porous, which made them vulnerable to the corrosive effects of weathering and groundwater, says Reisz.

 

“That suggests to us that other dinosaur fossils might have organic remains,” he says. “We just haven’t looked at them in the right ways.” 

Reisz thinks that the complex proteins his team detected in that organic material are preserved collagen. Because collagen composition varies across species, further analyses could help researchers to compare the sauropodomorph fossils with those of other creatures. They include the mighty sauropods, close relatives — and perhaps descendants — of early sauropodomorphs that weighed in at about 100,000 kilograms each, making them the largest animals ever to roam Earth.

 

The researchers think that the Lufeng dinosaurs are sauropodomorphs because they are similar in many ways to intact embryonic skeletons of Massospondylus, a sauropodomorph that Reisz unearthed in South Africa in 2005. But their analysis does identify key differences between the two fossil finds. The Lufeng embryos were less developmentally advanced than the Massospondylus embryos, and they seem to be examples of a different genus, Lufengosaurus.


Shingled recording pattern may lead to increased hard drive capacities


Modern hard drive technology is reaching its limits. Engineers have increased data-storage capacities by reducing the widths of the narrow tracks of magnetic material that record data inside a hard drive. Narrowing these tracks has required a corresponding reduction in the size of the magnetic write head—the device used to create them. However, it is physically difficult to reduce the size of write heads any further. Kim Keng Teo and co-workers at the A*STAR Data Storage Institute, Singapore, and the Niigata Institute of Technology, Japan, have recently performed an analysis that highlights the promise of an alternative approach, which may sidestep this problem completely.

 

In a conventional hard drive, a write head stores data by applying a magnetic field to a series of parallel, non-overlapping tracks. Halving the width of the track effectively doubles the data-storage capacity, but also requires the size of the write head to be halved. The head therefore produces less magnetic field than is needed to enable stable data storage, because the small magnetic grains that are characteristic of modern hard drive media need to be thermally stable at room temperature.

Shingled magnetic recording represents a step towards solving this problem, as it allows for narrower track widths without smaller write heads. Rather than writing to non-overlapping tracks, the approach overlaps tracks just as shingles on a roof overlap (see image). Tracks are written in a so-called 'raster' pattern, with new data written to one side only of the last-written track.

Teo and co-workers analyzed the scaling behavior of this approach by using both numerical analysis and experimental verification. Their results showed that the size of the data track is not limited by the size of the write head, as in conventional hard drives. Instead, the track size is limited by the size of the magnetic read head, and by the 'erase bandwidth', which represents the portion of the track edge that is affected by adjacent tracks. "This is a paradigm shift for the industry," says Teo.

 

"A relatively small difference in the way that writing occurs calls for a completely new approach to head design." Teo expects the shingled approach to be a useful stop-gap measure prior to the arrival of more advanced, next-generation technologies in the next decade or so that will apply more radical modifications to the hard drive such as the use of heat to assist the write head.
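The scaling argument above can be made concrete with some back-of-envelope arithmetic. The widths below are hypothetical example values, not figures from the A*STAR study:

```python
# Illustrative track-pitch arithmetic for shingled magnetic recording.
# All widths are assumed example values in nanometres.

WRITE_HEAD_NM = 70.0   # write-pole width: sets conventional track pitch
READ_HEAD_NM = 30.0    # read-sensor width
ERASE_BAND_NM = 10.0   # track-edge region disturbed by writing a neighbour

# Conventional recording: tracks must not overlap, so the pitch can be no
# smaller than the write head itself.
conventional_pitch = WRITE_HEAD_NM

# Shingled recording: each new track may overwrite part of the previous
# one, so only a readable strip (reader plus erase band) must survive.
shingled_pitch = READ_HEAD_NM + ERASE_BAND_NM

gain = conventional_pitch / shingled_pitch
print(f"track-density gain from shingling: {gain:.2f}x")
```

The point is structural: the gain depends on the reader and erase-band widths, and the write head drops out of the denominator entirely.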


World First: Device keeps human liver alive outside body


In a world first, a donated human liver has been 'kept alive' outside a human being and then successfully transplanted into a patient in need of a new liver.

 

So far the procedure has been performed on two patients on the liver transplant waiting list and both are making excellent recoveries.

Currently transplantation depends on preserving donor organs by putting them ‘on ice’ – cooling them to slow their metabolism. But this often leads to organs becoming damaged.

 

The technology, developed at Oxford University and now being trialled at the liver transplant centre at King’s College Hospital as part of a controlled clinical investigation, could preserve a functioning liver outside the body for 24 hours. A donated human liver connected to the device is raised to body temperature and oxygenated red blood cells are circulated through its capillaries. Once on the machine, a liver functions normally just as it would inside a human body, regaining its colour and producing bile.

 

The results from the first two transplants, carried out at King’s College Hospital in February 2013, suggest that the device could be useful for all patients needing liver transplants. Based on pre-clinical data, the new device could also enable the preservation of livers which would otherwise be discarded as unfit for transplantation – potentially as much as doubling the number of organs available for transplant and prolonging the maximum period of organ preservation to 24 hours.

 

‘These first clinical cases confirm that we can support human livers outside the body, keep them alive and functioning on our machine and then, hours later, successfully transplant them into a patient,’ said Professor Constantin Coussios of Oxford University's Department of Engineering Science, one of the machine's inventors and Technical Director of OrganOx, the University spin-out created to bring the device from bench to bedside.

 

‘The device is the very first completely automated liver perfusion device of its kind: the organ is perfused with oxygenated red blood cells at normal body temperature, just as it would be inside the body, and can for example be observed making bile, which makes it an extraordinary feat of engineering.’

 


Thunderstorms contain invisible pulses of ‘dark lightning’ - powerful radiation

Scientists investigate previously unknown sprays of X-rays and bursts of gamma rays.

 

A lightning bolt is one of nature’s most over-the-top phenomena, rarely failing to elicit at least a pang of awe no matter how many times a person has witnessed one. With his iconic kite-and-key experiments in the mid-18th century, Benjamin Franklin showed that lightning is an electrical phenomenon, and since then the general view has been that lightning bolts are big honking sparks no different in kind from the little ones generated by walking in socks across a carpeted room.

 

But scientists recently discovered something mind-bending about lightning: Sometimes its flashes are invisible, just sudden pulses of unexpectedly powerful radiation. It’s what Joseph Dwyer, a lightning researcher at the Florida Institute of Technology, has termed dark lightning.

 

Unknown to Franklin but now clear to a growing roster of lightning researchers and astronomers is that along with bright thunderbolts, thunderstorms unleash sprays of X-rays and even intense bursts of gamma rays, a form of radiation normally associated with such cosmic spectacles as collapsing stars. The radiation in these invisible blasts can carry a million times as much energy as the radiation in visible lightning, but that energy dissipates quickly in all directions rather than remaining in a stiletto-like lightning bolt.

 

Dark lightning appears sometimes to compete with normal lightning as a way for thunderstorms to vent the electrical energy that gets pent up inside their roiling interiors, Dwyer says. Unlike with regular lightning, though, people struck by dark lightning, most likely while flying in an airplane, would not get hurt. But according to Dwyer’s calculations, they might receive in an instant the maximum safe lifetime dose of ionizing radiation — the kind that wreaks the most havoc on the human body.


Drawing Einstein’s face with math: start with x(t)=-38/9sin(11/7-3t)

Wolfram Alpha collects famous faces into functions: Obama, 2pac, Sergey Brin.

 

A post on StackExchange from a couple of months ago inquired how to create the line drawings in the style that Wolfram Alpha has curated. Some debate ensued about whether the equations that produced the drawings were handwritten, but one commenter, Simon Woods, described a way to produce the curves.

 

Woods’ method, adapted from another comment by Rahul Narain, involves reverse engineering the curves using Wolfram's Mathematica by converting an image to grayscale, extracting the contours, and plotting the curve using a function “tocurve” that takes the line, a number of modes, and “symbolic parameter t” that parameterizes the line. The “Fourier” function in Mathematica will approximate the line with sinusoids, and the “Rationalize” function converts all the numbers to rational to produce equations that look similar to WolframAlpha’s collection.
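The same idea can be sketched in NumPy rather than Mathematica (a minimal illustration, not Woods' actual code): sample a closed curve as complex points x + iy, take the discrete Fourier transform, and keep only the lowest-frequency modes; writing those modes out termwise yields parametric sinusoid equations of the kind WolframAlpha publishes.

```python
import numpy as np

# Approximate a closed curve with a truncated Fourier series.

def fourier_approx(points, modes):
    """Approximate a closed curve (complex samples x + iy) with the
    2*modes + 1 lowest-frequency Fourier components."""
    n = len(points)
    coeffs = np.fft.fft(points) / n
    freqs = np.fft.fftfreq(n, d=1 / n)            # signed frequencies
    keep = np.argsort(np.abs(freqs))[: 2 * modes + 1]
    m = np.arange(n)
    approx = np.zeros(n, dtype=complex)
    for k in keep:
        # each kept mode contributes coeff * exp(2*pi*i * freq * t)
        approx += coeffs[k] * np.exp(2j * np.pi * freqs[k] * m / n)
    return approx

# Sanity check: a circle is a single Fourier mode, so one mode suffices.
n = 256
t = np.linspace(0, 2 * np.pi, n, endpoint=False)
circle = np.cos(t) + 1j * np.sin(t)
approx = fourier_approx(circle, modes=1)
print(float(np.max(np.abs(approx - circle))))
```

A face outline simply needs many more modes per contour; Mathematica's Rationalize step then turns the floating-point coefficients into the tidy fractions seen in the published equations.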

 

This procedure covers one closed-line element of a drawing, but many of the portraits on WolframAlpha have multiple elements (for instance, the color in Barack Obama’s hair, or Adele’s eyes, or Alexander Graham Bell’s beard). Once you have all of those, drawing Sergey Brin wearing a set of Google Glasses is extremely easy.


Wearable Gesture Control from Thalmic Labs Senses Your Muscles


With visions of Minority Report, many a user has hoped to control gadgets by wildly waving at a Kinect like a symphony conductor. Now there's another way to make your friends laugh at you thanks to Thalmic Labs' MYO armband, which senses motion and electrical activity in your muscles to let you control your computer or other device via Bluetooth 4.0. The company says its proprietary sensor can detect signals right down to individual fingers before you even move them, which -- coupled with an extremely sensitive 6-axis motion detector -- makes for a highly responsive experience. Feedback to the user is given through haptics in the device, which also packs an ARM processor and onboard lithium-ion batteries. MYO is now up for a limited pre-order, with Thalmic saying you won't be charged until it ships near year's end, while developers can also grab the API. If you're willing to risk some ridicule to be first on the block to grab one, hit the source.


Madagascar hit by the most severe locust plague since the 1950s


A severe plague of locusts has infested about half of Madagascar, threatening crops and raising concerns about food shortages, a UN agency says. The UN's Food and Agricultural Organization (FAO) said billions of the plant-devouring insects could cause hunger for 60% of the population.

About $22m (£14.5m) was urgently needed to fight the plague in a country where many people are poor, the FAO added.

 

It was the worst plague to hit the island since the 1950s, the FAO said.

FAO locust control expert Annie Monard told BBC Focus on Africa the plague posed a major threat to the Indian Ocean island. "The last one was in the 1950s and it had a duration of 17 years so if nothing is done it can last for five to 10 years, depending on the conditions," she said.

 

"Currently, about half the country is infested by hoppers and flying swarms - each swarm made up of billions of plant-devouring insects," the FAO said in a statement.

 

"FAO estimates that about two-thirds of the island country will be affected by the locust plague by September 2013 if no action is taken."

 

It said it needed donors to give more than $22m in emergency funding by June so that a full-scale spraying campaign could be launched to fight the plague.

 

The plague threatened pasture for livestock and rice crops - the main staple in Madagascar, the FAO said.

 

"Nearly 60% of the island's more than 22m people could be threatened by a significant worsening of hunger in a country that already had extremely high rates of food insecurity and malnutrition," it added.

 

An estimated 80% of people in Madagascar, which has a population of more than 22 million, live on less than a dollar a day.

 

The Locust Control Centre in Madagascar had treated 30,000 hectares of farmland since last October, but a cyclone in February made the situation worse, the FAO said.

 

The cyclone not only damaged crops but created "optimal conditions for one more generation of locusts to breed", it added.


The information paradox: What happens to matter falling into a black hole?

Will an astronaut who falls into a black hole be crushed or burned to a crisp?

 

In March 2012, Joseph Polchinski began to contemplate suicide — at least in mathematical form. A string theorist at the Kavli Institute for Theoretical Physics in Santa Barbara, California, Polchinski was pondering what would happen to an astronaut who dived into a black hole. Obviously, he would die. But how?

 

According to the then-accepted account, he wouldn’t feel anything special at first, even when his fall took him through the black hole’s event horizon: the invisible boundary beyond which nothing can escape. But eventually — after hours, days or even weeks if the black hole was big enough — he would begin to notice that gravity was tugging at his feet more strongly than at his head. As his plunge carried him inexorably downwards, the difference in forces would quickly increase and rip him apart, before finally crushing his remnants into the black hole’s infinitely dense core.

 

But Polchinski’s calculations, carried out with two of his students — Ahmed Almheiri and James Sully — and fellow string theorist Donald Marolf at the University of California, Santa Barbara (UCSB), were telling a different story. In their account, quantum effects would turn the event horizon into a seething maelstrom of particles. Anyone who fell into it would hit a wall of fire and be burned to a crisp in an instant.

 

The team’s verdict, published in July 2012, shocked the physics community. Such firewalls would violate a foundational tenet of physics that was first articulated almost a century ago by Albert Einstein, who used it as the basis of general relativity, his theory of gravity. Known as the equivalence principle, it states in part that an observer falling in a gravitational field — even the powerful one inside a black hole — will see exactly the same phenomena as an observer floating in empty space. Without this principle, Einstein’s framework crumbles.

 

Well aware of the implications of their claim, Polchinski and his co-authors offered an alternative plot ending in which a firewall does not form. But this solution came with a huge price. Physicists would have to sacrifice the other great pillar of their science: quantum mechanics, the theory governing the interactions between subatomic particles.

 

The result has been a flurry of research papers about firewalls, all struggling to resolve the impasse, none succeeding to everyone’s satisfaction. Steve Giddings, a quantum physicist at UCSB, describes the situation as “a crisis in the foundations of physics that may need a revolution to resolve”.

 

With that thought in mind, black-hole experts came together last month at CERN, Europe’s particle-physics laboratory near Geneva, Switzerland, to grapple with the issue face to face. They hoped to reveal the path towards a unified theory of ‘quantum gravity’ that brings all the fundamental forces of nature under one umbrella — a prize that has eluded physicists for decades.

 

The firewall idea “shakes the foundations of what most of us believed about black holes”, said Raphael Bousso, a string theorist at the University of California, Berkeley, as he opened his talk at the meeting. “It essentially pits quantum mechanics against general relativity, without giving us any clues as to which direction to go next.”

 

The roots of the firewall crisis go back to 1974, when physicist Stephen Hawking at the University of Cambridge, UK, showed that quantum effects cause black holes to run a temperature. Left in isolation, the holes will slowly spew out thermal radiation — photons and other particles — and gradually lose mass until they evaporate away entirely (see Figure).
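Hawking's temperature is a standard textbook result, not spelled out in the article, and it makes the mass dependence explicit: the temperature is inversely proportional to the hole's mass, and the evaporation time grows as the cube of the mass.

```latex
T_{\mathrm{H}} = \frac{\hbar c^{3}}{8\pi G M k_{\mathrm{B}}},
\qquad
t_{\mathrm{evap}} \approx \frac{5120\,\pi\, G^{2} M^{3}}{\hbar c^{4}}
```

So small black holes are hot and die quickly, while a solar-mass hole sits at roughly sixty billionths of a kelvin, far colder than the cosmic microwave background, and today absorbs more energy than it radiates.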

 

These particles aren’t the firewall, however; the subtleties of relativity guarantee that an astronaut falling through the event horizon will not notice this radiation. But Hawking’s result was still startling — not least because the equations of general relativity say that black holes can only swallow mass and grow, not evaporate.

 

Hawking’s argument basically comes down to the observation that in the quantum realm, ‘empty’ space isn’t empty. Down at this sub-sub-microscopic level, it is in constant turmoil, with pairs of particles and their corresponding antiparticles continually popping into existence before rapidly recombining and vanishing. Only in very delicate laboratory experiments does this submicroscopic frenzy have any observable consequences. But when a particle–antiparticle pair appears just outside a black hole’s event horizon, Hawking realized, one member could fall in before the two recombined, leaving the surviving partner to fly outwards as radiation. The doomed particle would balance the positive energy of the outgoing particle by carrying negative energy inwards — something allowed by quantum rules. That negative energy would then get subtracted from the black hole’s mass, causing the hole to shrink.

 

Hawking’s original analysis has since been refined and extended by many researchers, and his conclusion is now accepted almost universally. But with it came the disturbing realization that black-hole radiation leads to a paradox that challenges quantum theory.

 

Quantum mechanics says that information cannot be destroyed. In principle, it should be possible to recover everything there is to know about the objects that fell into a black hole by measuring the quantum state of the radiation coming out. But Hawking showed that it was not that simple: the radiation coming out is random. Toss in a kilogram of rock or a kilogram of computer chips and the result will be the same. Watch the black hole until the moment it dies, and there would still be no way to tell how it formed or what fell into it.

 

In 1997, the deadlock was broken by a discovery made by Juan Maldacena, a physicist then at Harvard University in Cambridge, Massachusetts. Maldacena’s insight built on an earlier proposal that any three-dimensional (3D) region of our Universe can be described by information encoded on its two-dimensional (2D) boundary, in much the same way that laser light can encode a 3D scene on a 2D hologram. “We used the word ‘hologram’ as a metaphor,” says Leonard Susskind, a string theorist at Stanford University in California, and one of those who came up with the proposal. “But after doing more mathematics, it seemed to make literal sense that the Universe is a projection of information on the boundary.”

 

What Maldacena came up with was a concrete mathematical formulation of the hologram idea that made use of ideas from superstring theory, which posits that elementary particles are composed of tiny vibrating loops of energy. His model envisages a 3D universe containing strings and black holes that are governed only by gravity, bounded by a 2D surface on which elementary particles and fields obey ordinary quantum laws without gravity. Hypothetical residents of the 3D space would never see this boundary because it is infinitely far away. But that wouldn’t matter: anything happening in the 3D universe could be described equally well by equations in the 2D universe, and vice versa. “I found that there’s a mathematical dictionary that allows you to go back and forth between the languages of these two worlds,” Maldacena explains.

 

One of the most promising resolutions, according to Susskind, has come from Daniel Harlow, a quantum physicist at Princeton University in New Jersey, and Patrick Hayden, a computer scientist at McGill University in Montreal, Canada. They considered whether an astronaut could ever detect the paradox with a real-world measurement. To do so, he or she would first have to decode a significant portion of the outgoing Hawking radiation, then dive into the black hole to examine the infalling particles. The pair’s calculations show that the radiation is so tough to decode that the black hole would evaporate before the astronaut was ready to jump in. “There’s no fundamental law preventing someone from measuring the paradox,” says Harlow. “But in practice, it’s impossible.”

 

Giddings, however, argues that the firewall paradox requires a radical solution. He has calculated that if the entanglement between the outgoing Hawking radiation and its infalling twin is not broken until the escaping particle has travelled a short distance away from the event horizon, then the energy released would be much less ferocious, and no firewall would be generated. This protects the equivalence principle, but requires some quantum laws to be modified. At the CERN meeting, participants were tantalized by the possibility that Giddings’ model could be tested: it predicts that when two black holes merge, they may produce distinctive ripples in space-time that can be detected by gravitational-wave observatories on Earth.

 

There is another option that would save the equivalence principle, but it is so controversial that few dare to champion it: maybe Hawking was right all those years ago and information is lost in black holes. Ironically, it is John Preskill, the California Institute of Technology physicist who famously bet against Hawking’s claim that information is lost, who raised this alternative, at a workshop on firewalls at Stanford at the end of last year. “It’s surprising that people are not seriously thinking about this possibility because it doesn’t seem any crazier than firewalls,” he says — although he adds that his instinct is still that information survives.

 

The reluctance to revisit Hawking’s old argument is a sign of the immense respect that physicists have for Maldacena’s dictionary relating gravity to quantum theory, which seemingly proved that information cannot be lost. “This is the deepest ever insight into gravity because it links it to quantum fields,” says Polchinski, who compares Maldacena’s result — which has now accumulated close to 9,000 citations — to the nineteenth-century discovery that a single theory connects light, electricity and magnetism. “If the firewall argument had been made in the early 1990s, I think it would have been a powerful argument for information loss,” says Bousso. “But now nobody wants to entertain the possibility that Maldacena is wrong.”

 

Maldacena is flattered that most physicists would back him in a straight-out fight against Einstein, although he believes it won’t come to that. “To completely understand the firewall paradox, we may need to flesh out that dictionary,” he says, “but we won’t need to throw it out.”

 

The only consensus so far is that this problem will not go away any time soon. During his talk, Polchinski fielded all proposed strategies for mitigating the firewall, carefully highlighting what he sees as their weaknesses. “I’m sorry that no one has gotten rid of the firewall,” he concludes. “But please keep trying.”

 


Scott Reef: Coral Reef Back From the Dead After Coral Bleaching Nearly Destroyed It


Back in 1998, Scott Reef was a ghost town. Rising ocean temperatures caused by El Niño had triggered a catastrophic bleaching event that decimated the enormous reef system off the coast of Western Australia. The prognosis was grim—more than 249 kilometers away from its nearest neighbors, the Scott system had no hope of being reseeded by their coral larvae, a process scientists believed was vital to reef recovery. But just 15 years later, Scott Reef has regrown into the vibrant ecosystem pictured above, and its isolation may have been the key to its survival. Although the Scott system did not benefit from the arrival of larvae from other reefs, an abundance of plant-eating fish in the area kept dangerous algae in check and allowed the few remaining local larvae to hang on long enough to begin the slow but steady process of repopulating the reef, researchers report online today in Science. The reason those hungry fish were there to save the day? There were no humans around to hunt them. So while climate change may be wreaking havoc on coral reefs around the world, these ecosystems might stand a chance of bouncing back once humans are no longer around to bother them.


Ants follow Fermat's principle of least time


Ants have long been known to choose the shortest of several routes to a food source, but what happens when the shortest route is not the fastest? This situation can occur, for example, when ants are forced to travel on two different surfaces, where they can walk faster on one surface than on the other. In a new study, scientists have found that ants behave the same way as light does when traveling through different media: both paths obey Fermat's principle of least time, taking the fastest route rather than the most direct one. Besides revealing insight into ant communities, the findings could offer inspiration to researchers working on solving complex problems in robotics, logistics, and information technology.
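The least-time problem the ants are effectively solving can be worked through numerically. This is a sketch under assumed geometry (all names and numbers are illustrative): the nest sits at perpendicular distance a from a straight border on the fast surface, the food at distance b beyond it on the slow surface, offset d along the border, with walking speeds v1 and v2. Minimising total travel time over the crossing point recovers Snell's law, sin(theta1)/v1 = sin(theta2)/v2.

```python
import math

def travel_time(x, a, b, d, v1, v2):
    # time on fast surface to crossing point x, plus time on slow surface
    return math.hypot(a, x) / v1 + math.hypot(b, d - x) / v2

def fastest_crossing(a, b, d, v1, v2, steps=100_000):
    # brute-force 1-D minimisation; ample accuracy for an illustration
    return min((i * d / steps for i in range(steps + 1)),
               key=lambda x: travel_time(x, a, b, d, v1, v2))

a, b, d = 1.0, 1.0, 2.0
v1, v2 = 2.0, 1.0                  # twice as fast on the smooth surface
x = fastest_crossing(a, b, d, v1, v2)
sin1 = x / math.hypot(a, x)        # sine of the incidence angle
sin2 = (d - x) / math.hypot(b, d - x)
print(abs(sin1 / v1 - sin2 / v2) < 1e-3)
```

Setting the derivative of the travel time to zero gives exactly the Snell condition, which is why the numerical minimum lands on it.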

The figure shows a ‘refracted’ trail of Wasmannia auropunctata workers at the border between two media: smooth (white) and rough (green) felt, with the food placed on the rough felt. The density of workers on the rough felt is higher than on the smooth felt because travel speed there is lower. In addition, it appears, though this is not obvious, that the ants on the rough felt ‘float’ on top of the felt hairs, indicating the difficulty of walking on this substrate.
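Fermat's principle makes this quantitative: the fastest trail crosses the border at the point where sin(θ₁)/sin(θ₂) = v₁/v₂, exactly like light refracting between media. A minimal sketch (the speeds and geometry below are illustrative, not measured values from the study) finds the least-time crossing point numerically and checks that it obeys this Snell's-law analog:

```python
import math

def travel_time(x, start, goal, v_smooth, v_rough):
    """Time for a two-leg path crossing the felt border (y = 0) at (x, 0):
    first leg on smooth felt at v_smooth, second leg on rough felt at v_rough."""
    leg1 = math.hypot(x - start[0], start[1])   # start lies above the border
    leg2 = math.hypot(goal[0] - x, goal[1])     # goal lies below the border
    return leg1 / v_smooth + leg2 / v_rough

def fastest_crossing(start, goal, v_smooth, v_rough, iters=200):
    """Golden-section search for the border crossing that minimizes travel time."""
    lo, hi = min(start[0], goal[0]) - 1.0, max(start[0], goal[0]) + 1.0
    inv_phi = (math.sqrt(5) - 1) / 2
    for _ in range(iters):
        a = hi - inv_phi * (hi - lo)
        b = lo + inv_phi * (hi - lo)
        if travel_time(a, start, goal, v_smooth, v_rough) < \
           travel_time(b, start, goal, v_smooth, v_rough):
            hi = b
        else:
            lo = a
    return (lo + hi) / 2

# Illustrative geometry: nest at (0, 2) on smooth felt, food at (4, -2) on rough
# felt, with ants walking twice as fast on the smooth surface.
start, goal = (0.0, 2.0), (4.0, -2.0)
x = fastest_crossing(start, goal, v_smooth=2.0, v_rough=1.0)

# Snell's-law check: sin(incidence angle) / sin(refraction angle) = v1 / v2
sin1 = (x - start[0]) / math.hypot(x - start[0], start[1])
sin2 = (goal[0] - x) / math.hypot(goal[0] - x, goal[1])
print(f"crossing at x = {x:.3f}, sin ratio = {sin1 / sin2:.3f}")  # ratio ≈ 2.0
```

The least-time path bends toward the border normal on the slow surface, just as the ant trails in the experiment do.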


For Early Primates, a Night Was Filled With Color

A genetic examination of tarsiers indicates that the saucer-eyed primates developed three-color vision when they were still nocturnal.

 

A new study suggests that primates’ ability to see in three colors may not have evolved as a result of daytime living, as has long been thought. The findings, published in the journal Proceedings of the Royal Society B, are based on a genetic examination of tarsiers, the nocturnal, saucer-eyed primates that long ago branched off from monkeys, apes and humans.

 

By analyzing the genes that encode photopigments in the eyes of modern tarsiers, the researchers concluded that the last ancestor that all tarsiers had in common had highly acute three-color vision, much like that of modern-day primates.

 

Such vision would normally indicate a daytime lifestyle. But fossils show that the tarsier ancestor was also nocturnal, strongly suggesting that the ability to see in three colors somehow predated the shift to daytime living.

The coexistence of the two normally incompatible traits suggests that primates were able to function during twilight or bright moonlight for a time before making the transition to a fully diurnal existence.

 

“Today there is no mammal we know of that has trichromatic vision that lives during night,” said an author of the study, Nathaniel J. Dominy, associate professor of anthropology at Dartmouth. “And if there’s a pattern that exists today, the safest thing to do is assume the same pattern existed in the past.

 

“We think that tarsiers may have been active under relatively bright light conditions at dark times of the day,” he added. “Very bright moonlight is bright enough for your cones to operate.”

QMP's curator insight, April 19, 2013 5:50 PM

This is a great article about how the night time helped early primates to evolve the colored vision that we experience today.


World’s first successful uterus transplant recipient is pregnant via in vitro fertilization


The first woman ever to receive a uterus from a deceased donor is two weeks pregnant following a successful embryo transfer, her doctors said on Friday.

 

The 22-year-old Derya Sert was revealed to be almost two weeks pregnant in preliminary results after in vitro fertilization at Akdeniz University Hospital in Turkey’s southern province of Antalya, her doctor Mustafa Unal said in a written statement. “She is doing just fine at the moment,” Unal said.

 

Sert was described as a “medical miracle” when she became the first woman in the world to have a successful womb transplant from a dead donor in August 2011 at the same Antalya hospital.

 

The groundbreaking news of her pregnancy will rekindle hopes for thousands of childless women across the world who are unable to bear their own babies. Sert was born without a uterus, like one in every 5,000 women around the world, and her doctors waited 18 months before implanting the embryo to make sure the foreign organ was still functioning.

 

Hers was the second womb transplant to be performed in the world, the first being in Saudi Arabia in 2000 from a living donor, which failed after 99 days due to heavy clotting. Doctors had to remove the organ. The baby is expected to be delivered via C-section and the uterus to be removed from Sert in the months following the birth to avoid further complications and the risk of rejection. The young woman had started to menstruate after the transplant, which her doctors had said was an important signal that the womb was functional.

 

Experts, however, warn that the pregnancy carries several health risks to the patient as well as to the baby, including birth defects due to the use of immunosuppressive drugs, as well as preterm delivery.


Superheated Bose-Einstein condensate exists above critical temperature


At very low temperatures, near absolute zero, multiple particles called bosons can form an unusual state of matter in which a large fraction of the bosons in a gas occupy the same quantum state—the lowest one—to form a Bose-Einstein condensate (BEC). In a sense, the bosons lose their individual identities and behave like a single, very large atom. But while previously BECs have only existed below a critical temperature, scientists in a new study have shown that BECs can exist above this critical temperature for more than a minute when different components of the gas evolve at different rates.

A superheated BEC is reminiscent of superheated distilled water (water that has had many of its impurities removed), which remains liquid above 100 °C, the temperature at which it would normally boil into a gas. In both cases, the temperature—as defined by the average energy per particle (boson or water molecule)—rises above a critical temperature at which the phase transition should occur, and yet it doesn't.


In BECs and distilled water, the inhibition of a phase transition at the critical temperature occurs for different reasons. In general, there are two types of phase transitions. The boiling of water is a first-order phase transition, and it can be inhibited in clean water because, in the absence of impurities, there is an energy barrier that "protects" the liquid from boiling away. On the other hand, boiling a BEC is a second-order phase transition. In this case, superheating occurs because the BEC component and the remaining thermal (non-condensed) component decouple and evolve as two separate equilibrium systems.

In equilibrium, a BEC can only exist below a critical transition temperature. If the temperature is increased towards the critical value, the BEC should gradually decay into the thermal component. The particles flow between the two components until they have the same chemical potential (a measure of how much energy it takes to add a particle to either component), or in other words, until they are in equilibrium with each other. However, maintaining this equilibrium relies on the interactions between the particles.
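For reference, the equilibrium critical temperature of a uniform ideal Bose gas is T_c = (2πħ²/(m·k_B))·(n/ζ(3/2))^(2/3). A quick sketch shows why these transitions happen at nanokelvin scales; the atom density below is a typical ultracold-gas value, not one taken from this experiment:

```python
import math

HBAR = 1.054_571_817e-34   # reduced Planck constant, J*s
K_B = 1.380_649e-23        # Boltzmann constant, J/K
ZETA_3_2 = 2.612_375       # Riemann zeta(3/2)

def bec_critical_temperature(n, m):
    """Ideal-gas BEC transition temperature:
    Tc = (2*pi*hbar^2 / (m*kB)) * (n / zeta(3/2))^(2/3)."""
    return (2 * math.pi * HBAR**2 / (m * K_B)) * (n / ZETA_3_2) ** (2 / 3)

# Illustrative Rb-87 cloud; the density is an assumed typical value.
m_rb87 = 86.909 * 1.660_539e-27   # atomic mass in kg
n = 1e20                          # atoms per cubic metre
tc = bec_critical_temperature(n, m_rb87)
print(f"Tc ≈ {tc * 1e9:.0f} nK")  # a few hundred nanokelvin
```

Superheating means the condensate persists even when the average energy per particle corresponds to a temperature above this value.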

In the future, the researchers plan to further investigate the physical mechanism behind superheating. "We are primarily interested in further fundamental understanding of the superheating phenomenon," Smith said. "The funny thing is that the system is simultaneously in equilibrium in some respects (e.g., the BEC and the thermal component have the same temperature, the BEC has an equilibrium shape for the given number of condensed atoms, etc.) and out of equilibrium in other ways (primarily the fact that the number of condensed atoms is much higher than expected in equilibrium). This poses new questions about how we define equilibrium in a quantum system, which we would like to understand better." Practical applications might come later, since fully exploiting the potential of such systems relies on a more complete fundamental understanding. "Also, it turns out that condensation in 2D systems is even more interesting than in 3D, and we plan to study superheating and other non-equilibrium phenomena for an ultracold 2D Bose gas."


Beautiful New Bat Species Discovered in South Sudan


This is a special bat, and not just because of its strikingly beautiful spots and stripes. This is a rare specimen, whose discovery in South Sudan led researchers to identify a new genus of bat. The bat is just the fifth specimen of its kind ever collected.


The distinctly patterned bat was discovered by researchers from Bucknell University and Fauna & Flora International during a field research expedition with wildlife authorities in South Sudan.

 

DeeAnn Reeder, an Associate Professor of Biology at Bucknell and first author of the paper announcing the new bat genus, recognized the bat as the same species as a specimen captured in the Democratic Republic of the Congo in 1939. That specimen was classified as Glauconycteris superba, but after detailed analyses she and her colleagues determined it did not belong in the genus Glauconycteris. It was so unique that they needed to create a new genus for it.

 

Reeder and her colleagues named the new genus Niumbaha, which means “rare” or “unusual” in Zande, the language spoken in Western Equatoria State, where the bat was captured. The bat’s full scientific name is Niumbaha superba, reflecting both the rarity and the magnificence of this creature.


“Our discovery of this new genus of bat is an indicator of how diverse the area is and how much work remains,” Reeder said in a press release.

 




Chemical treatment that turns whole organs transparent offers a big boost to the field of ‘connectomics’

Technique to make tissue transparent offers three-dimensional view of neural networks.

 

A chemical treatment that turns whole organs transparent offers a big boost to the field of ‘connectomics’ — the push to map the brain’s fiendishly complicated wiring. Scientists could use the technique to view large networks of neurons with unprecedented ease and accuracy. The technology also opens up new research avenues for old brains that were saved from patients and healthy donors.

 

“This is probably one of the most important advances for doing neuroanatomy in decades,” says Thomas Insel, director of the US National Institute of Mental Health in Bethesda, Maryland, which funded part of the work. Existing technology allows scientists to see neurons and their connections in microscopic detail — but only across tiny slivers of tissue. Researchers must reconstruct three-dimensional data from images of these thin slices. Aligning hundreds or even thousands of these snapshots to map long-range projections of nerve cells is laborious and error-prone, rendering fine-grain analysis of whole brains practically impossible.

 

The new method instead allows researchers to see directly into optically transparent whole brains or thick blocks of brain tissue. Called CLARITY, it was devised by Karl Deisseroth and his team at Stanford University in California. “You can get right down to the fine structure of the system while not losing the big picture,” says Deisseroth, who adds that his group is in the process of rendering an entire human brain transparent.

 

The technique, published online in Nature on 10 April, turns the brain transparent using the detergent SDS, which strips away lipids that normally block the passage of light (K. Chung et al., Nature http://dx.doi.org/10.1038/nature12107; 2013). Other groups have tried to clarify brains in the past, but many lipid-extraction techniques dissolve proteins and thus make it harder to identify different types of neurons. Deisseroth’s group solved this problem by first infusing the brain with acrylamide, which binds proteins, nucleic acids and other biomolecules. When the acrylamide is heated, it polymerizes and forms a tissue-wide mesh that secures the molecules. The resulting brain–hydrogel hybrid showed only 8% protein loss after lipid extraction, compared with 41% for existing methods.

 

Applying CLARITY to whole mouse brains, the researchers viewed fluorescently labelled neurons in areas ranging from outer layers of the cortex to deep structures such as the thalamus. They also traced individual nerve fibres through 0.5-millimeter-thick slabs of formalin-preserved autopsied human brain — orders of magnitude thicker than slices currently imaged.


MIT and Harvard engineers create graphene electronics with DNA-based lithography


Chemical and molecular engineers at MIT and Harvard have successfully used templates made of DNA to cheaply and easily pattern graphene into nanoscale structures that could eventually be fashioned into electronic circuits.

 

Graphene, as you are surely aware by now, is a material with almost magical properties. It is the strongest and most electrically conductive material known to humankind. Semiconductor masters, such as Intel and TSMC, would absolutely love to use graphene to fashion computer chips capable of operating at hundreds of gigahertz while consuming tiny amounts of power. Unfortunately, though, graphene is much more difficult and expensive to work with than silicon — and, in its base state, it isn’t a semiconductor. The DNA patterning performed by MIT and Harvard seeks to rectify both of these issues by making graphene easy to work with, and thus easy to turn into a semiconductor for use in computer chips.

 

Late last year, Harvard’s Wyss Institute announced that it had discovered a technique for building intricately detailed DNA nanostructures out of DNA “Lego bricks.” These bricks are specially crafted strands of DNA that join together with other DNA bricks at a 90-degree angle. By joining enough of these bricks together, a three-dimensional 25-nanometer cube emerges. By altering which DNA bricks are available during this process, the Wyss Institute was capable of forming 102 distinct 3D shapes.

 

The MIT and Harvard researchers are essentially taking these shapes and binding them to a graphene surface with a molecule called aminopyrine. Once bound, the DNA is coated with a layer of silver, and then a layer of gold to stabilize it. The gold-covered DNA is then used as a mask for plasma lithography, where oxygen plasma burns away the graphene that isn’t covered. Finally, the DNA mask is washed away with sodium cyanide, leaving a piece of graphene that is an almost-perfect copy of the DNA template.

 

So far, the researchers have used this process — dubbed metallized DNA nanolithography— to create X and Y junctions, rings, and ribbons out of graphene. Nanoribbons, which are simply very narrow strips of graphene, are of particular interest because they have a bandgap — a feature that graphene doesn’t normally possess. A bandgap means that these nanoribbons have semiconductive properties, which means they might one day be used in computer chips. Graphene rings are also of interest, because they can be fashioned into quantum interference transistors — a new and not-well-understood transistor that connects three terminals to a ring, with the transistor’s gate being controlled by the flow of electrons around the ring.


Solar Power Achieves Grid Parity


Deutsche Bank has released a report concluding that the cost of unsubsidized solar power is about the same as the cost of electricity from the grid in India and Italy, and that by 2014 even more countries will achieve solar “grid parity.”

 

During 2013, China is expected to supplant Germany as the world’s biggest solar market. China expects to add 10 gigawatts of new solar projects this year, “more than double its previous target and three times last year’s expansion.”

 

In 2012, U.S. solar installations grew 73% over 2011 levels, driven by third party leasing agreements that eliminate up front costs in rooftop installation. The price of installed PV systems fell 27%.
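“Grid parity” is usually assessed by comparing local retail electricity prices with solar's levelized cost of electricity (LCOE): total discounted lifetime cost divided by total discounted lifetime energy output. A minimal sketch with entirely illustrative numbers for a small rooftop system (none of these figures come from the Deutsche Bank report):

```python
def lcoe(capex, annual_opex, first_year_kwh, degradation, discount_rate, years):
    """Levelized cost of electricity in $/kWh:
    discounted lifetime cost divided by discounted lifetime energy output."""
    cost, energy = float(capex), 0.0
    for t in range(1, years + 1):
        disc = (1.0 + discount_rate) ** t
        cost += annual_opex / disc
        # panel output declines by `degradation` per year
        energy += first_year_kwh * (1.0 - degradation) ** (t - 1) / disc
    return cost / energy

# Illustrative 5 kW rooftop system: $2/W installed, 1500 kWh per kW per year,
# 0.5%/yr degradation, $100/yr maintenance, 5% discount rate, 25-year lifetime.
cost_per_kwh = lcoe(capex=10_000, annual_opex=100, first_year_kwh=7_500,
                    degradation=0.005, discount_rate=0.05, years=25)
print(f"LCOE ≈ ${cost_per_kwh:.3f}/kWh")
```

Parity holds wherever this figure drops below the local retail tariff; lower installed cost, better insolation, or cheaper financing all push the LCOE down, which is why sunny, high-tariff markets like Italy and India reach parity first.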

Danielle Schaeffer's curator insight, April 9, 2013 8:45 AM

Even if solar does bear the same price as power from the grid, the homeowner is better off having it, and having control of it, instead of a utility company controlling it.

Zertrin's curator insight, April 12, 2013 4:21 AM

It is already more attractive to use the electricity you produce from photovoltaic panels directly than to sell it, and this will only become more so in the future.


Memory that never forgets: non-volatile DIMMs hit the market


The server world still waits for DDR4, the next generation of dynamic memory, to be ready for prime time. In the meantime, a new set of memory boards from Viking is looking to squeeze more performance out of servers not by providing faster memory, but by making it safer to keep more in memory and less on disk or SSD. Viking Technology has begun supplying dual in-line memory modules that combine DDR3 dynamic memory with NAND flash memory to create non-volatile RAM for servers and storage arrays—modules that don't lose their memory when the systems they're in lose power or shut down.

 

The ArxCis-NV DIMM, which Viking demonstrated at the Storage Networking Industry Association's SNW Spring conference in Orlando this week, plugs into standard DIMM memory slots in servers and RAID controller cards. Viking isn't the only player in the non-volatile DIMM game—Micron Technology and AgigA Tech announced their own NVDIMM effort in November—but they're first to market. The modules shipping now to a select group of server manufacturers have 4GB of dynamic RAM and 8GB of NAND memory. Modules with double those figures are planned for later in the year, and modules with 16GB of DRAM and 32GB of NAND are in the works for next year.

 

The ArxCis can be plugged into existing servers and RAID controllers today as a substitute for battery backed-up (BBU) memory modules. They are even equipped with batteries to power a last-gasp write to NAND memory in the event of a power outage. But the ArxCis is more than a better backup in the event of system failure. Viking's non-volatile DIMMs are primarily aimed at big in-memory computing tasks, such as high-speed in-memory transactional database systems and indices such as those used in search engines and other "hyper-scale" computing applications.  Facebook's "Unicorn" search engine system, for example, keeps massive indices in memory to allow for real-time response to user queries, as does the "type-ahead" feature in Google's search.

 

Viking's executives also claim that non-volatile DIMM cards can be paired with solid-state disks to extend the life and performance of the disks. DDR memory is much faster than the NAND memory used by SSDs, and it doesn't have flash memory's limited number of "writes" (see Lee Hutchinson's look at SSDs for an explanation of how SSDs "wear out"). Keeping more data in RAM for constant writing reins in the write-amplification effect of SSD storage, which would otherwise drive the drives toward end-of-life that much faster. Since data gets written to the NAND memory on the DIMM only when the module detects a drop in voltage, the modules can last up to 10 years before the NAND memory "rots" and becomes unwritable, according to Viking's estimates.

 

For now, the cost of NVDIMM memory puts it out of reach of everyday applications: the DIMMs will cost "a few hundred dollars each," Viking Vice President of Marketing Adrian Proctor told ComputerWorld. The entry of Micron and others into the NVDIMM market could eventually drive costs down and make the modules practical in consumer devices, making "instant on" computing that much more instant.


DIY Brainless Robots Exhibit Collective Behavior


We’ve all seen the amazing capabilities of flocks of birds and schools of fish to move seemingly as one. Such collective behavior can be witnessed in almost all living systems. A lot of research is going into figuring out how swarm behavior works in order to mimic nature’s capabilities in swarm robotics. In these multirobot systems a large number of relatively simple robots can accomplish complex tasks through interdependent cooperation.

The ability of an individual agent to participate in collective behavior is often linked to cognition and social interaction, implying that swarm robots require computational power and sensors to function. But now scientists at Harvard University have demonstrated that brainless robots can self-organize into coherent collective motion.


The robots used in the experiment are BristleBots (Bbots). These are very simple robots anyone can build for $5 from a toothbrush, a pager motor and a battery. When the brush head is pressed to the ground, the angled bristles give it forward movement. Spinning at 150 revolutions per second, the motor turns the brush into a self-propelled robot. The Bbots have no computing power or sensors.


The scientists Luca Giomi, Nico Hawley-Weld and L. Mahadevan of the School of Engineering and Applied Sciences custom-built two different kinds of Bbots: Walkers and Spinners. Both have an elliptical chassis, but Walkers have long bristles that make them move in a straight line, while the Spinners, with their short bristles, move in a circle.


The researchers placed the Bbots in a circular arena with upward-sloping boundaries. When the Bbots ran up against the edge, they were forced back. With fewer than ten Bbots in the arena, they moved around randomly, but once they exceeded that number they self-organized and showed collective behavior: the Spinners grouped up and moved along the boundary together, and the Walkers eventually ended up standing still side by side.


In their paper Giomi and his colleagues point out that although the Bbots don’t have sensors they do sense each other and their environment through contact interaction. The elliptical shape of the Bbots, their movement and spatial interaction are sufficient to produce collective motion. This suggests swarm behavior need not be dependent on cognition and social skills but can also be achieved by mechanical intelligence.
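The mechanism can be caricatured in a few lines of simulation: self-propelled agents whose headings relax toward those of the neighbors they touch, with no sensing or computation beyond that contact. This is a Vicsek-style stand-in for the Bbots' mechanical interactions, not the model from the paper, and every parameter below is made up:

```python
import math
import random

def simulate(n_bots, steps=1500, arena_r=10.0, speed=0.05,
             contact_r=0.8, noise=0.1, seed=1):
    """Point agents in a circular arena: constant forward speed, heading
    alignment with contacting neighbours, reflection at the wall.
    Returns the polar order parameter (0 = disordered, 1 = all aligned)."""
    rng = random.Random(seed)
    pos = [(rng.uniform(-0.5, 0.5) * arena_r, rng.uniform(-0.5, 0.5) * arena_r)
           for _ in range(n_bots)]
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n_bots)]
    for _ in range(steps):
        new_theta = []
        for i in range(n_bots):
            # average own heading with headings of agents within contact range
            sx, sy = math.cos(theta[i]), math.sin(theta[i])
            for j in range(n_bots):
                if i != j and math.dist(pos[i], pos[j]) < contact_r:
                    sx += math.cos(theta[j])
                    sy += math.sin(theta[j])
            new_theta.append(math.atan2(sy, sx) + rng.uniform(-noise, noise))
        theta = new_theta
        for i in range(n_bots):
            x = pos[i][0] + speed * math.cos(theta[i])
            y = pos[i][1] + speed * math.sin(theta[i])
            if math.hypot(x, y) > arena_r:   # sloped wall forces the bot back
                theta[i] = theta[i] + math.pi
            else:
                pos[i] = (x, y)
    sx = sum(math.cos(t) for t in theta)
    sy = sum(math.sin(t) for t in theta)
    return math.hypot(sx, sy) / n_bots

print(f"order parameter with 20 bots: {simulate(20):.2f}")
```

Even this toy version captures the key point: alignment emerges from geometry and contact alone, with nothing resembling a brain in any agent.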


The outcome of the study is significant for the development of swarm robotics. The Walkers and Spinners could serve as terrain explorers because they translate their interactions with the environment into dynamic behavior. Their lack of artificial intelligence and sensors makes them very cheap and robust allowing for the deployment of vast numbers of Bbots.

And, as mentioned in the video, it also raises the more philosophical question of how unintelligent ‘non-intelligent’ creatures actually are.


An abundance of medium-sized exoplanet worlds is challenging current planet-forming models


Guided by the example of our own Solar System, with its distinct sets of large and small worlds, early planet-formation models were based on the notion of ‘core accretion’. Dust swirling around a star in a protoplanetary disk can aggregate into small planetesimals of rock and ice, which collide and stick together. The inner part of the disk contains too little material for these cores to grow much bigger than Earth. But farther out, they can attain ten Earth masses or more, enough to attract a vast volume of gas and become Jupiter-like.

 

The detection, starting in 1995, of Jupiter-sized planets with orbits as short as a few Earth days contradicted these models. The theorists revised their models to allow these ‘hot Jupiters’ to form far from their star and then migrate in. Yet these models predicted that anything reaching super-Earth size should either become a gas giant or be swallowed by its star, creating a ‘planetary desert’ in this size range. Kepler’s discoveries wreck those predictions. “It’s a tropical rainforest, not a desert,” says Andrew Howard, an astronomer at the University of California, Berkeley. “We hope the theory is going to catch up.”

 

Kepler measures a planet’s size by detecting how much light it blocks as it passes in front of its star. For a handful of the super-Earths detected by Kepler, ground-based observations have also determined mass, by tracking the wobble of the host star induced by the planet’s gravity. And some of these super-Earths seem to have very low densities — indicating that they may have small rocky cores surrounded by large gas envelopes.
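The geometry behind that measurement is simple: the fractional dip in starlight equals the ratio of the projected disk areas of planet and star, so depth ≈ (Rp/Rs)². A quick illustration using standard Solar System radii (these constants are textbook values, not Kepler data):

```python
import math

R_SUN = 6.957e8     # solar radius, m
R_EARTH = 6.371e6   # Earth radius, m
R_JUP = 6.9911e7    # Jupiter radius, m

def planet_radius_from_depth(depth, r_star):
    """Transit depth ≈ (Rp/Rs)^2, so the planet radius is Rs * sqrt(depth)."""
    return r_star * math.sqrt(depth)

# A Jupiter-size planet blocks about 1% of a Sun-like star's light;
# an Earth-size planet blocks only about 0.008%.
for name, rp in [("Jupiter", R_JUP), ("Earth", R_EARTH)]:
    depth = (rp / R_SUN) ** 2
    print(f"{name}-size transit depth: {depth:.2e}")
```

The four-orders-of-magnitude gap between those two depths is why detecting super-Earths demands Kepler's photometric precision, while hot Jupiters were found from the ground.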

Kepler astronomer Jack Lissauer, of Ames, thinks that they may have begun as small cores in the outer parts of their solar system, accreting a large amount of gas without reaching the point of runaway growth that leads to a true gas giant. Without the gravitational heft of a giant to hold in gas, such a planet would have a large, low-density atmosphere, but it could still grow to super-Earth size by a cooling process that shrinks the atmosphere and allows more gas to be drawn in, he says.

 

But that scenario may not explain smaller and denser super-Earths. Several such planets have already been detected, and Kepler is starting to reach the sensitivity required to spot them, says Greg Laughlin, an astronomer at the University of California, Santa Cruz. “Kepler is just seeing the tip of the iceberg.”

 

Nor can any current theory explain how super-Earths can sit so close to their stars. Lissauer says the problem lies in the migration portion of the models. But Norm Murray, an astrophysicist at the University of Toronto, is exploring other ways of forming super-Earths. Instead of assembling them and migrating them towards the star, Murray’s model first migrates rocky planetesimals and then allows them to accrete. “‘Migration then assembly’ is the catchphrase,” he says.

 

In any event, Laughlin says that modellers will probably find a way to explain the current observations. “They’ll scramble to fix the models,” he says. But it’s probably not the last time they’ll have to revisit their codes, he adds. “My prediction is that they’ll completely miss the next big thing, whatever that will be.”


Scientists 3D-print self-assembling 'living tissue' using just water and oil


Researchers have created networks of water droplets that mimic some properties of cells in biological tissues. Using a three-dimensional printer, a team at the University of Oxford, UK, assembled tiny water droplets into a jelly-like material that can flex like a muscle and transmit electric signals like chains of neurons. The work is published today in Science.

 

These networks, which can contain up to 35,000 droplets, could one day become a scaffold for making synthetic tissues or provide a model for organ functions, says co-author Gabriel Villar of Cambridge Consultants, a technology-transfer company in Cambridge, UK. “We want to see just how far we can push the mimicry of living tissue,” he says.

 

The network relies on each water droplet having a lipid coating, which forms when the droplets are in a finely-tuned mix of oil and a pure lipid.

The lipid molecules have a water-loving head, which sticks to the droplet's surface, and a water-fearing tail, which pokes out into the oily solution. When two lipid-coated droplets come together, each with its carpet of water-fearing tails, they stick to each other like Velcro, forming a lipid bilayer, similar to those in cell membranes. The bilayer creates a structural and functional connection between droplets.

 

Although previous studies have shown that lipid-coated droplets can form such connections, their watery composition and spherical shape made them tricky to assemble. “I already made a raft of droplets that stuck together,” says biomedical engineer David Needham of the University of Southern Denmark in Odense, who was not involved in the study. “But to print them is really an achievement.”

 


Hubble telescope spots death of a white dwarf 10 billion years ago


When a white dwarf explodes as a type Ia supernova, its death is so bright that its light can be detected across the Universe. A new observation using the Hubble Space Telescope identified the farthest type Ia supernova yet seen, at a distance of greater than 10 billion light-years. In the tradition of supernova surveys, this event was nicknamed for Woodrow Wilson, 28th President of the United States. The previous record-holder, Supernova Mingus, was about 350 million light-years closer to Earth.

 

White dwarfs are the remains of stars similar in mass to the Sun. Since such a star would have to live out its entire life to form a white dwarf, there are limits to how early in the Universe's history a type Ia supernova can explode. Only 8 white dwarf supernovas have been identified farther than 9 billion light-years away. (Some core-collapse supernovas, which are the explosions of very massive stars, have been seen farther than Supernova Wilson.) Since all such explosions happen in a similar way, cosmologists use them to measure the expansion rate of the Universe.

 

Astronomers found this violent event by comparing the light from several separate long exposures of the same patch of the sky, known as CANDELS (the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey). Bright as it was, the distance was so great that Supernova Wilson appeared only as an enhancement of the luminosity of its host galaxy. The researchers subtracted the light of the galaxy alone from the combined supernova-galaxy image, then analyzed the residual light to identify the event as a type Ia.
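That subtraction step is the heart of difference imaging: subtract a template image of the galaxy from the new exposure and search the residual for a point source. A toy sketch with made-up pixel values (real pipelines also register the images and match their point-spread functions before subtracting):

```python
def difference_image(new, template):
    """Pixel-by-pixel subtraction of a template image from a new exposure."""
    return [[n - t for n, t in zip(row_n, row_t)]
            for row_n, row_t in zip(new, template)]

def brightest_residual(diff):
    """Locate the brightest pixel left over after subtraction."""
    best = (0.0, None)
    for y, row in enumerate(diff):
        for x, v in enumerate(row):
            if v > best[0]:
                best = (v, (x, y))
    return best

# Hypothetical 5x5 "image" of a galaxy, brightest at the nucleus
galaxy = [[0, 1, 2, 1, 0],
          [1, 3, 5, 3, 1],
          [2, 5, 9, 5, 2],
          [1, 3, 5, 3, 1],
          [0, 1, 2, 1, 0]]

# Same galaxy plus a point source (the supernova) offset from the nucleus
with_sn = [row[:] for row in galaxy]
with_sn[1][3] += 6

flux, (x, y) = brightest_residual(difference_image(with_sn, galaxy))
print(flux, (x, y))  # -> 6 at (x, y) = (3, 1)
```

The galaxy cancels out entirely, leaving only the transient: exactly how a supernova that is fainter than its host can still be extracted from the data.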

 

The Universe was only a few billion years old when Supernova Wilson exploded, nearly as early as such an event could possibly occur. The early era of Supernova Wilson's explosion means it was likely the result of two white dwarfs merging rather than a single white dwarf exceeding its maximum mass. This is because the most massive white dwarfs require more time to form than the Universe's existence had provided.


Research points to abrupt and widespread climate shift in the Sahara 5,000 years ago


As recently as 5,000 years ago, the Sahara—today a vast desert in northern Africa, spanning more than 3.5 million square miles—was a verdant landscape, with sprawling vegetation and numerous lakes. Ancient cave paintings in the region depict hippos in watering holes, and roving herds of elephants and giraffes—a vibrant contrast with today's barren, inhospitable terrain.

The Sahara's "green" era, known as the African Humid Period, likely lasted from 11,000 to 5,000 years ago, and is thought to have ended abruptly, with the region drying back into desert within a span of one to two centuries. Now researchers at MIT, Columbia University and elsewhere have found that this abrupt climate change occurred nearly simultaneously across North Africa. The team traced the region's wet and dry periods over the past 30,000 years by analyzing sediment samples off the coast of Africa. Such sediments are composed, in part, of dust blown from the continent over thousands of years: The more dust that accumulated in a given period, the drier the continent may have been.


From their measurements, the researchers found that the Sahara emitted five times less dust during the African Humid Period than the region does today. Their results, which suggest a far greater change in Africa's climate than previously estimated, will be published in Earth and Planetary Science Letters. David McGee, an assistant professor in MIT's Department of Earth, Atmospheric and Planetary Sciences, says the quantitative results of the study will help scientists determine the influence of dust emissions on both past and present climate change.


This study, McGee says, is the first in which researchers have combined the two techniques—endmember modeling and thorium-230 normalization—a pairing that produced very precise measurements of dust emissions through tens of thousands of years. In the end, the team found that during some dry periods North Africa emitted more than twice the dust generated today. Through their samples, the researchers found the African Humid Period began and ended very abruptly, consistent with previous findings. However, they found that 6,000 years ago, toward the end of this period, dust emissions were one-fifth of today's levels, far lower than previous estimates.

McGee says these new measurements may give scientists a better understanding of how dust fluxes relate to climate by providing inputs for climate models. Natalie Mahowald, a professor of earth and atmospheric sciences at Cornell University, says the group's combination of techniques yielded more robust estimates of dust than previous studies. "Dust is one of the most important aerosols for climate and biogeochemistry," Mahowald says. "This study suggests very large fluctuations due to climate over the last 10,000 years, which has enormous implications for human-derived climate change."
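The thorium-230 normalization can be sketched simply: ²³⁰Th is produced at a known, nearly constant rate in the overlying water column, so the preserved vertical sediment flux is that production (which scales with water depth) divided by the excess ²³⁰Th activity measured in the sediment, and multiplying by the terrigenous fraction from endmember modeling isolates the dust component. The site values below are illustrative, not numbers from the paper:

```python
# Standard 230Th production rate per metre of water column (dpm m^-3 yr^-1)
BETA_TH230 = 0.0267

def preserved_flux(water_depth_m, xs_th230_dpm_per_g):
    """Vertical sediment flux (g m^-2 yr^-1) via 230Th normalization:
    water-column 230Th production divided by excess 230Th in the sediment."""
    return BETA_TH230 * water_depth_m / xs_th230_dpm_per_g

def dust_flux(water_depth_m, xs_th230_dpm_per_g, terrigenous_fraction):
    """Dust component of the flux, scaled by the endmember-derived fraction."""
    return preserved_flux(water_depth_m, xs_th230_dpm_per_g) * terrigenous_fraction

# Illustrative core site: 3000 m water depth, 5 dpm/g excess 230Th, 60% dust
print(f"total flux: {preserved_flux(3000, 5.0):.1f} g m^-2 yr^-1")
print(f"dust flux:  {dust_flux(3000, 5.0, 0.6):.1f} g m^-2 yr^-1")
```

Because the ²³⁰Th production rate is fixed by uranium chemistry, this normalization corrects for sediment focusing and winnowing, which is what makes the down-core dust comparisons quantitative.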
