Amazing Science
Scooped by Dr. Stefan Gruenwald!

MIT and Harvard engineers create graphene electronics with DNA-based lithography


Chemical and molecular engineers at MIT and Harvard have successfully used templates made of DNA to cheaply and easily pattern graphene into nanoscale structures that could eventually be fashioned into electronic circuits.


Graphene, as you are surely aware by now, is a material with almost magical properties. It is the strongest and most electrically conductive material known to humankind. Semiconductor masters such as Intel and TSMC would absolutely love to use graphene to fashion computer chips capable of operating at hundreds of gigahertz while consuming tiny amounts of power. Unfortunately, though, graphene is much more difficult and expensive to work with than silicon — and, in its base state, it isn’t a semiconductor. The DNA patterning performed by MIT and Harvard seeks to rectify both of these issues by making graphene easy to work with, and thus easy to turn into a semiconductor for use in computer chips.


Late last year, Harvard’s Wyss Institute announced that it had discovered a technique for building intricately detailed DNA nanostructures out of DNA “Lego bricks.” These bricks are specially crafted strands of DNA that join together with other DNA bricks at a 90-degree angle. By joining enough of these bricks together, a three-dimensional 25-nanometer cube emerges. By altering which DNA bricks are available during this process, the Wyss Institute was able to form 102 distinct 3D shapes, as seen in the image and video below.


The MIT and Harvard researchers are essentially taking these shapes and binding them to a graphene surface with a molecule called aminopyrine. Once bound, the DNA is coated with a layer of silver, and then a layer of gold to stabilize it. The gold-covered DNA is then used as a mask for plasma lithography, where oxygen plasma burns away the graphene that isn’t covered. Finally, the DNA mask is washed away with sodium cyanide, leaving a piece of graphene that is an almost-perfect copy of the DNA template.


So far, the researchers have used this process — dubbed metallized DNA nanolithography — to create X and Y junctions, rings, and ribbons out of graphene. Nanoribbons, which are simply very narrow strips of graphene, are of particular interest because they have a bandgap — a feature that graphene doesn’t normally possess. A bandgap means that these nanoribbons have semiconductive properties, which means they might one day be used in computer chips. Graphene rings are also of interest, because they can be fashioned into quantum interference transistors — a new and not-well-understood transistor that connects three terminals to a ring, with the transistor’s gate being controlled by the flow of electrons around the ring.


Solar Power Achieves Grid Parity


Deutsche Bank has released a report concluding that the cost of unsubsidized solar power is about the same as the cost of electricity from the grid in India and Italy, and that by 2014 even more countries will achieve solar “grid parity.”


During 2013, China is expected to supplant Germany as the world’s biggest solar market. China expects to add 10 gigawatts of new solar projects this year, “more than double its previous target and three times last year’s expansion.”


In 2012, U.S. solar installations grew 73% over 2011 levels, driven by third-party leasing agreements that eliminate the upfront costs of rooftop installation. The price of installed PV systems fell 27%.
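Grid parity is, at bottom, a comparison between the levelized cost of solar energy (LCOE) and the retail grid price. A minimal sketch, using illustrative numbers rather than figures from the Deutsche Bank report, and ignoring discounting and panel degradation:

```python
def lcoe(capex, annual_kwh, lifetime_years, annual_opex=0.0):
    """Levelized cost of energy in $/kWh: lifetime cost / lifetime output."""
    total_cost = capex + annual_opex * lifetime_years
    total_kwh = annual_kwh * lifetime_years
    return total_cost / total_kwh

# Assumed numbers: a $10,000 rooftop system producing 6,000 kWh/year for 25 years.
solar_cost = lcoe(capex=10_000, annual_kwh=6_000, lifetime_years=25)
grid_price = 0.12  # assumed retail rate, $/kWh

print(f"solar: ${solar_cost:.3f}/kWh, grid parity reached: {solar_cost <= grid_price}")
```

With these assumptions solar comes in around $0.067/kWh; whether parity holds in a given market depends entirely on local system prices, insolation, and retail rates.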

Danielle Schaeffer's curator insight, April 9, 2013 8:45 AM

Even if solar power bears the same price as power from the grid, the homeowner is better off having it, and having control of it, instead of a utility company controlling it.

Zertrin's curator insight, April 12, 2013 4:21 AM

It is already more attractive to use the electricity you produce from photovoltaic panels directly than to sell it, and that will only become more true in the future.


Memory that never forgets: non-volatile DIMMs hit the market


The server world still waits for DDR4, the next generation of dynamic memory, to be ready for prime time. In the meantime, a new set of memory boards from Viking is looking to squeeze more performance out of servers not by providing faster memory, but by making it safer to keep more in memory and less on disk or SSD. Viking Technology has begun supplying dual in-line memory modules that combine DDR3 dynamic memory with NAND flash memory to create non-volatile RAM for servers and storage arrays—modules that don't lose their memory when the systems they're in lose power or shut down.


The ArxCis-NV DIMM, which Viking demonstrated at the Storage Networking Industry Association's SNW Spring conference in Orlando this week, plugs into standard DIMM memory slots in servers and RAID controller cards. Viking isn't the only player in the non-volatile DIMM game — Micron Technology and AgigA Tech announced their own NVDIMM effort in November — but they're first to market. The modules shipping now to a select group of server manufacturers have 4GB of dynamic RAM and 8GB of NAND memory. Modules with double those figures are planned for later in the year, and modules with 16GB of DRAM and 32GB of NAND are in the works for next year.


The ArxCis can be plugged into existing servers and RAID controllers today as a substitute for battery backed-up (BBU) memory modules. They are even equipped with batteries to power a last-gasp write to NAND memory in the event of a power outage. But the ArxCis is more than a better backup in the event of system failure. Viking's non-volatile DIMMs are primarily aimed at big in-memory computing tasks, such as high-speed in-memory transactional database systems and indices such as those used in search engines and other "hyper-scale" computing applications.  Facebook's "Unicorn" search engine system, for example, keeps massive indices in memory to allow for real-time response to user queries, as does the "type-ahead" feature in Google's search.


Viking's executives also claim that non-volatile DIMM cards can be paired with solid-state disks to extend the life and performance of the disks. DDR memory is much faster than the NAND memory used by SSDs, and it doesn't have the limited number of "writes" that flash memory has (see Lee Hutchinson's look at SSDs for an explanation of how SSDs "wear out"). Keeping more data in RAM for constant rewriting prevents the write "amplification" effect of SSD storage from being magnified and driving drives toward end-of-life that much faster. Since data gets written to the NAND memory on the DIMM only when the module detects a drop in voltage, the modules can last up to 10 years before the NAND memory "rots" and becomes unwritable, according to Viking's estimates.


The cost of NVDIMM memory still puts it out of reach of everyday applications, though the DIMMs will cost only "a few hundred dollars each," Viking Vice President of Marketing Adrian Proctor told ComputerWorld. The entry of Micron and others into the NVDIMM market could eventually drive costs down and make the modules practical in consumer devices, making "instant on" computing that much more instant.


DIY Brainless Robots Exhibit Collective Behavior


We’ve all seen the amazing capabilities of flocks of birds and schools of fish to move seemingly as one. Such collective behavior can be witnessed in almost all living systems. A lot of research is going into figuring out how swarm behavior works in order to mimic nature’s capabilities in swarm robotics. In these multirobot systems a large number of relatively simple robots can accomplish complex tasks through interdependent cooperation.

The ability of an individual agent to participate in collective behavior is often linked to cognition and social interaction, implying that swarmbots require computational power and sensors to function. But now scientists at Harvard University have demonstrated that brainless robots can self-organize into coherent collective motion.

The robots used in the experiment are BristleBots (Bbots). These are very simple robots that anyone can build for $5 from a toothbrush, a pager motor and a battery. When the brush head is pressed to the ground, the angled bristles give it forward movement. Spinning at 150 revolutions per second, the motor turns the brush into a self-propelling robot. The Bbots have no computing power or sensors.

The scientists Luca Giomi, Nico Hawley-Weld and L. Mahadevan of the School of Engineering and Applied Sciences custom-built two different kinds of Bbots: Walkers and Spinners. Both have an elliptical chassis, but Walkers have long bristles that make them move in a straight line, while the Spinners with their short bristles move in a circle.

The researchers placed the Bbots in a circular arena with upward-sloping boundaries. When the Bbots ran up against the edge, they were forced back. When fewer than ten Bbots populated the arena, they moved around randomly, but once they exceeded that number they self-organized and showed collective behavior. The Spinners grouped up and moved along the boundary together, and the Walkers eventually ended up standing still side by side.

In their paper Giomi and his colleagues point out that although the Bbots don’t have sensors they do sense each other and their environment through contact interaction. The elliptical shape of the Bbots, their movement and spatial interaction are sufficient to produce collective motion. This suggests swarm behavior need not be dependent on cognition and social skills but can also be achieved by mechanical intelligence.

The outcome of the study is significant for the development of swarm robotics. The Walkers and Spinners could serve as terrain explorers because they translate their interactions with the environment into dynamic behavior. Their lack of artificial intelligence and sensors makes them very cheap and robust allowing for the deployment of vast numbers of Bbots.

And, as mentioned in the video, it also raises the more philosophical question of how unintelligent ‘non-intelligent’ creatures actually are.


An abundance of medium-sized exoplanet worlds is challenging current planet-forming models


Guided by the example of our own Solar System, with its distinct sets of large and small worlds, early planet-formation models were based on the notion of ‘core accretion’. Dust swirling around a star in a protoplanetary disk can aggregate into small planetesimals of rock and ice, which collide and stick together. The inner part of the disk contains too little material for these cores to grow much bigger than Earth. But farther out, they can attain ten Earth masses or more, enough to attract a vast volume of gas and become Jupiter-like.


The detection, starting in 1995, of Jupiter-sized planets with orbits as short as a few Earth days contradicted these models. The theorists revised their models to allow these ‘hot Jupiters’ to form far from their star and then migrate in. Yet these models predicted that anything reaching super-Earth size should either become a gas giant or be swallowed by its star, creating a ‘planetary desert’ in this size range. Kepler’s discoveries wreck those predictions. “It’s a tropical rainforest, not a desert,” says Andrew Howard, an astronomer at the University of California, Berkeley. “We hope the theory is going to catch up.”


Kepler measures a planet’s size by detecting how much light it blocks as it passes in front of its star. For a handful of the super-Earths detected by Kepler, ground-based observations have also determined mass, by tracking the wobble of the host star induced by the planet’s gravity. And some of these super-Earths seem to have very low densities — indicating that they may have small rocky cores surrounded by large gas envelopes.
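The size measurement described here is simple geometry: the fractional dip in starlight during a transit is roughly (Rp/Rs)², so the planet's radius follows from the measured depth and the star's radius. A minimal sketch (limb darkening is ignored; the radii are standard reference values):

```python
import math

R_SUN_KM = 695_700.0
R_JUPITER_KM = 69_911.0
R_EARTH_KM = 6_371.0

def planet_radius_km(transit_depth, star_radius_km):
    """Transit depth ~ (Rp/Rs)**2, so Rp = Rs * sqrt(depth)."""
    return star_radius_km * math.sqrt(transit_depth)

# A 1% dip on a Sun-like star implies a roughly Jupiter-sized planet:
rp = planet_radius_km(0.01, R_SUN_KM)
print(f"{rp / R_JUPITER_KM:.2f} Jupiter radii")

# An Earth analog blocks only ~0.008% of the light, which is why Kepler
# needs such precise photometry to find small planets:
print(f"Earth-analog depth: {(R_EARTH_KM / R_SUN_KM) ** 2:.1e}")
```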

Kepler astronomer Jack Lissauer, of NASA's Ames Research Center, thinks that they may have begun as small cores in the outer parts of their solar systems, accreting a large amount of gas without reaching the point of runaway growth that leads to a true gas giant. Without the gravitational heft of a giant to hold in gas, such a planet would have a large, low-density atmosphere, but it could still grow to super-Earth size through a cooling process that shrinks the atmosphere and allows more gas to be drawn in, he says.


But that scenario may not explain smaller and denser super-Earths. Several such planets have already been detected, and Kepler is starting to reach the sensitivity required to spot them, says Greg Laughlin, an astronomer at the University of California, Santa Cruz. “Kepler is just seeing the tip of the iceberg.”


Nor can any current theory explain how super-Earths can sit so close to their stars. Lissauer says the problem lies in the migration portion of the models. But Norm Murray, an astrophysicist at the University of Toronto, is exploring other ways of forming super-Earths. Instead of assembling them and migrating them towards the star, Murray’s model first migrates rocky planetesimals and then allows them to accrete. “‘Migration then assembly’ is the catchphrase,” he says.


In any event, Laughlin says that modellers will probably find a way to explain the current observations. “They’ll scramble to fix the models,” he says. But it’s probably not the last time they’ll have to revisit their codes, he adds. “My prediction is that they’ll completely miss the next big thing, whatever that will be.”


Scientists 3D-print self-assembling 'living tissue' using just water and oil


Researchers have created networks of water droplets that mimic some properties of cells in biological tissues. Using a three-dimensional printer, a team at the University of Oxford, UK, assembled tiny water droplets into a jelly-like material that can flex like a muscle and transmit electric signals like chains of neurons. The work is published today in Science.


These networks, which can contain up to 35,000 droplets, could one day become a scaffold for making synthetic tissues or provide a model for organ functions, says co-author Gabriel Villar of Cambridge Consultants, a technology-transfer company in Cambridge, UK. “We want to see just how far we can push the mimicry of living tissue,” he says.


The network relies on each water droplet having a lipid coating, which forms when the droplets are in a finely-tuned mix of oil and a pure lipid.

The lipid molecules have a water-loving head, which sticks to the droplet's surface, and a water-fearing tail, which pokes out into the oily solution. When two lipid-coated droplets come together, each with its carpet of water-fearing tails, they stick to each other like Velcro, forming a lipid bilayer, similar to those in cell membranes. The bilayer creates a structural and functional connection between droplets.


Although previous studies have shown that lipid-coated droplets can form such connections, their watery composition and spherical shape made them tricky to assemble. “I already made a raft of droplets that stuck together,” says biomedical engineer David Needham of the University of Southern Denmark in Odense, who was not involved in the study. “But to print them is really an achievement.”



Hubble telescope spots death of a white dwarf 10 billion years ago


When a white dwarf explodes as a type Ia supernova, its death is so bright that its light can be detected across the Universe. A new observation using the Hubble Space Telescope identified the farthest type Ia supernova yet seen, at a distance of greater than 10 billion light-years. In the tradition of supernova surveys, this event was nicknamed for Woodrow Wilson, 28th President of the United States. The previous record-holder, Supernova Mingus, was about 350 million light-years closer to Earth.


White dwarfs are the remains of stars similar in mass to the Sun. Since such a star would have to live out its entire life to form a white dwarf, there are limits to how early in the Universe's history a type Ia supernova can explode. Only 8 white dwarf supernovas have been identified farther than 9 billion light-years away. (Some core-collapse supernovas, which are the explosions of very massive stars, have been seen farther than Supernova Wilson.) Since all such explosions happen in a similar way, cosmologists use them to measure the expansion rate of the Universe.
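The standard-candle logic works through the distance modulus: every type Ia peaks near the same absolute magnitude (about −19.3), so comparing that with the observed apparent magnitude yields the distance. A sketch with an illustrative apparent magnitude, ignoring redshift and dust corrections:

```python
def luminosity_distance_pc(apparent_mag, absolute_mag=-19.3):
    """Distance modulus: m - M = 5*log10(d / 10 pc)  =>  d = 10**((m - M + 5) / 5) pc.
    absolute_mag is the typical type Ia peak brightness (standard-candle assumption)."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# A hypothetical type Ia observed peaking at apparent magnitude 24:
d_pc = luminosity_distance_pc(24.0)
print(f"{d_pc / 1e9:.2f} Gpc")  # a few gigaparsecs, i.e. billions of light-years
```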


Astronomers found this violent event by comparing the light from several separate long exposures of the same patch of the sky, known as CANDELS (the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey). Bright as it was, the distance was so great that Supernova Wilson appeared only as an enhancement of the luminosity of its host galaxy. The researchers subtracted the light of the galaxy alone from the combined supernova-and-galaxy light, then analyzed the residual light to identify the event as a type Ia.


The Universe was only a few billion years old when Supernova Wilson exploded, nearly as early as such an event could possibly occur. The early era of Supernova Wilson's explosion means it was likely the result of two white dwarfs merging rather than a single white dwarf exceeding its maximum mass. This is because the most massive white dwarfs require more time to form than the Universe's existence had provided.


Research points to abrupt and widespread climate shift in the Sahara 5,000 years ago


As recently as 5,000 years ago, the Sahara—today a vast desert in northern Africa, spanning more than 3.5 million square miles—was a verdant landscape, with sprawling vegetation and numerous lakes. Ancient cave paintings in the region depict hippos in watering holes, and roving herds of elephants and giraffes—a vibrant contrast with today's barren, inhospitable terrain.

The Sahara's "green" era, known as the African Humid Period, likely lasted from 11,000 to 5,000 years ago, and is thought to have ended abruptly, with the region drying back into desert within a span of one to two centuries. Now researchers at MIT, Columbia University and elsewhere have found that this abrupt climate change occurred nearly simultaneously across North Africa. The team traced the region's wet and dry periods over the past 30,000 years by analyzing sediment samples off the coast of Africa. Such sediments are composed, in part, of dust blown from the continent over thousands of years: The more dust that accumulated in a given period, the drier the continent may have been.

From their measurements, the researchers found that the Sahara emitted only one-fifth as much dust during the African Humid Period as the region does today. Their results, which suggest a far greater change in Africa's climate than previously estimated, will be published in Earth and Planetary Science Letters. David McGee, an assistant professor in MIT's Department of Earth, Atmospheric and Planetary Sciences, says the quantitative results of the study will help scientists determine the influence of dust emissions on both past and present climate change.

This study, McGee says, is the first in which researchers have combined the two techniques—endmember modeling and thorium-230 normalization—a pairing that produced very precise measurements of dust emissions through tens of thousands of years. In the end, the team found that during some dry periods North Africa emitted more than twice the dust generated today. Through their samples, the researchers found the African Humid Period began and ended very abruptly, consistent with previous findings. However, they found that 6,000 years ago, toward the end of this period, dust emissions were one-fifth of today's levels, far lower than previous estimates.

McGee says these new measurements may give scientists a better understanding of how dust fluxes relate to climate by providing inputs for climate models. Natalie Mahowald, a professor of earth and atmospheric sciences at Cornell University, says the group's combination of techniques yielded more robust estimates of dust than previous studies. "Dust is one of the most important aerosols for climate and biogeochemistry," Mahowald says. "This study suggests very large fluctuations due to climate over the last 10,000 years, which has enormous implications for human-derived climate change."


Discovery of a collapsed "Earth-sized" star as massive as our sun and about 75,000 times as dense as KOI-256


The white orb in this video is more than meets the telescope lens. When astronomers saw red dwarf KOI-256 (large red object)—about 400 light-years away in the constellation Draco—dim every 28 hours or so, they first thought that a Jupiter-sized planet was passing in front. But upon looking closer at the Kepler space telescope data, they were surprised by how sharply the light dipped. Sensing something strange, the astronomers measured how much the red dwarf wobbled from the object's gravitational pull and found it wobbling 1,000 times more intensely than it should.

It turned out that the "planet" was actually a collapsed star, the researchers report in an upcoming issue of The Astrophysical Journal, and the dimness was caused when the smaller star passed behind KOI-256, blocking its light. When the astronomers returned to the Kepler data to look for the collapsed star passing in front, they found a surprisingly mild dip in brightness. That's because, although the tiny star is Earth-sized, it's as massive as our sun and about 75,000 times as dense as KOI-256. So its intense gravity acts as a lens, magnifying KOI-256's light in Earth's direction.
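The quoted density is easy to sanity-check: a solar mass packed into an Earth-sized sphere has a mean density of order a billion kilograms per cubic meter, roughly a metric ton per cubic centimeter (the article's 75,000× figure is relative to the red dwarf KOI-256, not to water). A rough check using standard values:

```python
import math

M_SUN_KG = 1.989e30   # solar mass
R_EARTH_M = 6.371e6   # mean Earth radius

def mean_density(mass_kg, radius_m):
    """Mean density of a uniform sphere: mass / ((4/3) * pi * r**3)."""
    return mass_kg / ((4.0 / 3.0) * math.pi * radius_m ** 3)

white_dwarf = mean_density(M_SUN_KG, R_EARTH_M)
print(f"{white_dwarf:.2e} kg/m^3")  # ~1.8e9 kg/m^3, about 1.8 tonnes per cm^3
```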


Scientists develop fusion rocket technology in lab and aim for Mars


Researchers at the University of Washington say they've built all the pieces for a fusion-powered rocket system that could get a crew to Mars in 30 days. Now they just have to put the pieces together and see if they work.


"If we can pull off a fusion demonstration in a year, with hundreds of thousands of dollars ... there might be a better, cheaper, faster path to using fusion in other applications," said John Slough, a research assistant professor of aeronautics and astronautics.


Billions upon billions of dollars have been spent on fusion energy research over the past half-century — at places like the National Ignition Facility in California, where scientists are zapping deuterium-tritium pellets with lasers; Sandia National Laboratories in New Mexico, the home of the world's most powerful laboratory radiation source; and the ITER experimental facility in France, where the world's biggest magnetic plasma chamber is being built.


So far, none of those multibillion-dollar projects have hit break-even, let alone the fusion jackpot. Timetables for the advent of fusion energy applications have repeatedly shifted to the right, reviving the old joke that the dawn of the fusion age will always be 30 years away. "The only answer to the 'always 30 years in the future' argument is that we simply demonstrate it," Slough said. And that's what he and his colleagues intend to do this summer, at their lab inside a converted warehouse in Redmond, Wash.


It's obvious that nuclear fusion works: A prime example of the phenomenon can be seen every day, just 93 million miles away. Like other stars, our sun generates its power by combining lighter elements (like hydrogen) into heavier elements (like helium) under tremendous gravitational pressure. A tiny bit of mass from each nucleus is converted directly into energy, demonstrating the power of the equation E = mc².
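That mass-to-energy conversion can be made concrete with E = mc². Hydrogen-to-helium fusion converts roughly 0.7% of the fuel mass into energy (a textbook figure, not from the article), so a single kilogram of hydrogen yields an enormous amount:

```python
C_M_PER_S = 299_792_458.0  # speed of light

def mass_to_energy_joules(mass_kg):
    """E = m * c**2: energy released by converting rest mass."""
    return mass_kg * C_M_PER_S ** 2

# ~0.7% of 1 kg of hydrogen becomes energy when fused to helium:
energy = mass_to_energy_joules(0.007)
print(f"{energy:.2e} J")  # ~6.3e14 J, on the order of 150 kilotons of TNT
```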


Thermonuclear bombs operate on a similar principle. But it's not practical to set off bombs to produce peaceful energy, so how can the fusion reaction be controlled on a workable scale?


Slough and his colleagues are working on a system that shoots ringlets of metal into a specially designed magnetic field. The ringlets collapse around a tiny droplet of deuterium, a hydrogen isotope, compressing it so tightly that it produces a fusion reaction for a few millionths of a second. The reaction should result in a significant energy gain.


"It has gain, that's why we're doing it," Slough said. "It's just that the form the energy takes at the end is hot, magnetized metal plasma. ... The problem in the past was, what would you use it for? Because it kinda blows up."


That's where the magnetic field plays another role: In addition to compressing the metal rings around the deuterium target, the field would channel the spray of plasma out the back of the chamber, at a speed of up to 67,000 mph (30,000 meters per second). If a rocket ship could do that often enough — say, at least once a minute — Slough says you could send a human mission to Mars in one to three months, rather than the eight months it took to send NASA's Curiosity rover.
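Why the exhaust speed matters follows from the Tsiolkovsky rocket equation, Δv = vₑ·ln(m₀/m₁): the propellant mass ratio a mission needs grows exponentially once its Δv exceeds the exhaust velocity. A sketch comparing the article's 30,000 m/s plasma exhaust with a typical chemical rocket, for an assumed (purely illustrative) 50 km/s fast-transit Δv:

```python
import math

def mass_ratio(delta_v_ms, exhaust_velocity_ms):
    """Tsiolkovsky rocket equation rearranged: m0/m1 = exp(delta_v / v_e)."""
    return math.exp(delta_v_ms / exhaust_velocity_ms)

DELTA_V = 50_000.0  # m/s, assumed total mission delta-v (illustrative)

fusion = mass_ratio(DELTA_V, 30_000.0)   # exhaust speed quoted in the article
chemical = mass_ratio(DELTA_V, 4_500.0)  # typical chemical-rocket exhaust speed

print(f"fusion mass ratio: {fusion:.1f}, chemical: {chemical:.1e}")
```

With these numbers the fusion rocket needs a mass ratio of about 5, while a chemical rocket would need tens of thousands of times its dry mass in propellant, which is the whole case for high exhaust velocity.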





Unborn lizards can erupt from their eggs days early if vibrations hint at a threat from a hungry predator


Researchers have long known that an array of factors can affect when eggs laid by all kinds of creatures finally hatch. Some fish eggs, for instance, hatch only at certain light or temperature levels, while fungal infections can prompt lizard eggs to crack open early. Chemical or physical signals sent by predators can prompt some frog embryos to speed up their breakouts, while others delay hatching in a bid to stay safe. In lizards and other reptiles, however, such "environmentally cued hatching" strategies aren't well understood.


That curtain began to lift a bit a few years ago, when Doody and student Philip Paull of Monash University in Australia began studying a population of delicate skinks (Lampropholis delicata) in a park near Sydney. There, the common lizards laid white, leathery eggs the size of aspirin capsules in rock crevices. The eggs generally incubate for 4 to 8 weeks before hatching, but Doody got a surprise in 2010, when he and Paull were plucking eggs from the crevices to make measurements. "They started hatching in our hands, at just a touch—it shocked us," Doody recalls. "It turned into a real mess, they were just hatching everywhere."


Soon, Doody launched a more systematic study of the phenomenon. In two lab experiments, the researchers compared the hatching dates for skink eggs exposed to vibrations with those of eggs that weren't shaken. And in three field experiments, they poked and prodded eggs with a small stick, or squeezed them gently with their fingers to measure how sensitive the eggs were to the kinds of disturbances a predator, such as a snake, might cause. They also measured how far the premature hatchlings could dash.

Together, the experiments offer "compelling evidence" that embryonic skinks can detect and respond to predator-like signals, the authors write in the March 2013 issue of Copeia. The vibrated laboratory eggs, for instance, hatched an average of 3.4 days earlier than the unshaken controls. And in the field, the hatching of disturbed eggs was "explosive," they note; the newborns often broke out of the egg and then sprinted more than one-half meter to nearby cover in just a few seconds. "It's amazing," Doody says. "It can be hard to see because it happens so quick."


Another step toward understanding of high-temperature superconductivity


Superconductors can revolutionize the way we use and distribute energy, change modes of transportation (e.g. Japan's magnetic levitation trains) and give us 100% energy-efficient technology. So why hasn't that happened yet? The problem is temperature! Most superconductors only work when they are cooled close to the forbidding absolute zero. The solution lies with those that work at higher temperatures.

Superconductors are materials that allow electrical current to flow with no energy loss, a phenomenon that could lead to a vastly more energy-efficient future (imagine computers that never overheat). Although most superconductors work close to absolute zero (0 K, or -273.15°C), some can operate at higher temperatures (around -135°C) – but how that happens is something of a mystery. Publishing in a recent PNAS article, Fabrizio Carbone's Laboratory for Ultrafast Microscopy and Electron Scattering (LUMES) at EPFL has developed a method that can shed light on "high-temperature" superconductivity.

How conventional superconductivity works: When electricity passes through a conductor, e.g. a wire, some energy is lost because of resistance. This is not always a bad thing, since it can be used for heat (radiators) or light (light bulbs). But when it comes to things like national energy grids and high-voltage cables, electrical resistance (losses of up to 7% in some grids) can mean wasted money and constant wear. This is where superconductors come in. These are materials that, when cooled down enough, conduct electricity with no resistance – and therefore no loss. How? As superconductors cool below a certain temperature, their atoms fall in line and "nudge" charge-carrying electrons together to form pairs called Cooper pairs. These electron pairs obey quantum statistics and form an unusual state of matter (a Bose–Einstein condensate) that is not affected by electrical resistance.

Recently, the lab of Fabrizio Carbone at EPFL addressed the issue by developing a novel method that can advance our understanding of high-temperature superconductivity. High-temperature superconductors (HTS) show promise because they can operate at temperatures around -135°C – still low, but considerably cheaper and more feasible to reach than the temperatures conventional superconductors require. However, progress in HTS is limited because, even though we know that Cooper pairs are involved in high-temperature superconductivity, there is no consensus as to how they are formed.

Carbone's group was able, for the first time, to directly observe the formation of Cooper pairs in real time in a superconducting HTS and determine how the process affects the optical properties of the superconductor. Using a novel approach, the scientists cooled an HTS to its superconducting temperature and then repeatedly fired laser pulses at it to break the Cooper pairs back into single electrons. As the Cooper pairs broke and re-formed, they caused a periodic change in the color spectrum of the superconductor. By measuring the color change, the researchers were able to directly study what happens in a superconducting HTS. What they discovered was that Cooper pair formation follows a completely different path than in conventional superconductors.

Carbone's findings mark the first direct observation of Cooper pair formation in HTS superconductivity. They also provide scientists with a powerful tool to observe the phenomenon in real time. The hope is that by extending this innovative approach to different materials, we can begin to understand how high-temperature superconductivity really works.

Victoria Hull's curator insight, October 26, 2014 12:46 PM

This article starts off by explaining that the downside of superconductors is that they require very low temperatures (near absolute zero) to work. It then explains that superconductors work because, at low temperature, the atoms line up and nudge electrons together, forming Cooper pairs. These pairs allow conduction by a different path than the one used in a standard conductor.

Group 8's curator insight, October 26, 2014 6:10 PM

This paper shows how we understand the processes that give superconducting materials their desired properties. Superconductors come in two kinds: conventional ones, which only work at temperatures approaching absolute zero, and high-temperature superconductors (HTS). The HTS materials are of more practical interest, since a material that superconducts only near 0 K has few practical uses. The study shows that in a superconductor, electrons form pairs called Cooper pairs and enter a new state of matter, a Bose-Einstein condensate, which is completely unaffected by resistance; how this state forms is the question of interest. The study observed the formation by using laser pulses to break the pairs and watching them re-form in real time, thereby observing the pathway by which they form.

Scooped by Dr. Stefan Gruenwald!

Conclusion of 175 scientists: A potential portal to other universes seems to have closed


The sharpest map yet made of light from the infant universe shows no evidence of "dark flow" – a stream of galaxy clusters rushing in the same direction that hinted at the existence of a multiverse. A potential portal to other universes seems to have closed. That, at least, is the conclusion of 175 scientists working with data from the European Space Agency's Planck spacecraft. But champions of dark flow are not ready to give up yet, including one Planck scientist who says his team's analysis is flawed.


The first suggestion that the flow existed came in 2008, when a group led by Alexander Kashlinsky of NASA's Goddard Space Flight Center in Greenbelt, Maryland, scrutinised what was then the best map of the cosmic microwave background radiation, the big bang's afterglow. NASA's WMAP satellite measured the temperature of this ancient light, revealing fluctuations in the density of matter in the very early universe.


The light's wavelength can also change noticeably when photons are scattered off ionised gas moving through space, providing a way to probe the velocity of such gas. Kashlinsky's WMAP analysis found that hundreds of gas-rich galaxy clusters appeared to be streaming towards a region in the sky between the constellations Vela and Centaurus.


This flow suggested that the universe had somehow become lopsided, as if space-time itself was behaving like a tilted table and matter was sliding off, says Kashlinsky. That goes against the standard model of cosmology, which says that the universe is increasingly uniform on larger scales, making it unlikely that structures big enough to produce such a tilt would form. Some researchers suggested that, instead, other universes could be pulling on matter in ours, creating the flow. But other groups looking at WMAP data did not detect the controversial motion.


The latest search is based on a new, higher-resolution map of the cosmic microwave background from Planck. The Planck team says their multi-pronged analysis also found no evidence of galaxy clusters gushing along in a coherent stream.


"The Planck team's paper appears to rule out the claims of Kashlinsky and collaborators," says David Spergel of Princeton University, who was not involved in the work. If there is no dark flow, there is no need for exotic explanations for it, such as other universes, says Planck team memberElena Pierpaoli at the University of Southern California, Los Angeles. "You don't have to think of alternatives."


But it is too soon to rule out dark flow entirely, argues Fernando Atrio-Barandela at the University of Salamanca in Spain. A member of the Planck team, he withheld his name from his colleagues' paper because he says they overestimated the uncertainty in their measurements, making what might be a subtle signal of dark flow look like mere noise. "One has to be very careful not to wash the baby out with the bathwater," agrees Kashlinsky. He and Atrio-Barandela are running their own analysis with the new Planck data and expect to have results in just a few months.

Scooped by Dr. Stefan Gruenwald!

Thunderstorms contain invisible pulses of ‘dark lightning’ - powerful radiation

Scientists investigate previously unknown sprays of X-rays and bursts of gamma rays.


A lightning bolt is one of nature’s most over-the-top phenomena, rarely failing to elicit at least a ping of awe no matter how many times a person has witnessed one. With his iconic kite-and-key experiments in the mid-18th century, Benjamin Franklin showed that lightning is an electrical phenomenon, and since then the general view has been that lightning bolts are big honking sparks no different in kind from the little ones generated by walking in socks across a carpeted room.


But scientists recently discovered something mind-bending about lightning: Sometimes its flashes are invisible, just sudden pulses of unexpectedly powerful radiation. It’s what Joseph Dwyer, a lightning researcher at the Florida Institute of Technology, has termed dark lightning.


Unknown to Franklin but now clear to a growing roster of lightning researchers and astronomers is that along with bright thunderbolts, thunderstorms unleash sprays of X-rays and even intense bursts of gamma rays, a form of radiation normally associated with such cosmic spectacles as collapsing stars. The radiation in these invisible blasts can carry a million times as much energy as the radiation in visible lightning, but that energy dissipates quickly in all directions rather than remaining in a stiletto-like lightning bolt.


Dark lightning appears sometimes to compete with normal lightning as a way for thunderstorms to vent the electrical energy that gets pent up inside their roiling interiors, Dwyer says. Unlike with regular lightning, though, people struck by dark lightning, most likely while flying in an airplane, would not get hurt. But according to Dwyer’s calculations, they might receive in an instant the maximum safe lifetime dose of ionizing radiation — the kind that wreaks the most havoc on the human body.

Scooped by Dr. Stefan Gruenwald!

Drawing Einstein’s face with math: start with x(t)=-38/9sin(11/7-3t)

Wolfram Alpha collects famous faces into functions: Obama, 2pac, Sergey Brin.


A post on StackExchange from a couple of months ago inquired how to create the line drawings in the style that Wolfram Alpha has curated. Some debate ensued about whether the equations that produced the drawings were handwritten, but one commenter, Simon Woods, described a way to produce the curves.


Woods’ method, adapted from another comment by Rahul Narain, involves reverse engineering the curves using Wolfram's Mathematica by converting an image to grayscale, extracting the contours, and plotting the curve using a function “tocurve” that takes the line, a number of modes, and “symbolic parameter t” that parameterizes the line. The “Fourier” function in Mathematica will approximate the line with sinusoids, and the “Rationalize” function converts all the numbers to rational to produce equations that look similar to WolframAlpha’s collection.
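The pipeline Woods describes can be sketched outside Mathematica as well. The following Python/NumPy sketch is an assumption-laden stand-in for the "Fourier"/"Rationalize" recipe above — the `fourier_curve` helper, the circle test shape, and the mode count are all illustrative, not taken from the StackExchange answer. It truncates the FFT of a closed contour so that x(t) and y(t) become short sums of sinusoids, which is the essence of the Wolfram Alpha curves:

```python
# Approximate a closed contour with a truncated Fourier series by keeping
# only the lowest-frequency FFT coefficients of z = x + iy.
import numpy as np

def fourier_curve(points, n_modes):
    """Reconstruct a closed (N, 2) contour from its n_modes lowest FFT modes."""
    z = points[:, 0] + 1j * points[:, 1]
    coeffs = np.fft.fft(z) / len(z)
    keep = np.zeros_like(coeffs)
    keep[:n_modes + 1] = coeffs[:n_modes + 1]   # DC term + positive frequencies
    keep[-n_modes:] = coeffs[-n_modes:]         # matching negative frequencies
    approx = np.fft.ifft(keep * len(z))
    return np.column_stack([approx.real, approx.imag])

# Sanity check on a shape with a known answer: a unit circle is exactly one
# Fourier mode, so n_modes = 1 reconstructs it to machine precision.
t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
circle = np.column_stack([np.cos(t), np.sin(t)])
recon = fourier_curve(circle, 1)
print(np.max(np.abs(recon - circle)))  # ~1e-15
```

A face outline would simply need more modes; the trade-off between mode count and fidelity is what makes the published equations long but finite.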


This procedure would cover one closed-line element of a drawing, but many of the portraits on WolframAlpha have multiple elements (for instance, the color in Barack Obama’s hair, or Adele’s eyes, or Alexander Graham Bell’s beard). But once you have all that, drawing Sergey Brin wearing a set of Google Glasses is extremely easy. 

Scooped by Dr. Stefan Gruenwald!

Wearable Gesture Control from Thalmic Labs Senses Your Muscles


With visions of Minority Report, many a user has hoped to control gadgets by wildly waving at a Kinect like a symphony conductor. Now there's another way to make your friends laugh at you thanks to Thalmic Labs' MYO armband, which senses motion and electrical activity in your muscles to let you control your computer or other device via Bluetooth 4.0. The company says its proprietary sensor can detect signals right down to individual fingers before you even move them, which -- coupled with an extremely sensitive 6-axis motion detector -- makes for a highly responsive experience. Feedback to the user is given through haptics in the device, which also packs an ARM processor and onboard lithium-ion batteries. MYO is now up for a limited pre-order, with Thalmic saying you won't be charged until it ships near year's end, while developers can also grab the API. If you're willing to risk some ridicule to be first on the block to grab one, hit the source.

Scooped by Dr. Stefan Gruenwald!

Madagascar hit by the most severe locust plague since the 1950s


A severe plague of locusts has infested about half of Madagascar, threatening crops and raising concerns about food shortages, a UN agency says. The UN's Food and Agricultural Organization (FAO) said billions of the plant-devouring insects could cause hunger for 60% of the population.

About $22m (£14.5m) was urgently needed to fight the plague in a country where many people are poor, the FAO added.


It was the worst plague to hit the island since the 1950s, the FAO said.

FAO locust control expert Annie Monard told BBC Focus on Africa the plague posed a major threat to the Indian Ocean island. "The last one was in the 1950s and it had a duration of 17 years so if nothing is done it can last for five to 10 years, depending on the conditions," she said.


"Currently, about half the country is infested by hoppers and flying swarms - each swarm made up of billions of plant-devouring insects," the FAO said in a statement.


"FAO estimates that about two-thirds of the island country will be affected by the locust plague by September 2013 if no action is taken."


It said it needed donors to give more than $22m in emergency funding by June so that a full-scale spraying campaign could be launched to fight the plague.


The plague threatened pasture for livestock and rice crops - the main staple in Madagascar, the FAO said.


"Nearly 60% of the island's more than 22m people could be threatened by a significant worsening of hunger in a country that already had extremely high rates of food insecurity and malnutrition," it added.


An estimated 80% of people in Madagascar live on less than a dollar a day.


The Locust Control Centre in Madagascar had treated 30,000 hectares of farmland since last October, but a cyclone in February made the situation worse, the FAO said.


The cyclone not only damaged crops but created "optimal conditions for one more generation of locusts to breed", it added.

Scooped by Dr. Stefan Gruenwald!

The information paradox: What happens to matter falling into a black hole?

Will an astronaut who falls into a black hole be crushed or burned to a crisp?


In March 2012, Joseph Polchinski began to contemplate suicide — at least in mathematical form. A string theorist at the Kavli Institute for Theoretical Physics in Santa Barbara, California, Polchinski was pondering what would happen to an astronaut who dived into a black hole. Obviously, he would die. But how?


According to the then-accepted account, he wouldn’t feel anything special at first, even when his fall took him through the black hole’s event horizon: the invisible boundary beyond which nothing can escape. But eventually — after hours, days or even weeks if the black hole was big enough — he would begin to notice that gravity was tugging at his feet more strongly than at his head. As his plunge carried him inexorably downwards, the difference in forces would quickly increase and rip him apart, before finally crushing his remnants into the black hole’s infinitely dense core.


But Polchinski’s calculations, carried out with two of his students — Ahmed Almheiri and James Sully — and fellow string theorist Donald Marolf at the University of California, Santa Barbara (UCSB), were telling a different story. In their account, quantum effects would turn the event horizon into a seething maelstrom of particles. Anyone who fell into it would hit a wall of fire and be burned to a crisp in an instant.


The team’s verdict, published in July 2012, shocked the physics community. Such firewalls would violate a foundational tenet of physics that was first articulated almost a century ago by Albert Einstein, who used it as the basis of general relativity, his theory of gravity. Known as the equivalence principle, it states in part that an observer falling in a gravitational field — even the powerful one inside a black hole — will see exactly the same phenomena as an observer floating in empty space. Without this principle, Einstein’s framework crumbles.


Well aware of the implications of their claim, Polchinski and his co-authors offered an alternative plot ending in which a firewall does not form. But this solution came with a huge price. Physicists would have to sacrifice the other great pillar of their science: quantum mechanics, the theory governing the interactions between subatomic particles.


The result has been a flurry of research papers about firewalls, all struggling to resolve the impasse, none succeeding to everyone's satisfaction. Steve Giddings, a quantum physicist at UCSB, describes the situation as "a crisis in the foundations of physics that may need a revolution to resolve".


With that thought in mind, black-hole experts came together last month at CERN, Europe’s particle-physics laboratory near Geneva, Switzerland, to grapple with the issue face to face. They hoped to reveal the path towards a unified theory of ‘quantum gravity’ that brings all the fundamental forces of nature under one umbrella — a prize that has eluded physicists for decades.


The firewall idea “shakes the foundations of what most of us believed about black holes”, said Raphael Bousso, a string theorist at the University of California, Berkeley, as he opened his talk at the meeting. “It essentially pits quantum mechanics against general relativity, without giving us any clues as to which direction to go next.”


The roots of the firewall crisis go back to 1974, when physicist Stephen Hawking at the University of Cambridge, UK, showed that quantum effects cause black holes to run a temperature. Left in isolation, the holes will slowly spew out thermal radiation — photons and other particles — and gradually lose mass until they evaporate away entirely (see Figure).


These particles aren’t the firewall, however; the subtleties of relativity guarantee that an astronaut falling through the event horizon will not notice this radiation. But Hawking’s result was still startling — not least because the equations of general relativity say that black holes can only swallow mass and grow, not evaporate.


Hawking’s argument basically comes down to the observation that in the quantum realm, ‘empty’ space isn’t empty. Down at this sub-sub-microscopic level, it is in constant turmoil, with pairs of particles and their corresponding antiparticles continually popping into existence before rapidly recombining and vanishing. Only in very delicate laboratory experiments does this submicroscopic frenzy have any observable consequences. But when a particle–antiparticle pair appears just outside a black hole’s event horizon, Hawking realized, one member could fall in before the two recombined, leaving the surviving partner to fly outwards as radiation. The doomed particle would balance the positive energy of the outgoing particle by carrying negative energy inwards — something allowed by quantum rules. That negative energy would then get subtracted from the black hole’s mass, causing the hole to shrink.
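The shrinking described above can be made quantitative. The following is a standard textbook result of Hawking's analysis, added here for context rather than stated in the article: a black hole of mass M radiates like a black body at the Hawking temperature

```latex
T_H = \frac{\hbar c^{3}}{8 \pi G M k_B}
```

Because T_H scales as 1/M, smaller black holes are hotter and radiate faster, and the evaporation lifetime grows roughly as M³ — which is why stellar-mass black holes evaporate only over timescales vastly longer than the age of the universe.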


Hawking’s original analysis has since been refined and extended by many researchers, and his conclusion is now accepted almost universally. But with it came the disturbing realization that black-hole radiation leads to a paradox that challenges quantum theory.


Quantum mechanics says that information cannot be destroyed. In principle, it should be possible to recover everything there is to know about the objects that fell into a black hole by measuring the quantum state of the radiation coming out. But Hawking showed that it was not that simple: the radiation coming out is random. Toss in a kilogram of rock or a kilogram of computer chips and the result will be the same. Watch the black hole until it dies, and there would still be no way to tell how it formed or what fell into it.


The deadlock was eventually broken by a discovery made by Juan Maldacena, a physicist then at Harvard University in Cambridge, Massachusetts. Maldacena’s insight built on an earlier proposal that any three-dimensional (3D) region of our Universe can be described by information encoded on its two-dimensional (2D) boundary, in much the same way that laser light can encode a 3D scene on a 2D hologram. “We used the word ‘hologram’ as a metaphor,” says Leonard Susskind, a string theorist at Stanford University in California, and one of those who came up with the proposal. “But after doing more mathematics, it seemed to make literal sense that the Universe is a projection of information on the boundary.”


What Maldacena came up with was a concrete mathematical formulation of the hologram idea that made use of ideas from superstring theory, which posits that elementary particles are composed of tiny vibrating loops of energy. His model envisages a 3D universe containing strings and black holes that are governed only by gravity, bounded by a 2D surface on which elementary particles and fields obey ordinary quantum laws without gravity. Hypothetical residents of the 3D space would never see this boundary because it is infinitely far away. But that wouldn’t matter: anything happening in the 3D universe could be described equally well by equations in the 2D universe, and vice versa. “I found that there’s a mathematical dictionary that allows you to go back and forth between the languages of these two worlds,” Maldacena explains.


One of the most promising resolutions, according to Susskind, has come from Daniel Harlow, a quantum physicist at Princeton University in New Jersey, and Patrick Hayden, a computer scientist at McGill University in Montreal, Canada. They considered whether an astronaut could ever detect the paradox with a real-world measurement. To do so, he or she would first have to decode a significant portion of the outgoing Hawking radiation, then dive into the black hole to examine the infalling particles. The pair’s calculations show that the radiation is so tough to decode that the black hole would evaporate before the astronaut was ready to jump in. “There’s no fundamental law preventing someone from measuring the paradox,” says Harlow. “But in practice, it’s impossible.”


Giddings, however, argues that the firewall paradox requires a radical solution. He has calculated that if the entanglement between the outgoing Hawking radiation and its infalling twin is not broken until the escaping particle has travelled a short distance away from the event horizon, then the energy released would be much less ferocious, and no firewall would be generated. This protects the equivalence principle, but requires some quantum laws to be modified. At the CERN meeting, participants were tantalized by the possibility that Giddings’ model could be tested: it predicts that when two black holes merge, they may produce distinctive ripples in space-time that can be detected by gravitational-wave observatories on Earth.


There is another option that would save the equivalence principle, but it is so controversial that few dare to champion it: maybe Hawking was right all those years ago and information is lost in black holes. Ironically, it is John Preskill of the California Institute of Technology, the man who bet against Hawking’s claim, who raised this alternative, at a workshop on firewalls at Stanford at the end of last year. “It’s surprising that people are not seriously thinking about this possibility because it doesn’t seem any crazier than firewalls,” he says — although he adds that his instinct is still that information survives.


The reluctance to revisit Hawking’s old argument is a sign of the immense respect that physicists have for Maldacena’s dictionary relating gravity to quantum theory, which seemingly proved that information cannot be lost. “This is the deepest ever insight into gravity because it links it to quantum fields,” says Polchinski, who compares Maldacena’s result — which has now accumulated close to 9,000 citations — to the nineteenth-century discovery that a single theory connects light, electricity and magnetism. “If the firewall argument had been made in the early 1990s, I think it would have been a powerful argument for information loss,” says Bousso. “But now nobody wants to entertain the possibility that Maldacena is wrong.”


Maldacena is flattered that most physicists would back him in a straight-out fight against Einstein, although he believes it won’t come to that. “To completely understand the firewall paradox, we may need to flesh out that dictionary,” he says, “but we won’t need to throw it out.”


The only consensus so far is that this problem will not go away any time soon. During his talk, Polchinski fielded all proposed strategies for mitigating the firewall, carefully highlighting what he sees as their weaknesses. “I’m sorry that no one has gotten rid of the firewall,” he concludes. “But please keep trying.”


Scooped by Dr. Stefan Gruenwald!

Scott Reef: Coral Reef Back From the Dead After Coral Bleaching Nearly Destroyed It


Back in 1998, Scott Reef was a ghost town. Rising ocean temperatures caused by El Niño had triggered a catastrophic bleaching event that decimated the enormous reef system off the coast of Western Australia. The prognosis was grim—more than 249 kilometers away from its nearest neighbors, the Scott system had no hope of being reseeded by their coral larvae, a process scientists believed was vital to reef recovery. But just 15 years later, Scott Reef has regrown into the vibrant ecosystem pictured above, and its isolation may have been the key to its survival. Although the Scott system did not benefit from the arrival of larvae from other reefs, an abundance of plant-eating fish in the area kept dangerous algae in check and allowed the few remaining local larvae to hang on long enough to begin the slow but steady process of repopulating the reef, researchers report online today in Science. The reason those hungry fish were there to save the day? There were no humans around to hunt them. So while climate change may be wreaking havoc on coral reefs around the world, these ecosystems might stand a chance of bouncing back once humans are no longer around to bother them.

Scooped by Dr. Stefan Gruenwald!

Ants follow Fermat's principle of least time


Ants have long been known to choose the shortest of several routes to a food source, but what happens when the shortest route is not the fastest? This situation can occur, for example, when ants are forced to travel on two different surfaces, where they can walk faster on one surface than on the other. In a new study, scientists have found that ants behave the same way as light does when traveling through different media: both paths obey Fermat's principle of least time, taking the fastest route rather than the most direct one. Besides revealing insight into ant communities, the findings could offer inspiration to researchers working on solving complex problems in robotics, logistics, and information technology.
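Fermat's principle is easy to demonstrate numerically: minimize the total travel time over all possible border-crossing points, and Snell's law falls out. The sketch below is an illustrative toy, not the study's model — the nest/food geometry and the two walking speeds are assumptions:

```python
# Minimize t(x) = sqrt(a^2 + x^2)/v1 + sqrt(b^2 + (d - x)^2)/v2 over the
# crossing point x on a straight boundary between two "media". At the
# optimum, Snell's law holds: sin(theta1)/sin(theta2) = v1/v2.
import math

def best_crossing(v1, v2, a=1.0, b=1.0, d=2.0, steps=200000):
    """Grid-search the crossing point that minimizes total travel time."""
    best_x, best_t = 0.0, float("inf")
    for i in range(steps + 1):
        x = d * i / steps
        t = math.hypot(a, x) / v1 + math.hypot(b, d - x) / v2
        if t < best_t:
            best_x, best_t = x, t
    return best_x

v1, v2 = 2.0, 1.0            # ants walk twice as fast on the smooth side
x = best_crossing(v1, v2)

# Snell's law check at the optimal crossing point:
sin1 = x / math.hypot(1.0, x)
sin2 = (2.0 - x) / math.hypot(1.0, 2.0 - x)
print(sin1 / sin2)  # ~2.0, i.e. v1/v2
```

The same calculation explains why the trail in the study "refracts" at the smooth/rough felt border: the bend angle encodes the ratio of walking speeds, exactly as a light ray's bend encodes the ratio of refractive indices.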

The figure shows a ‘refracted’ trail of Wasmannia auropunctata workers at the medium border between smooth (white) and rough (green) felt. The position of the food is on the rough felt. The density of workers on the rough felt is higher than on the smooth felt because travel speed is lower. In addition, it appears, although not very obvious, as if the ants on the rough felt ‘float’ on top of the felt hairs, indicating the difficulty of walking on this substrate.

Scooped by Dr. Stefan Gruenwald!

For Early Primates, a Night Was Filled With Color

A genetic examination of tarsiers indicates that the saucer-eyed primates developed three-color vision when they were still nocturnal.


A new study suggests that primates’ ability to see in three colors may not have evolved as a result of daytime living, as has long been thought. The findings, published in the journal Proceedings of the Royal Society B, are based on a genetic examination of tarsiers, the nocturnal, saucer-eyed primates that long ago branched off from monkeys, apes and humans.


By analyzing the genes that encode photopigments in the eyes of modern tarsiers, the researchers concluded that the last ancestor that all tarsiers had in common had highly acute three-color vision, much like that of modern-day primates.


Such vision would normally indicate a daytime lifestyle. But fossils show that the tarsier ancestor was also nocturnal, strongly suggesting that the ability to see in three colors somehow predated the shift to daytime living.

The coexistence of the two normally incompatible traits suggests that primates were able to function during twilight or bright moonlight for a time before making the transition to a fully diurnal existence.


“Today there is no mammal we know of that has trichromatic vision that lives during night,” said an author of the study, Nathaniel J. Dominy, associate professor of anthropology at Dartmouth. “And if there’s a pattern that exists today, the safest thing to do is assume the same pattern existed in the past.


“We think that tarsiers may have been active under relatively bright light conditions at dark times of the day,” he added. “Very bright moonlight is bright enough for your cones to operate.”

QMP's curator insight, April 19, 2013 5:50 PM

This is a great article about how the night time helped early primates to evolve the colored vision that we experience today.

Scooped by Dr. Stefan Gruenwald!

'Quantum illumination' proof lights the way to improving quantum encryption and radar


The idea, proposed by Seth Lloyd, a quantum engineer at the Massachusetts Institute of Technology, was to ‘entangle’ the beam sent out from a device with a reference beam, so that each photon in one beam would be matched to a photon in the other, like soldiers in two marching lines. Any noise would involve photons coming in at random, which could be told apart and ignored because they lack a correlated twin. By filtering out noise in this way, Lloyd suggested, quantum illumination offered a means to detect objects that would be invisible using non-entangled beams.


Physicists were surprised by Lloyd's theoretical result. Noise is normally thought to destroy entanglement. This is why quantum cryptography — a highly secure scheme for sharing information — is usually done inside fibre-optic cables. But Lloyd calculated that a trace of entanglement can remain despite the presence of noise.


Now, for the first time, researchers have experimentally demonstrated a version of Lloyd’s proposal. The result, by Marco Genovese, a physicist at the National Institute for Metrological Research in Turin, Italy, and his colleagues, is published on the preprint server arXiv and is forthcoming in Physical Review Letters.


To generate a pair of entangled beams, the researchers sent laser light through a ‘nonlinear’ crystal, which split the beam of incoming photons into two beams of correlated, lower-energy photons. They sent one of these beams to a reference detector and the other through a space, next to which they placed a primary detector. Nearby they added a light-scattering device, similar to a disco ball, to generate noise.


When no object was in the space, the beam just went through and Genovese and his colleagues recorded only a few random photon correlations typical of background noise. When they placed a small piece of glass in the space, however, photons scattered off it and the number of correlated hits on the detectors rose by 10 times. To check that this signal was indeed caused by quantum illumination, the researchers repeated the experiment with non-entangled light. They recorded few photon correlations, both when the object was present and when it was not.
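The coincidence-counting logic behind the experiment can be illustrated with a classical toy model. To be clear, this is a hedged sketch and not real entanglement or the team's analysis: detectors, rates, and the factor-of-ten contrast are all made-up illustrative parameters. It shows only why counting *correlated* clicks separates a weak reflected signal from strong independent noise:

```python
# Toy model: a correlated pair makes both detectors fire in the same time
# slot, while noise photons hit each detector independently. Coincidence
# counts therefore rise sharply when a reflecting object adds pairs.
import random

random.seed(0)

def coincidences(n_slots, pair_prob, noise_prob):
    """Count time slots in which both detectors click."""
    hits = 0
    for _ in range(n_slots):
        pair = random.random() < pair_prob            # correlated pair arrives
        ref = pair or (random.random() < noise_prob)  # reference detector
        sig = pair or (random.random() < noise_prob)  # signal-side detector
        if ref and sig:
            hits += 1
    return hits

without_object = coincidences(100000, 0.00, 0.05)  # accidental coincidences only
with_object = coincidences(100000, 0.02, 0.05)     # a weak 2% pair rate added
print(without_object, with_object)
```

With these assumed rates, accidental coincidences occur in only ~0.25% of slots, so even a 2% pair rate lifts the coincidence count by roughly an order of magnitude — the same kind of contrast the experiment reports.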


“It’s a good experiment, and it’s a good first start in trying to demonstrate the effect,” says Lloyd, although he points out that the method he originally proposed was slightly different, and potentially more powerful.

Genovese suggests that the experiment challenges the common belief that quantum schemes are highly susceptible to noise. Indeed, there have been theoretical proposals to exploit the basic physics of quantum illumination to improve the robustness of some quantum schemes, including quantum cryptography.


“Up to now, every [quantum] experiment performed was strictly limited by noise,” says Genovese. “This perspective highlights the possibility of using quantum protocols in more realistic situations.”

Scooped by Dr. Stefan Gruenwald!

WIRED: New face-sized tarantula discovered in Sri Lanka


A new type of tarantula about the size of your face has been found in northern Sri Lanka. Scientists found the spiders -- with a leg span up to 8 inches (20cm) across -- living in trees and the old doctor's quarters of a hospital in Mankulam.


Covered in beautiful, ornate markings, the spiders belong to the genus Poecilotheria, known as "Pokies" for short. These are the tiger spiders, an arboreal group indigenous to India and Sri Lanka that are known for being colourful, fast, and venomous. As a group, the spiders are related to a class of South American tarantula that includes the Goliath bird-eater, the world's largest.

The new spider, named Poecilotheria rajaei after a local police inspector who helped the team navigate post-civil war northern Sri Lanka, differs from similar species primarily in the markings on its legs and underside, which bears a pink abdominal band.


"This species has enough significant differences to separate it from the other species," said Peter Kirk, editor of the British Tarantula Society's journal, which published a study describing the spider in December. But, Kirk notes, taxonomic determinations based on physical descriptions can provoke disagreement. "I absolutely would love to see DNA sampling done - on all the species of Poecilotheria," he said.


The spider's unique leg markings include geometric patterns with daffodil-yellow and grey inlays on the first and fourth legs. It was first seen during a Sri Lankan arachnid survey led by Ranil Nanayakkara, co-founder of Sri Lanka's Biodiversity Education and Research. In October 2009, a local villager presented Nanayakkara and his team with a dead male specimen that didn't resemble known Poecilotheria in the area. Before the team could begin describing the presumptive new species, they needed more individuals. Scouring the semi-evergreen, forested area for females and juveniles required the help of police inspector Michael Rajakumar Purajah, who accompanied the team through areas just beginning to recover from a civil war. Eventually, the team found enough spiders - including the ones hiding in a hospital - to assemble a detailed description of the new arachnids.

Scooped by Dr. Stefan Gruenwald!

Phinergy demonstrates aluminum-air battery capable of fueling an electric vehicle for 1000 miles

Phinergy, an Israeli developer of metal-air energy systems, has demonstrated a new type of aluminum-air battery capable of providing enough energy to power an electric vehicle (EV) for up to 1000 miles at a time, with occasional stops to take on more water. The company claims to have developed new technology that prevents carbon dioxide from entering the system; in the past, CO2 intrusion has led to breakdown of the materials used in such batteries.

Metal-air batteries get their energy from the reaction between oxygen and a metal. In this new battery system, aluminum serves as the anode and oxygen from the air as the cathode. The system is built from aluminum plates that give up their energy and must eventually be replaced (via recycling, the company says). Water serves as the electrolyte, so it too must be replenished on a regular basis. The company claims that each plate holds enough energy to carry an EV approximately 20 miles and that the current system holds 50 plates at a time, which together add up to a charge capacity of 1000 miles (the system needs a water fill-up every 200 miles). Once the plates are depleted, they must be replaced.
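The figures the company quotes are internally consistent, as a quick back-of-the-envelope check shows (the numbers are taken from the article above; the variable names are ours):

```python
# Sanity-check the range figures Phinergy quotes for its aluminum-air pack.
MILES_PER_PLATE = 20   # claimed range carried by each aluminum plate
PLATES_PER_PACK = 50   # plates the current system holds at one time
WATER_INTERVAL = 200   # miles between water (electrolyte) top-ups

total_range = MILES_PER_PLATE * PLATES_PER_PACK      # 20 * 50 = 1000 miles
# Water stops needed en route (no stop required at the final destination).
water_stops = total_range // WATER_INTERVAL - 1      # 1000 / 200 - 1 = 4

print(f"Pack range: {total_range} miles, with {water_stops} water stops")
```

So a full set of plates matches the claimed 1000-mile range, with four water stops along the way before the plates themselves must be swapped out.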

Scooped by Dr. Stefan Gruenwald!

Simulations indicate Milky Way may have up to 2000 black holes in its halo

Valery Rashkov and Piero Madau, space scientists with the University of California, have uploaded a paper to the preprint server arXiv in which they suggest that the Milky Way galaxy likely has between 70 and 2000 intermediate-mass black holes (IMBHs) in its outer edges. They came to this conclusion by building a computer model that mimics what they believe occurred when galaxies, and by extension their black holes, merged during the galaxy's formative years.

In building their simulation, the researchers began with the idea that when galaxies form, they have a "seed" black hole at their center. Over time, they suggest, some early galaxies ran into one another and merged, causing the black holes at their respective centers to merge as well. But not all couplings worked out, their simulations show: because of the gravitational-wave recoil created by such collisions, the smaller black hole could be ejected and, as a result, travel all the way to the outer reaches of the galaxy, where it would reside alone in space. The simulations also showed that such ejections occurred less than 20 percent of the time, and that the ejected IMBHs fell into two classes: naked and clothed. The naked IMBHs were those whose sub-halos were destroyed in the merger, while the clothed ones were those whose dark matter satellites survived. Naked IMBHs, they say, would number slightly fewer than those that were clothed.
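The counting logic above can be illustrated with a toy Monte Carlo. This is not the researchers' model: the ejection and naked/clothed probabilities below are illustrative assumptions, chosen only to match the article's qualitative claims (ejections under 20 percent of mergers, naked IMBHs slightly fewer than clothed ones):

```python
import random

def simulate_mergers(n_mergers, p_eject=0.20, p_naked=0.45, seed=42):
    """Toy model: for each black-hole merger, the smaller black hole is
    ejected by gravitational-wave recoil with probability p_eject.
    An ejected IMBH is 'naked' (sub-halo destroyed) with probability
    p_naked, otherwise 'clothed' (dark-matter satellite survives).
    Returns counts of (naked, clothed, retained)."""
    rng = random.Random(seed)
    naked = clothed = retained = 0
    for _ in range(n_mergers):
        if rng.random() < p_eject:
            if rng.random() < p_naked:
                naked += 1
            else:
                clothed += 1
        else:
            retained += 1
    return naked, clothed, retained

naked, clothed, retained = simulate_mergers(10_000)
print(f"naked: {naked}, clothed: {clothed}, retained: {retained}")
```

With these assumed parameters, most mergers leave the combined black hole in place, and the minority of ejected IMBHs splits into a slightly smaller naked population and a slightly larger clothed one, mirroring the paper's qualitative result.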

Conversely, when galaxies merge and neither of their black holes is ejected, the two combine to form a single, more massive black hole that remains in a stable state. This can happen many times, of course, leading to galaxies whose central black holes have nearly unfathomable mass. Confirming the simulations will likely be a daunting task, as observers can't see the IMBHs directly: light cannot escape the immense gravity they exert. The hope is that observers will be able to detect the clothed ones by noting the material that remains around them, or by observing the motion of other bodies close enough to be affected by their gravity.
