Amazing Science
Scooped by Dr. Stefan Gruenwald!

Brain regions thought to be uniquely human share many similarities with monkeys

New research suggests a surprising degree of similarity in the organization of regions of the brain that control language and complex thought processes in humans and monkeys. The study also revealed some key differences. The findings may provide valuable insights into the evolutionary processes that established our ties to other primates but also made us distinctly human.

The research concerns the ventrolateral frontal cortex, a region of the brain known for more than 150 years to be important for cognitive processes including language, cognitive flexibility, and decision-making. "It has been argued that to develop these abilities, humans had to evolve a completely new neural apparatus; however others have suggested precursors to these specialized brain systems might have existed in other primates," explains lead author Dr. Franz-Xaver Neubert of the University of Oxford, in the UK.

By using non-invasive MRI techniques in 25 people and 25 macaques, Dr. Neubert and his team compared ventrolateral frontal cortex connectivity and architecture in humans and monkeys. The investigators were surprised to find many similarities in the connectivity of these regions. This suggests that some uniquely human cognitive traits may rely on an evolutionarily conserved neural apparatus that initially supported different functions. Additional research may reveal how slight changes in connectivity accompanied or facilitated the development of distinctly human abilities.

The researchers also noted some key differences between monkeys and humans. For example, ventrolateral frontal cortex circuits in the two species differ in the way that they interact with brain areas involved with hearing.

"This could explain why monkeys perform very poorly in some auditory tasks and might suggest that we humans use auditory information in a different way when making decisions and selecting actions," says Dr. Neubert.

A region in the human ventrolateral frontal cortex -- called the lateral frontal pole -- does not seem to have an equivalent area in the monkey. This area is involved with strategic planning, decision-making, and multi-tasking abilities.


Ancient Plague's DNA Recovered From A 1,500-Year-Old Tooth

Plague may have hastened the fall of the Roman Empire. Its DNA reveals ancient roots in China.

Scientists have reconstructed the genetic code of a strain of bacteria that caused one of the most deadly pandemics in history nearly 1,500 years ago. They did it by finding the skeletons of people killed by the plague and extracting DNA from traces of blood inside their teeth. This plague struck in the year 541, under the reign of the Roman emperor Justinian, so it's usually called the Justinian plague. The emperor actually got sick himself but recovered. He was one of the lucky ones.

"Some of the estimates are that up to 50 million people died," says evolutionary biologist David Wagner at Northern Arizona University. "It's thought that the Justinian plague actually led partially to the downfall of the Roman Empire." The plague swept through Europe, northern Africa and parts of Asia. Historians say that when it arrived in Constantinople, thousands of bodies piled up in mass graves. People started wearing name tags so they could be identified if they suddenly collapsed. Given the descriptions, scientists suspected that it was caused by the bacterium Yersinia pestis — the same kind of microbe that later caused Europe's Black Death in the 14th century.

The bacteria are spread by fleas. After someone gets infected from a flea bite, the microbes travel to the nearest lymph node and start multiplying. "And so you get this mass swelling in that lymph node, which is known as a bubo," says Wagner. "That's where the term bubonic plague comes from."

The Justinian plague has been hard to study scientifically. But recently, archaeologists stumbled upon a clue outside Munich.

Housing developers were digging up farmland when they uncovered a burial site with graves that dated as far back as the Justinian plague.

"They found some [graves] that had multiple individuals buried together, which is oftentimes indicative of an infectious disease," Wagner says. "And so in this particular case, we examined material from two different victims. One of those victims was buried together with another adult and a child, so it's presumed that they all may have died of the plague at the same time."

Skeletons were all that was left of the pair. But inside their teeth was dental pulp that still contained traces of blood — and the blood contained the DNA of plague bacteria.


Brain responds to tiniest speech details

Scientists begin to unravel how neurons recognize specific language sounds.

The sounds that make up speech, built from slight variations in vowels and consonants, trigger specific responses in the part of the brain responsible for speech processing, researchers report today in Science. Phonemes — such as the 'buh' sound in 'bad' or the 'duh' in 'dad' — are thought to be the smallest linguistic elements that change a word's meaning. But the study suggests that the brain's superior temporal gyrus can recognize even smaller bits of speech, called features, that may be common across languages.

“We’ve known for a pretty long time now what area of the brain is really important for processing speech sounds,” says lead author Edward Chang, a neuroscientist at the University of California in San Francisco. “What we haven’t known is the details about how individual sounds are processed.”

Chang's team made the discovery by working with six patients who were preparing to undergo brain surgery to treat epilepsy. An array of electrodes was implanted in the brain of each person as part of pre-surgical testing. Each volunteer then listened to speech samples comprising 500 sentences spoken by 400 people that covered the entire inventory of phonetic American English sounds.

When researchers compared the electrode data to the different phonemes heard by the volunteers, they found that phonemes with similar features seemed to elicit characteristic electric responses in neurons located within each patient's superior temporal gyrus.

Chang sees this as the starting point for understanding the mechanism that underlies the brain's seemingly effortless decoding of a stream of speech. “One of the things that happens in speech and language is that we transform sounds into meaning,” he says. A set of feature units in some combination gives rise to a phoneme; those combine to create a word, and together, groups of words create meaning.

Josef Rauschecker, a neuroscientist at Georgetown University in Washington DC, notes that monkeys are known to have neurons that respond to phonetic features. The discovery of a similar capability in the human brain opens the door to studying the evolution of speech recognition, he says.

Identifying the neural mechanisms that make up normal phonetic coding in the brain can lead to a better understanding of abnormalities, says Mitchell Steinschneider, a neuroscientist at Albert Einstein College of Medicine of Yeshiva University in New York. For people with hearing loss, for instance, this might mean the development of more sophisticated processors to aid artificial hearing, he adds.

Laura Perez's curator insight, January 31, 2014 5:48 AM

It seems our brain detects fragments even smaller than the phoneme... Wow!


Stephen Hawking claims: 'There are no black holes with an event horizon'


Most physicists foolhardy enough to write a paper claiming that “there are no black holes” — at least not in the sense we usually imagine — would probably be dismissed as cranks. But when the call to redefine these cosmic crunchers comes from Stephen Hawking, it’s worth taking notice. In a paper posted online, the physicist, based at the University of Cambridge, UK, and one of the creators of modern black-hole theory, does away with the notion of an event horizon, the invisible boundary thought to shroud every black hole, beyond which nothing, not even light, can escape.

In its stead, Hawking’s radical proposal is a much more benign “apparent horizon”, which only temporarily holds matter and energy prisoner before eventually releasing them, albeit in a more garbled form.

“There is no escape from a black hole in classical theory,” Hawking told Nature. Quantum theory, however, “enables energy and information to escape from a black hole”. A full explanation of the process, the physicist admits, would require a theory that successfully merges gravity with the other fundamental forces of nature. But that is a goal that has eluded physicists for nearly a century. “The correct treatment,” Hawking says, “remains a mystery.”

Hawking posted his paper on the arXiv preprint server on 22 January. He titled it, whimsically, 'Information preservation and weather forecasting for black holes', and it has yet to pass peer review. The paper was based on a talk he gave via Skype at a meeting at the Kavli Institute for Theoretical Physics in Santa Barbara, California, in August 2013.


Scientists accidentally drill into magma and could now be on the verge of producing volcano-powered electricity


Can enormous heat deep in the earth be harnessed to provide energy for us on the surface? A promising report from a geothermal borehole project that accidentally struck magma – the same fiery, molten rock that spews from volcanoes – suggests it could.

The Icelandic Deep Drilling Project, IDDP, has been drilling shafts up to 5km deep in an attempt to harness the heat in the volcanic bedrock far below the surface of Iceland.

But in 2009 their borehole at Krafla, northeast Iceland, reached only 2,100m deep before unexpectedly striking a pocket of magma intruding into the Earth’s upper crust from below, at searing temperatures of 900-1000°C.

This borehole, IDDP-1, was the first in a series of wells drilled by the IDDP in Iceland looking for usable geothermal resources. The special report in this month's Geothermics journal details the engineering feats and scientific results that came from the decision not to plug the hole with concrete, as in a previous case in Hawaii in 2007, but instead to attempt to harness the incredible geothermal heat.

Wilfred Elders, professor emeritus of geology at the University of California, Riverside, co-authored three of the research papers in the Geothermics special issue with Icelandic colleagues.

“Drilling into magma is a very rare occurrence, and this is only the second known instance anywhere in the world,“ Elders said. The IDDP and Iceland’s National Power Company, which operates the Krafla geothermal power plant nearby, decided to make a substantial investment to investigate the hole further.

Elders said that the success of the drilling was “amazing, to say the least”, adding: “This could lead to a revolution in the energy efficiency of high-temperature geothermal projects in the future.”

The well funnelled superheated, high-pressure steam for months at temperatures of over 450°C – a world record. In comparison, geothermal resources in the UK rarely reach higher than around 60-80°C.

The magma-heated steam was measured to be capable of generating 36MW of electrical power. While relatively modest compared to a typical 660MW coal-fired power station, this is considerably more than the 1-3MW of an average wind turbine, and more than half of the Krafla plant’s current 60MW output.
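As a quick sanity check, the power comparisons quoted above reduce to simple ratios. A minimal sketch using the article's own figures (the 3 MW turbine is the upper end of the quoted 1-3 MW range):

```python
# Comparing the power figures quoted in the article.
BOREHOLE_MW = 36.0        # electrical power the magma-heated steam could generate
KRAFLA_PLANT_MW = 60.0    # current output of the Krafla geothermal plant
WIND_TURBINE_MW = 3.0     # upper end of the 1-3 MW range for an average turbine

print(BOREHOLE_MW / KRAFLA_PLANT_MW)  # 0.6 -> more than half the plant's output
print(BOREHOLE_MW / WIND_TURBINE_MW)  # 12.0 -> at least a dozen large wind turbines
```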

Most importantly, it demonstrated that it could be done. "Essentially, IDDP-1 is the world's first magma-enhanced geothermal system, the first to supply heat directly from molten magma," Elders said. The borehole was being set up to deliver steam directly into the Krafla power plant when a valve failed, requiring the borehole to be plugged. Elders added that although the borehole had to be sealed, the aim is to repair it or drill another well nearby.


Bose-Einstein Condensate Made at Room Temperature for First Time


The quantum mechanical phenomenon known as Bose-Einstein condensation (BEC) was first demonstrated in 1995, when experiments proved that the septuagenarian theory did in fact exist in the physical world. Of course, to achieve the phenomenon, a state near absolute zero (-273 Celsius, -459 Fahrenheit) had to be created.

Now researchers at IBM’s Binnig and Rohrer Nano Center have been able to achieve the BEC at room temperature using a specially developed polymer, a laser, and some mirrors.

IBM believes that this experiment could potentially be used in the development of novel optoelectronic devices, including energy-efficient lasers and ultra-fast optical switches. One application for BEC is for the building of so-called atom lasers, which could have applications ranging from atomic-scale lithography to measurement and detection of gravitational fields.

For the first time, the IBM team achieved it at room temperature by placing a thin polymer film—only 35 nanometers thick—between two mirrors and then shining a laser into the configuration. The bosonic particles are created as the light travels through the polymer film and bounces back and forth between the two mirrors.

While this BEC state of matter only lasts for a few picoseconds (trillionths of a second), the IBM researchers believe that it exists long enough to create a source of laser-like light or an optical switch that could be used in optical interconnects.

“That BEC would be possible using a polymer film instead of the usual ultra-pure crystals defied our expectations,” said Dr. Thilo Stöferle, a physicist at IBM Research, in a press release. “It’s really a beautiful example of quantum mechanics where one can directly see the quantum world on a macroscopic scale.”

Now that the researchers have managed to trigger the effect, they are looking to gain more control over it. In the process they will be evaluating how the effect could best be exploited for a range of applications. One interesting application that will be examined is using the BEC in analog quantum simulations of macroscopic quantum phenomena such as superconductivity, which is extremely difficult to model with today's simulation approaches.


DNA sequencing shows that ancient hunter had blue eyes and dark skin


A hunter-gatherer who lived in Europe some 7,000 years ago probably had blue eyes and dark skin, a combination that has largely disappeared from the continent in the millennia since, scientists said Tuesday.

The discovery, published in the journal Nature this week, was made by scientists from the United States, Europe and Australia who analyzed ancient DNA extracted from a male tooth found in a cave in northern Spain.

"We have the stereotype that blue eyes are found only in light-skinned people but that's not necessarily the case," lead researcher Carles Lalueza-Fox said in a telephone interview Tuesday with The Associated Press.

Lalueza-Fox, who works at the Institute of Evolutionary Biology in Barcelona, Spain, said the man's skin would have been darker than most modern Europeans, while his eyes may have resembled those of Scandinavians, his closest genetic relatives today. The combination of blue eyes and dark skin, which is sometimes seen in people with mixed European and African ancestry, may once have been common among ancient European hunter-gatherers, he said.

The researchers also found that the man had genes indicating he was poor at digesting milk and starch, abilities that only spread among Europeans with the arrival of Neolithic farmers from the Middle East. The arrival of this group is also believed to have introduced several diseases associated with proximity to animals, along with the genes that helped resist them.

But the hunter-gatherer, whose remains were found in the La Braña caves near the Spanish city of León, already had some genes that would have helped him fight diseases such as measles, flu and smallpox. This came as a surprise to researchers, indicating that the genetic transition was already under way 7,000 years ago, Lalueza-Fox said. The lack of such genes among pre-Columbian populations in the Americas was one of the reasons they were so susceptible to these diseases when the Europeans arrived.

Researchers are hoping to make further discoveries from a second skeleton found at the site, said Lalueza-Fox.


Nanoscale heat engine exceeds standard efficiency limit


In 2012, a team of physicists from Germany proposed a scheme for realizing a nanoscale heat engine composed of a single ion. Like a macroscale heat engine, the theoretical nanoscale version can convert heat into mechanical work by taking advantage of the temperature difference between two thermal reservoirs. Because the single-ion heat engine is so small, at the time the physicists noted that it had the potential to tap into the quantum regime and experience quantum effects.

Now in a new paper, the physicists, from the Universities of Mainz and Erlangen-Nürnberg in Germany, have theoretically shown that a nanoscale heat engine can take advantage of nonthermal effects.

"Our theoretical and numerical findings show that the performance of quantum heat engines may be enhanced by coupling them to engineered nonthermal reservoirs, like squeezed reservoirs," said coauthor Eric Lutz, Physics Professor at the University of Erlangen-Nürnberg. "These results follow from the application of the second law of thermodynamics to a reservoir configuration that is more general than usually considered in textbooks. From a theoretical point of view, they indicate that the second law is less restrictive away from equilibrium."

The physicists showed that when the high-temperature thermal reservoir to which the quantum heat engine is attached is "squeezed," the heat engine's efficiency at maximum power dramatically increases and can exceed the standard Carnot limit by a factor of two. Since the power of an engine vanishes at maximum efficiency, the efficiency at maximum power is the quantity of prime interest for practical applications.

As an expression of the second law of thermodynamics, Carnot's result places a fundamental limit on a heat engine's maximum efficiency. However, this limit holds only for the particular configuration that involves two thermal reservoirs at different temperatures.
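For orientation, the two classical benchmarks discussed here can be computed directly from the reservoir temperatures. A minimal sketch (the 300 K / 600 K values are illustrative, not from the paper; the Curzon-Ahlborn expression is the standard formula for efficiency at maximum power of an endoreversible engine):

```python
import math

def carnot_efficiency(t_cold, t_hot):
    """Upper bound on efficiency for any engine between two thermal reservoirs."""
    return 1.0 - t_cold / t_hot

def curzon_ahlborn_efficiency(t_cold, t_hot):
    """Standard efficiency at maximum power for an endoreversible engine."""
    return 1.0 - math.sqrt(t_cold / t_hot)

t_cold, t_hot = 300.0, 600.0  # illustrative temperatures in kelvin
print(round(carnot_efficiency(t_cold, t_hot), 2))          # 0.5
print(round(curzon_ahlborn_efficiency(t_cold, t_hot), 2))  # 0.29
```

A squeezed reservoir changes these bounds because it is characterized by more than just a temperature; the "factor of two above Carnot" quoted above refers to the efficiency at maximum power with squeezing, not to a violation of the second law.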

The engine proposed here has only one thermal reservoir, since the reservoir that is squeezed is considered nonthermal. While thermal reservoirs are characterized only by their temperatures, nonthermal reservoirs can be controlled in additional ways, such as by squeezing.

As the physicists explain, squeezing is a quantum optics concept that has been shown to be a useful tool in high-precision spectroscopy, quantum information, quantum cryptography, and other areas. However, the use of squeezed thermal reservoirs in quantum thermodynamics has been largely unexplored until now.

The physicists' simulations showed that this heat engine can be experimentally realized with current technology involving a single ion and laser reservoirs. The simulations revealed that such a heat engine could realistically operate at maximum power with an efficiency that is up to four times larger than the efficiency obtained with two thermal reservoirs, and a factor of two above the standard Carnot limit.

In the future, these dramatic improvements in efficiency through squeezing could lead to the realization of more efficient nanoengines.

"We succeeded recently to trap ions and plan to verify the predicted results in the lab," Lutz said. "We are currently investigating heat pumps and the options to scale the number of ions up."


New, unusually large Tsamsa bacteriophage kills anthrax bacterium


From a zebra carcass on the plains of Namibia in Southern Africa, an international team of researchers has discovered a new, unusually large bacteriophage that infects the bacterium that causes anthrax. The novel bacteriophage could eventually open up new ways to detect, treat or decontaminate the anthrax bacillus and its relatives that cause food poisoning.

The virus was isolated from samples collected from carcasses of zebras that died of anthrax in Etosha National Park, Namibia. The anthrax bacterium, Bacillus anthracis, forms spores that survive in soil for long periods. Zebras are infected when they pick up the spores while grazing; the bacteria multiply and when the animal dies, they form spores that return to the soil as the carcass decomposes.

While anthrax is caused by a bacterium that invades and kills its animal host, bacteriophages, literally "bacteria eaters" are viruses that invade and kill bacterial hosts.

The researchers also noticed that the new virus, named Bacillus phage Tsamsa, is unusually large, with a giant head, a long tail and a large genome, placing it among the largest known bacteriophages.

Tsamsa infects not only B. anthracis but also some closely related bacteria, including strains of Bacillus cereus, which can cause food poisoning. Sequencing the genome allowed researchers to identify the gene for lysin, an enzyme that the virus uses to kill bacterial cells, that has potential use as an antibiotic or disinfecting agent.


Direct measurements of the wave nature of matter, previously only known from theory


At the heart of quantum mechanics is the wave-particle duality: matter and light possess both wave-like and particle-like attributes. Typically, the wave-like properties are inferred indirectly from the behavior of many electrons or photons, though it's sometimes possible to study them directly. However, there are fundamental limitations to those experiments—namely information about the wave properties of matter that is inherently inaccessible.

And therein lies a loophole: two groups used indirect experiments to reconstruct the wave structure of electrons. A.S. Stodolna and colleagues manipulated hydrogen atoms to measure their electron's wave structure, validating more than 30 years of theoretical work on the phenomenon known as the Stark effect. A second experiment by Daniel Lüftner and collaborators reconstructed the electronic structure of individual organic molecules through repeated scanning, with each step providing a higher resolution. In both cases, the researchers were able to match theoretical predictions to their results, verifying some previously challenging aspects of quantum mechanics.

Neither a wave nor a particle description can describe all experimental results obtained by physicists. Photons interfere with each other and themselves like waves when they pass through openings in a barrier, yet they show up as individual points of light on a phosphorescent screen. Electrons create orbital patterns inside atoms described by three-dimensional waves, yet they undergo collisions as if they were particles. Certain experiments are able to reconstruct the distribution of electric charge inside materials, which appears very wave-like, yet the atoms look like discrete bodies in those same experiments.

The wave functions in the Stark effect have a peculiar mathematical property, one which Stodolna and colleagues recreated in the lab. They separated individual hydrogen atoms from hydrogen sulfide (H2S) molecules, then subjected them to a series of laser pulses to induce specific energy transitions inside the atoms. By measuring the ways the light scattered, the researchers were able to recreate the predicted wave functions—the first time this has been accomplished. The authors also argued that this method, known as photoionization microscopy, could be used to reconstruct wavefunctions for other atoms and molecules.

Lüftner and colleagues took a different approach and examined the wave functions of organic molecules chemically attached (adsorbed) to a silver surface. Specifically, they looked at pentacene (C22H14) and the easy-to-remember compound perylene-3,4,9,10-tetracarboxylic dianhydride (or PTCDA, C24H8O6). Unlike hydrogen, the wave functions for these molecules cannot be calculated exactly; they usually require "ab initio" computer models.

The researchers were particularly interested in finding the phase, that bit of the wave function that can't be measured directly. They determined that they could reconstruct it by using the particular way the molecules bonded to the surface, which enhanced their response to photons of a specific wavelength. The experiment involved taking successive iterative measurements by exciting the molecules using light, then measuring the angles at which the photons were scattered away.

Reconstructing the phase of the wave function required exploiting the particular mathematical form it took in this system. Specifically, the waves had a relatively sharp edge, allowing the researchers to make an initial guess and then refine the value as they took successive measurements. Even with this sophisticated process, they could determine the phase only up to an arbitrary constant, something entirely to be expected from fundamental quantum principles. Nevertheless, they were able to experimentally reconstruct the entire wave function of a molecule; previously there was no way to check whether calculated wave functions were accurate or not.
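The guess-and-refine procedure described here is closely related to generic iterative phase retrieval, in which a known real-space constraint (standing in below for the "sharp edge" the researchers exploited) is alternated with the measured magnitudes. A minimal error-reduction sketch, not the authors' actual reconstruction code:

```python
# Generic Gerchberg-Saxton-style "error reduction" phase retrieval.
# Illustrative toy only: a flat object with a known support region.
import numpy as np

def retrieve_phase(measured_magnitude, support, n_iter=200, seed=0):
    """Recover a complex field from Fourier magnitudes plus a known support."""
    rng = np.random.default_rng(seed)
    # Initial guess: measured magnitudes with random phases.
    phases = rng.uniform(0.0, 2.0 * np.pi, measured_magnitude.shape)
    field = np.fft.ifft2(measured_magnitude * np.exp(1j * phases))
    for _ in range(n_iter):
        field = field * support                # enforce the real-space constraint
        spectrum = np.fft.fft2(field)
        # Keep the measured magnitudes, retain the current phase estimate.
        spectrum = measured_magnitude * np.exp(1j * np.angle(spectrum))
        field = np.fft.ifft2(spectrum)
    return field

true_object = np.zeros((32, 32))
true_object[12:20, 10:22] = 1.0               # a simple rectangular "object"
support = (true_object > 0).astype(float)     # known sharp-edged region
magnitude = np.abs(np.fft.fft2(true_object))  # the "measurement"
recovered = retrieve_phase(magnitude, support)
```

Each iteration projects the estimate onto the set of fields consistent with the measurement and onto the set consistent with the known edge, so the reconstruction improves step by step, exactly the iterate-to-higher-resolution behavior described above.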


Physical Review Letters, 2013. DOI: 10.1103/PhysRevLett.110.213001
PNAS, 2013. DOI: 10.1073/pnas.1315716110


A Wikipedia for robots allowing them to share knowledge and experience worldwide


European scientists from six institutes and two universities have developed an online platform where robots can learn new skills from each other worldwide — a kind of “Wikipedia for robots.” The objective is to help develop robots that are better at assisting the elderly with care and household tasks. “The problem right now is that robots are often developed specifically for one task,” says René van de Molengraft, TU/e researcher and RoboEarth project leader.

“RoboEarth simply lets robots learn new tasks and situations from each other. All their knowledge and experience are shared worldwide on a central, online database.” In addition, some computing and “thinking” tasks can be carried out by the system’s “cloud engine,” he said, “so the robot doesn’t need to have as much computing or battery power on‑board.”

For example, a robot can image a hospital room and upload the resulting map to RoboEarth. Another robot, which doesn’t know the room, can use that map on RoboEarth to locate a glass of water immediately, without having to search for it endlessly. In the same way a task like opening a box of pills can be shared on RoboEarth, so other robots can also do it without having to be programmed for that specific type of box.
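The sharing pattern in this example boils down to a central keyed store that any robot can write to and read from. A hypothetical sketch of that idea (the class, keys, and map format are invented for illustration; this is not the real RoboEarth API):

```python
# Hypothetical stand-in for a central online knowledge base shared by robots.
class KnowledgeBase:
    """Stores named records (maps, task recipes) uploaded by any robot."""
    def __init__(self):
        self._records = {}

    def upload(self, key, data):
        self._records[key] = data

    def download(self, key):
        return self._records.get(key)  # None if no robot has shared it yet

kb = KnowledgeBase()
# Robot A images the hospital room and shares the resulting map.
kb.upload("hospital_room_42/map", {"glass_of_water": (2.1, 0.8)})
# Robot B, which has never seen the room, reuses the shared map directly.
shared_map = kb.download("hospital_room_42/map")
print(shared_map["glass_of_water"])  # (2.1, 0.8)
```

The point of the design is that robot B skips the expensive mapping step entirely, and heavier "thinking" can likewise be offloaded to the cloud engine rather than done on-board.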

RoboEarth is based on four years of research by a team of scientists from six European research institutes (TU/e, Philips, ETH Zürich, TU München and the universities of Zaragoza and Stuttgart).


Facing the Intelligence Explosion: There is Plenty of Room Above


Why are AIs in movies so often of roughly human-level intelligence? One reason is that we almost always fail to see non-humans as non-human. We anthropomorphize. That’s why aliens and robots in fiction are basically just humans with big eyes or green skin or some special power. Another reason is that it’s hard for a writer to write characters that are smarter than the writer. How exactly would a superintelligent machine solve problem X?

The human capacity for efficient cross-domain optimization is not a natural plateau for intelligence. It’s a narrow, accidental, temporary marker created by evolution due to things like the slow rate of neuronal firing and how large a skull can fit through a primate’s birth canal. Einstein may seem vastly more intelligent than a village idiot, but this difference is dwarfed by the difference between the village idiot and a mouse.

As Vernor Vinge put it: The best answer to the question, “Will computers ever be as smart as humans?” is probably “Yes, but only briefly.”[1] How could an AI surpass human abilities? Let us count the ways:

  • Speed. Our axons carry signals at seventy-five meters per second or slower. A machine can pass signals along about four million times more quickly.
  • Serial depth. The human brain can’t rapidly perform any computation that requires more than one hundred sequential steps; thus, it relies on massively parallel computation.[2] More is possible when both parallel and deep serial computations can be performed.
  • Computational resources. The brain’s size and neuron count are constrained by skull size, metabolism, and other factors. AIs could be built on the scale of buildings or cities or larger. When we can make circuits no smaller, we can just add more of them.
  • Rationality. As we explored earlier, human brains do nothing like optimal belief formation or goal achievement. Machines can be built from the ground up using (computable approximations of) optimal Bayesian decision networks, and indeed this is already a leading paradigm in artificial agent design.
  • Introspective access/editability. We humans have almost no introspective access to our cognitive algorithms, and cannot easily edit and improve them. Machines can already do this (read about EURISKO and metaheuristics). A limited hack like the method of loci greatly improves human memory; machines can do this kind of thing in spades.
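The "four million times" figure in the first bullet is easy to check, assuming machine signals travel at roughly the speed of light:

```python
# Sanity check of the speed comparison in the bullet list above.
AXON_SPEED_M_S = 75.0      # upper end of neural signal speed quoted above
LIGHT_SPEED_M_S = 3.0e8    # approximate signal speed in electronics/optics

speedup = LIGHT_SPEED_M_S / AXON_SPEED_M_S
print(f"{speedup:.0e}")  # 4e+06
```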


[1] Vernor Vinge, “Signs of the Singularity,” IEEE Spectrum, June 2008.

[2] J. A. Feldman and Dana H. Ballard, “Connectionist Models and Their Properties,” Cognitive Science 6, no. 3 (1982): 205–254, doi:10.1207/s15516709cog0603_1.

Steffi Tan's curator insight, March 24, 2015 5:43 AM

Vernor Vinge answered the question, "Will computers ever be as smart as humans?" with the simple sentence: "Yes, but only briefly."


Only for a short period, as technology develops, will it share the same intellectual playing field with humans before it surpasses us and grows exponentially in capability. This again emphasizes why precautions need to be taken if an intelligence explosion were to occur. However, even if everyone agrees on the priority of safety, it only takes a single group blindly walking into such a circumstance to cause problems for everyone.

Scooped by Dr. Stefan Gruenwald!

Nitric Oxide sensing mechanism in plants elucidated for the first time

The elusive trigger that allows plants to 'see' the gas nitric oxide (NO), an important signalling molecule, has been tracked down by scientists at The University of Nottingham.

Plants fine-tune their growth and survival in response to various signals, including internal hormones and external factors such as light or temperature. Nitric oxide gas is one such signal.

Professor Holdsworth said: "In plants, NO regulates many different processes throughout the plant's lifetime from seeds to flowering and responses to the environment. Although the effect of NO on plants has been known for many years, a general mechanism for the initial sensing of this important molecule has remained elusive. We have identified a small number of key proteins, called transcription factors, which act as 'master sensors' to control NO responses throughout the plant life cycle."

Nitric oxide (NO) is an important signaling compound in prokaryotes and eukaryotes. In plants, NO regulates critical developmental transitions and stress responses. Scientists now identified a mechanism for NO sensing that coordinates responses throughout development based on targeted degradation of plant-specific transcriptional regulators, the group VII ethylene response factors (ERFs). They show that the N-end rule pathway of targeted proteolysis targets these proteins for destruction in the presence of NO, and thus establish them as critical regulators of diverse NO-regulated processes, including seed germination, stomatal closure, and hypocotyl elongation. Furthermore, the researchers could define the molecular mechanism for NO control of germination and crosstalk with abscisic acid (ABA) signaling through ERF-regulated expression of ABSCISIC ACID INSENSITIVE5 (ABI5). This important work demonstrates how NO sensing is integrated across multiple physiological processes by direct modulation of transcription factor stability and identifies group VII ERFs as central hubs for the perception of gaseous signals in plants.

Scooped by Dr. Stefan Gruenwald!

Xenobiology: A New Scientific Model Defines Alien Intelligence

Should we ever detect an extraterrestrial civilization, or any kind of alien life for that matter, it's a safe bet they'll look very different from us. They'll also probably think in a way that's completely foreign to what we're used to. Here's how experts believe we might be able to predict what the minds of aliens will be like.

Intelligence has historically been studied by comparing nonhuman cognitive and language abilities with human abilities. Primate-like species, which show human-like anatomy and share our evolutionary lineage, have been the most studied. However, when it comes to animals of non-primate origins, our ability to profile their potential for intelligence remains inadequate.

Historically, our measures for nonhuman intelligence have included a variety of tools:

  • Physical measurements – brain-to-body ratio, brain structure/convolution/neural density, presence of artifacts and physical tools
  • Observational and sensory measurements – sensory signals, complexity of signals, cross-modal abilities, social complexity
  • Data mining – information theory, signal/noise, pattern recognition
  • Experimentation – memory, cognition, language comprehension/use, theory of mind
  • Direct interfaces – one-way and two-way interfaces with primates, dolphins and birds
  • Accidental interactions – human/animal symbiosis, cross-species enculturation

Because humans tend to focus on “human-like” attributes and measures, and scientists are often unwilling to consider other “types” of intelligence that are not equated with the human kind, our ability to profile “types” of intelligence that differ on a variety of scales is weak. Just as biologists stretch their definitions of life to look at extremophiles in unusual conditions, so must we stretch our descriptions of types of minds and begin profiling, rather than equating, other life forms we may encounter.

COMPLEX (COmplexity of Markers for Profiling Life in EXobiology) offers a new approach to profiling a variety of organisms along multiple dimensions: EQ (Encephalization Quotient), CS (Communication Signal complexity), IC (Individual Complexity), SC (Social Complexity) and II (Interspecies Interaction). Because Earth species are found along a variety of continuums, defining an intelligence profile along these different trajectories, rather than comparing them only to human intelligence, may yield a tool for quickly assessing unknown species. Profiling nonhuman species off-world will be both observational and potentially interactive in some way. Using profiles and indicators gleaned from Earth species, together with pattern recognition, modeling and other data-mining techniques, could help jump-start our understanding of other organisms and their potential for certain “types” of intelligence.
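A minimal sketch of how a COMPLEX-style profile might be represented as a data structure. The field names follow the five dimensions just listed; the numeric values are illustrative placeholders, not measured data:

```python
from dataclasses import dataclass

@dataclass
class ComplexProfile:
    """One organism's profile along the five COMPLEX dimensions."""
    species: str
    eq: float  # Encephalization Quotient
    cs: float  # Communication Signal complexity
    ic: float  # Individual Complexity
    sc: float  # Social Complexity
    ii: float  # Interspecies Interaction

# Illustrative placeholder values only -- not measured data.
dolphin = ComplexProfile("bottlenose dolphin", eq=4.1, cs=0.8, ic=0.7, sc=0.9, ii=0.8)
print(dolphin)
```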

Scooped by Dr. Stefan Gruenwald!

WIRED: The Experiment That Forever Changed How We Think About Reality


Is reality blurry, or do we just see it that way? In the early days of quantum mechanics, Einstein and other scientists argued that our theories just weren't strong enough. The uncertainty principle says that you can’t know certain properties of a quantum system at the same time. For example, you can’t simultaneously know the position of a particle and its momentum. But what does that imply about reality? If we could peer behind the curtains of quantum theory, would we find that objects really do have well-defined positions and momenta? Or does the uncertainty principle mean that, at a fundamental level, objects just can’t have a clear position and momentum at the same time? In other words, is the blurriness in our theory, or is it in reality itself?
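The trade-off being described is the Heisenberg relation Δx·Δp ≥ ħ/2: pinning down position forces a minimum spread in momentum. A small numeric sketch; the 0.1 nm confinement scale is just an illustrative choice:

```python
# Heisenberg uncertainty relation: delta_x * delta_p >= hbar / 2.
HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def min_momentum_uncertainty(delta_x_m):
    """Smallest momentum spread (kg*m/s) allowed for a position spread of delta_x_m meters."""
    return HBAR / (2.0 * delta_x_m)

# Confining a particle to ~0.1 nm (roughly atomic scale):
dp = min_momentum_uncertainty(1e-10)
print(f"minimum delta_p = {dp:.3e} kg*m/s")  # minimum delta_p = 5.273e-25 kg*m/s
```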

Scooped by Dr. Stefan Gruenwald!

Crystallography turns 100


A special issue of Nature celebrates the 100th anniversary of the first X-ray diffraction experiments, which marked the birth of modern crystallography. A multimedia feature reviews the impact of crystallography on everything from chemistry to structural biology, and a feature article previews how free-electron lasers will change the field, while two experts compare the lasers to traditional synchrotrons. We also cover the role of women in crystallography, the prospects for structural biologists' careers, and much more.

Since modern crystallography dawned with X-ray diffraction experiments on crystals by Max von Laue in 1912 and William and Lawrence Bragg (a father-and-son team) in 1913, and was recognized by Nobel prizes in physics for von Laue in 1914 and the Braggs in 1915, the discipline has informed almost every branch of the natural sciences.

Scooped by Dr. Stefan Gruenwald!

Cheaper, faster and safer way of converting cells to stem cells, using acid shock


Scientists in Japan showed stem cells can now be made quickly just by dipping blood cells into acid. The latest development, published in the journal Nature, could make the technology cheaper, faster and safer.

The researchers report a unique cellular reprogramming phenomenon, called stimulus-triggered acquisition of pluripotency (STAP), which requires neither nuclear transfer nor the introduction of transcription factors. In STAP, strong external stimuli such as a transient low-pH stressor reprogrammed mammalian somatic cells, resulting in the generation of pluripotent cells. Through real-time imaging of STAP cells derived from purified lymphocytes, as well as gene rearrangement analysis, the scientists found that committed somatic cells give rise to STAP cells by reprogramming rather than selection. STAP cells showed a substantial decrease in DNA methylation in the regulatory regions of pluripotency marker genes. Blastocyst injection showed that STAP cells efficiently contribute to chimaeric embryos and to offspring via germline transmission. They could also demonstrate the derivation of robustly expandable pluripotent cell lines from the obtained STAP cells. Thus, these findings indicate that epigenetic fate determination of mammalian cells can be markedly converted in a context-dependent manner by strong environmental cues.

The finding has been described as "remarkable" by the Medical Research Council's Prof Robin Lovell-Badge and as "a major scientific discovery" by Dr Dusko Ilic, a reader in stem cell science at King's College London.

Dr Ilic added: "The approach is indeed revolutionary.

"It will make a fundamental change in how scientists perceive the interplay of environment and genome." But he added: "It does not bring stem cell-based therapy closer. We will need to use the same precautions for the cells generated in this way as for the cells isolated from embryos or reprogrammed with a standard method."

And Prof Lovell-Badge said: "It is going to be a while before the nature of these cells is understood, and whether they might prove to be useful for developing therapies, but the really intriguing thing to discover will be the mechanism underlying how a low pH shock triggers reprogramming - and why it does not happen when we eat lemons or vinegar or drink cola?"

Scientist accused of falsifying data

Dr. Stefan Gruenwald's insight:

This story is under investigation. It was published in Nature, but the data appear to be partially or completely falsified.

Scooped by Dr. Stefan Gruenwald!

New theory suggests way to teleport energy long distances


A trio of researchers at Tohoku University in Japan, led by Masahiro Hotta, has proposed a new way to teleport energy that allows for doing so over long distances. In their paper published in Physical Review A, the team describes a theory they've developed that takes advantage of the properties of squeezed light or vacuum states to allow for "teleporting" information about an energy state, allowing for making use of that energy—in essence, teleporting energy over long distances.

On television shows such as Star Trek, people are moved from one location to another via teleportation, where the people (or objects) are not literally sent—instead their essence is reestablished in another locale, giving the illusion of movement. In real life, nothing like that exists, though scientists have begun using the term teleportation to describe the results of entanglement experiments—where two entangled particles are joined somehow despite no apparent connection between them. Changes to one particle happen automatically to the other. Scientists have broadened their experiments to include light and matter, and more recently, energy.

Back in 2008, Hotta, with another team, first devised a theory for teleporting energy based on taking advantage of vacuum states—theory suggests they are not truly empty; instead, particles pop in and out of existence in them, some of which are entangled. While interesting, the theory suggested that teleporting energy could only be carried out over very short distances. In this new effort, Hotta et al. have found a way to increase the teleportation distance by making use of a property known as squeezed light, which is tied to a squeezed vacuum state.

The laws of quantum mechanics limit the ways that values in a system (such as a vacuum) can be measured—physicists have found, however, that increasing the uncertainty of one value decreases the uncertainty of others—a sort of squeezing effect. When applied to light, theory suggests, this leads to more photon pairs traveling together through a vacuum, which in turn leads to more of them being entangled—and that, the team suggests, should allow for teleporting energy over virtually any distance.

The researchers suggest their theory could be put to the test in a lab and Hotta hints that he and another partner are in the process of doing just that.

References: Physical Review A and arXiv

Scooped by Dr. Stefan Gruenwald!

Watching electrons move within an atom


Can scientists image the motion of electrons inside atoms? It's a challenging problem, but solving it would help us develop a more complete understanding of things like chemical reactions and the interactions of light and matter. So researchers are using a variety of techniques to probe the internal structure of atoms, seeking to test theories and find new, potentially interesting phenomena.

The latest in this line of work is an effort by chemists Henri J. Suominen and Adam Kirrander, who propose using X-ray lasers to study the electron dynamics in noble gas atoms. In a new paper in Physical Review Letters, they outline how this process should work: exciting the electrons into energy states where they are weakly bound to their atoms, then scattering specially prepared X-ray photons off those atoms. Studying the scatter pattern should allow researchers to reconstruct the electron dynamics in some detail.

While this proposed experiment will not be easy to perform and is sensitive to known issues with X-ray lasers, it could also lead to direct measurements of electron motion inside atoms—a significant accomplishment.

The figure shows a visualization of an argon atom in a Rydberg state (three spheres) at different points in time, after bombardment by X-ray photons. The circular patterns at left are those formed by the X-rays after they scatter. The three images in sequence show the evolution of the atom in space and time.

Rydberg atoms have the electrons in their outer layers excited until the electrons are only weakly bound to the nucleus, making the atoms physically very large. The increased size allows light to scatter off the outermost electrons without much interference from the nucleus or from the inner core of electrons. In other words, it's a way to isolate the electron dynamics from other messy phenomena. Noble gases like argon are particularly useful for this, since they are largely non-reactive chemically and relatively easy to model theoretically.

Electrons in Rydberg states also have much slower reaction times: picoseconds (10⁻¹² s, or trillionths of a second) as opposed to femtoseconds (10⁻¹⁵ s) or less. That is still really short, but a factor-of-1,000 improvement is nothing to sneeze at.

That leads to the second aspect of the proposed experiment: using X-ray lasers, which interact with the electrons on shorter time scales than their reaction times.

X-ray lasers (including the Linac Coherent Light Source [LCLS] at Stanford's SLAC laboratory) are highly tunable, producing any of a variety of wavelengths in controlled bursts of photons. An X-ray laser can capture electron behavior in both space and time, minimally disturbing the atoms in the process. That's in contrast with infrared or other types of light, which can strongly interact with electrons, changing the experimental outcome.

Scooped by Dr. Stefan Gruenwald!

Is it possible? Atoms at a temperature below absolute zero

On the absolute temperature scale, which is used by physicists and is also called the Kelvin scale, it is not possible to go below zero – at least not in the sense of getting colder than zero kelvin.

According to the physical meaning of temperature, the temperature of a gas is determined by the chaotic movement of its particles – the colder the gas, the slower the particles. At zero kelvin (minus 273 degrees Celsius) the particles stop moving and all disorder disappears. Thus, nothing can be colder than absolute zero on the Kelvin scale. Physicists have now created an atomic gas in the laboratory that nonetheless has negative Kelvin values. These negative absolute temperatures have several apparently absurd consequences: although the atoms in the gas attract each other and give rise to a negative pressure, the gas does not collapse – a behavior that is also postulated for dark energy in cosmology.

The meaning of a negative absolute temperature can best be illustrated with rolling spheres in a hilly landscape, where the valleys stand for a low potential energy and the hills for a high one. The faster the spheres move, the higher their kinetic energy as well: if one starts at positive temperatures and increases the total energy of the spheres by heating them up, the spheres will increasingly spread into regions of high energy. If it were possible to heat the spheres to infinite temperature, there would be an equal probability of finding them at any point in the landscape, irrespective of the potential energy. If one could now add even more energy and thereby heat the spheres even further, they would preferably gather at high-energy states and would be even hotter than at infinite temperature. The Boltzmann distribution would be inverted, and the temperature therefore negative. At first sight it may sound strange that a negative absolute temperature is hotter than a positive one. This is simply a consequence of the historic definition of absolute temperature, however; if it were defined differently, this apparent contradiction would not exist.
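The population inversion described above can be made concrete with the Boltzmann weight p_i ∝ exp(−E_i/k_BT). A toy sketch with a bounded three-level spectrum (arbitrary energy units, k_B = 1; the level energies are illustrative), showing that a negative temperature piles the population onto the highest-energy state:

```python
import math

def populations(energies, T):
    """Normalized Boltzmann occupations p_i ~ exp(-E_i / T), in units where k_B = 1."""
    weights = [math.exp(-E / T) for E in energies]
    Z = sum(weights)  # partition function
    return [w / Z for w in weights]

levels = [0.0, 1.0, 2.0]              # bounded spectrum with a top level
positive = populations(levels, 1.0)   # ordinary: ground state most occupied
negative = populations(levels, -1.0)  # negative T: top level most occupied

print(positive[0] > positive[-1], negative[-1] > negative[0])  # True True
```

Without the upper bound on energy, the weights at negative T would diverge, which is the formal version of the "infinite amount of energy" obstacle mentioned above.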

This inversion of the population of energy states is not possible in water or any other natural system as the system would need to absorb an infinite amount of energy – an impossible feat! However, if the particles possess an upper limit for their energy, such as the top of the hill in the potential energy landscape, the situation will be completely different. The researchers in Immanuel Bloch’s and Ulrich Schneider’s research group have now realised such a system of an atomic gas with an upper energy limit in their laboratory, following theoretical proposals by Allard Mosk and Achim Rosch.

In their experiment, the scientists first cool around a hundred thousand atoms in a vacuum chamber to a positive temperature of a few billionths of a Kelvin and capture them in optical traps made of laser beams. The surrounding ultrahigh vacuum guarantees that the atoms are perfectly thermally insulated from the environment. The laser beams create a so-called optical lattice, in which the atoms are arranged regularly at lattice sites. In this lattice, the atoms can still move from site to site via the tunnel effect, yet their kinetic energy has an upper limit and therefore possesses the required upper energy limit. Temperature, however, relates not only to kinetic energy, but to the total energy of the particles, which in this case includes interaction and potential energy. The system of the Munich and Garching researchers also sets a limit to both of these. The physicists then take the atoms to this upper boundary of the total energy – thus realising a negative temperature, at minus a few billionths of a kelvin.

Scooped by Dr. Stefan Gruenwald!

Mollusc nacre shells inspire super-glass 200 times stronger than a standard pane


A team at McGill University in Montreal began their research with a close-up study of natural materials like mollusc shells, bone and nails which are astonishingly resilient despite being made of brittle minerals. The secret lies in the fact that the minerals are bound together into a larger, tougher unit. The binding means the shell contains abundant tiny fault lines called interfaces. Outwardly, this might seem a weakness, but in practice it is a masterful deflector of external pressure.

To take one example, the shiny, inner shell layer of some molluscs, known as nacre or mother of pearl, is some 3,000 times tougher than the minerals it is made of. "Making a material tougher by introducing weak interfaces may seem counter-intuitive, but it appears to be a universal and powerful strategy in natural materials," the paper said. Taking what they learnt, the team used a 3D laser to engrave microscopic fissures into glass slides, filled them with a polymer, and found it made them 200 times tougher.

The glass could absorb impacts better—yielding and bending slightly instead of shattering. "A container made of standard glass will break and shatter if it is dropped on the floor." The engraved glass can "stretch" by almost five percent before snapping—compared to a strain capacity of only 0.1 percent for standard glass.

The stronger glass may find application in bullet-proof windows, glasses, or even smartphone screens. Glass is functional because of its transparency, hardness, resistance to chemicals and durability—but the main drawback is its brittleness.

Scooped by Dr. Stefan Gruenwald!

Common crop pesticides kill honeybee larvae in the bee hive

Four pesticides commonly used on crops to kill insects and fungi also kill honeybee larvae within their hives, according to Penn State and University of Florida researchers.

The team also found that N-methyl-2-pyrrolidone (NMP) -- an inert, or inactive, chemical commonly used as a pesticide additive -- is highly toxic to honeybee larvae.

"We found that four of the pesticides most commonly found in beehives kill bee larvae," said Jim Frazier, professor of entomology, Penn State. "We also found that the negative effects of these pesticides are sometimes greater when the pesticides occur in combinations within the hive. Since pesticide safety is judged almost entirely on adult honeybee sensitivity to individual pesticides and also does not consider mixtures of pesticides, the risk assessment process that the Environmental Protection Agency uses should be changed."

According to Frazier, the team's previous research demonstrated that forager bees bring back to the hive an average of six different pesticides on the pollen they collect. Nurse bees use this pollen to make beebread, which they then feed to honeybee larvae.

To examine the effects of four common pesticides -- fluvalinate, coumaphos, chlorothalonil and chlorpyrifos -- on bee larvae, the researchers reared honeybee larvae in their laboratory. They then applied the pesticides alone and in all combinations to the beebread to determine whether these insecticides and fungicides act alone or in concert to create a toxic environment for honeybee growth and development.
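Testing each pesticide "alone and in all combinations" implies 2⁴ − 1 = 15 treatment groups for four compounds. A quick sketch of that enumeration:

```python
from itertools import combinations

pesticides = ["fluvalinate", "coumaphos", "chlorothalonil", "chlorpyrifos"]

# Every non-empty subset: each pesticide alone plus all combinations.
treatments = [combo
              for r in range(1, len(pesticides) + 1)
              for combo in combinations(pesticides, r)]

print(len(treatments))  # 15
```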

The researchers also investigated the effects of NMP on honeybee larvae by adding seven concentrations of the chemical to a pollen-derived, royal jelly diet. NMP is used to dissolve pesticides into formulations that then allow the active ingredients to spread and penetrate the plant or animal surfaces onto which they are applied. The team fed their treated diet, containing various types and concentrations of chemicals, to the laboratory-raised bee larvae.

"Multi-billion pounds of these inactive ingredients of pesticides can easily overwhelm the total chemical burden from the active pesticide, drug and personal-care ingredients with which they are formulated. Among these co-formulants are surfactants and solvents of known high toxicity to fish, amphibians, honey bees and other non-target organisms. While we have found that NMP contributes to honeybee larvae mortality, the overall role of these inactive ingredients in pollinator decline remains to be determined."

Ines Jurisic's curator insight, January 28, 2014 2:57 PM

It was Einstein who said: “Remove the bee from the earth and at the same stroke you remove at least one hundred thousand plants that will not survive.”

Jocelyn Bassett's curator insight, July 12, 2015 11:38 PM

Pesticides contribute to honeybee larvae mortality.

Scooped by Dr. Stefan Gruenwald!

3-D imaging provides window into living cells, no dye required


University of Illinois researchers have developed a new imaging technique that needs no dyes or other chemicals, yet renders high-resolution, three-dimensional, quantitative imagery of cells and their internal structures using conventional microscopes and white light.

Called white-light diffraction tomography (WDT), the imaging technique opens a window into the life of a cell without disturbing it and could allow cellular biologists unprecedented insight into cellular processes, drug effects and stem cell differentiation.

The team, led by electrical and computer engineering and bioengineering professor Gabriel Popescu, published their results in the journal Nature Photonics. “One main focus of imaging cells is trying to understand how they function, or how they respond to treatments, for example, during cancer therapies,” Popescu said. “If you need to add dyes or contrast agents to study them, this preparation affects the cells’ function itself. It interferes with your study. With our technique, we can see processes as they happen and we don’t obstruct their normal behavior.”

Because it uses white light, WDT can observe cells in their natural state without exposing them to chemicals, ultraviolet radiation, or mechanical forces – the three main methods used in other microscopy techniques. White light also contains a broad spectrum of wavelengths, thus bypassing the interference issues inherent in laser light – speckles, for example.

The 3-D images are a composite of many cross-sectional images, much like an MRI or CT image. The microscope shifts its focus through the depth of the cell, capturing images of various focus planes. Then the computer uses the theoretical model and compiles the images into a coherent three-dimensional rendering.
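A minimal sketch of the composition step just described: per-focal-plane 2-D images stacked into a 3-D volume ordered by depth. The real WDT reconstruction also applies a diffraction model, which this sketch omits; the image sizes are arbitrary, and NumPy is assumed to be available:

```python
import numpy as np

def stack_focal_planes(planes):
    """Stack equally sized 2-D focal-plane images into a (depth, height, width) volume."""
    return np.stack(planes, axis=0)

# Ten synthetic 64x64 "focal planes" standing in for microscope captures.
planes = [np.random.rand(64, 64) for _ in range(10)]
volume = stack_focal_planes(planes)
print(volume.shape)  # (10, 64, 64)
```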

See a video showcasing examples of the 3D images.

“With this imaging we can tell at what scale things within the cell are transported randomly and at what scale processes are actually organized and deterministic,” Popescu said. “At first glance, the dynamics looks pretty messy, but then you look at it – we stare at movies for hours and hours – and you realize it all makes sense. Everything is organized perfectly at certain scales. That’s what makes a cell alive. Randomness is just nature’s way to try new things.”

Next, the researchers hope to pursue cross-disciplinary collaborations to explore applications of WDT in biology as well as expansions of the imaging optics demonstrated in WDT. For example, they are using WDT to watch stem cells as they differentiate in hopes of better understanding how they turn into different cell types. Since stem cells are so sensitive, only a chemical-free, non-invasive, white-light technique such as WDT could be used to study them without adverse effects.

Scooped by Dr. Stefan Gruenwald!

DWave’s updated quantum optimizer gets beaten by a classical computer


New Scientist reports that Matthias Troyer of ETH Zurich in Switzerland has tested a D-Wave Two computer against a conventional, "classical" machine running an optimised algorithm – and they have found no evidence of superior performance in the D-Wave machine.

Quantum computing promises a huge speedup for certain classes of problems, such as factoring numbers into primes. But so far, building a true quantum computer with more than a few bits of processing power has proven an insurmountable hurdle. A company called DWave initially confused matters by announcing that it had developed a quantum computer, but after a bit of back-and-forth, the company has settled on calling its machine a quantum optimizer. It can perform calculations that may rely on quantum effects, but it's not a general quantum computer.

With that settled, the obvious question became whether the quantum optimizer was worth the money—did it actually outperform classical computers for some problems? Some initial results published last year looked promising, as an early production machine outperformed classical computers on a number of tests. But that work came under fire because some of the algorithms run on the classical machine weren't as optimized as they could have been.

Now, a new team of computer scientists has taken DWave's latest creation, a 512-qubit quantum optimizer, and put it through its paces on a single problem. And here, the results are pretty clear: a single classical processor handily beats the DWave machine in most circumstances.

The work was done by a large team that includes people from where the DWave 2 machine is housed (USC), a lone Google employee, and a handful of researchers from other academic institutions. The USC machine is an updated version of the one that ran the previous set of tests; 503 of its qubits are functional, making it significantly more powerful than the previous version. In this case, rather than tackling a variety of problems, the team focused on a single one: resolving what's called a "spin glass," which starts with a collection of individual spins that are randomly oriented and then finds a low-energy state as those spins interact and reorient.

In theory, this is similar to how the DWave machine works, so (at least in a superficial analysis) you might expect the machine to perform well on the problem. Pitted against it is a single-processor classical computer, which gets its answer by simply simulating the same annealing process rather than taking an algorithmic shortcut.

You'd think that simulating a process would be rather inefficient compared to actually running a similar process. But you'd be wrong. If you only consider the time involved in performing the calculations, then DWave does show a considerable advantage, one that starts off rising as the complexity of the problem increases. But at some point, that trend reverses. By the time the problem size is approaching that of the number of bits in DWave's machine, the gains have largely vanished.

And that's only considering the time spent calculating. The DWave machine needs time to be set up to model the problem, and then it needs to expend time on error correction. When the full time involved in performing the calculation is considered, the classical computer outperforms the DWave machine on most problems, often by a wide margin. "We find that while the DW2 is sometimes up to 10× faster in pure annealing time," the authors say, "there are many cases where it is ≥ 100× slower."
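The classical approach described above, simulating the annealing process rather than running it physically, can be sketched in miniature. This toy version anneals a 1-D Ising chain with random ±1 couplings via the Metropolis rule; it illustrates simulated annealing generally, not the optimized algorithm used in the actual benchmark, and all parameters here are illustrative:

```python
import math
import random

random.seed(0)
N = 32
J = [random.choice([-1.0, 1.0]) for _ in range(N - 1)]   # random spin-glass bonds
spins = [random.choice([-1, 1]) for _ in range(N)]       # random starting state

def energy(s):
    """Ising chain energy: E = -sum_i J_i * s_i * s_{i+1}."""
    return -sum(J[i] * s[i] * s[i + 1] for i in range(N - 1))

def delta_e(s, i):
    """Energy change from flipping spin i (only the two neighboring bonds matter)."""
    h = 0.0
    if i > 0:
        h += J[i - 1] * s[i - 1]
    if i < N - 1:
        h += J[i] * s[i + 1]
    return 2.0 * s[i] * h

T = 2.0
while T > 0.01:                      # geometric cooling schedule
    for _ in range(10 * N):
        i = random.randrange(N)
        dE = delta_e(spins, i)
        if dE <= 0 or random.random() < math.exp(-dE / T):
            spins[i] = -spins[i]     # Metropolis acceptance
    T *= 0.95

print(energy(spins))  # at or near the ground-state energy of -(N - 1)
```

An open 1-D chain has no frustration, so slow cooling should satisfy essentially every bond; real spin-glass benchmarks use frustrated 2-D/3-D graphs, where finding the ground state is far harder.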

The researchers readily admit that spin glass isn't the only problem that the DWave machine can solve, and there may be others that it handles better. It's also possible, they recognize, that better error correction could give DWave's quantum optimizer a boost. But it could also be that the optimizer just isn't as good as a classical computer, though a better implementation of this optimization might be.

We may not have to wait too long to find out, as the authors say, "Future studies will probe these alternatives and aim to determine whether one can find a class of problem instances for which an unambiguous speedup over classical hardware can be observed."

Original paper:


Scott Aaronson has written up his take and announced his "second retirement" as "Chief D-Wave Critic," though he expects it to go the way of Michael Corleone's: "Just when I thought I was out... they pull me back in."

DWave, Google, and Lockheed remain optimistic about the usefulness of the machine and about future speedups. Troyer's team ran their tests on a D-Wave Two owned by Lockheed Martin and operated by the University of Southern California in Los Angeles. In certain instances the D-Wave computer was up to 10 times faster at problem solving, but in others it ran at one-hundredth the speed of the classical computer. D-Wave's advantage also tended to disappear when the team added in the time needed to configure the D-Wave Two for the problem, a step that is not necessary on regular PCs.

The findings don't worry Google: "At this stage we're mainly interested in understanding better what limits and what enhances the performance of quantum hardware to inform future hardware designs," says Google spokesman Jason Freidenfelds. He says Google is also more focused on problems with different structures than the one used in Troyer's test, such as machine-learning problems like the Google Glass blink-detection algorithm. Google has also used the machine to improve the machine-learned classification of images, boosting the identification of cars in pictures, work that feeds into its self-driving car project.


How the Rosetta Spacecraft Will Land on a Comet

The two-part Rosetta spacecraft is designed to orbit and land on a comet. See how it will orbit and land on Comet 67P/Churyumov-Gerasimenko in November 2014.

After 10 years in space, the Rosetta spacecraft closes in on its cometary prey. Rosetta will go into orbit near the nucleus of comet 67P/Churyumov-Gerasimenko. The probe is carrying a small lander designed to settle on the comet nucleus, take samples and conduct experiments.

Rosetta is the first mission designed to both orbit a comet and deposit a lander on its surface. Part of Rosetta’s mission is to catalog the elements and molecules that exist in the comet’s dust. A previous sample-return mission to a different comet found particles of organic matter that are the building blocks of life.

The lander was named after Philae Island in the Nile River. A comet nucleus has very low gravity, so the lander relies on harpoons and ice screws to secure itself to the surface.

Rosetta took a winding path through the solar system, performing slingshot maneuvers past the Earth and Mars to use those planets’ gravity for a speed boost. The probe examined two asteroids – Steins and Lutetia – before closing in on its primary prey, the comet known as 67P/Churyumov-Gerasimenko.
