Amazing Science
Amazing science facts - 3D_printing • aging • AI • anthropology • art • astronomy • bigdata • bioinformatics • biology • biotech • chemistry • computers • cosmology • education • environment • evolution • future • genetics • genomics • geosciences • green_energy • history • language • map • material_science • math • med • medicine • microscopy • nanotech • neuroscience • paleontology • photography • photonics • physics • postings • robotics • science • technology • video
Scooped by Dr. Stefan Gruenwald!

High-fidelity photon-to-atom quantum state transfer could form backbone of quantum networks


In a quantum network, information is stored, processed, and transmitted from one device to another in the form of quantum states. The quantum nature of the network gives it certain advantages over classical networks, such as greater security.

One promising method for implementing a quantum network involves using both atoms and photons for their unique advantages. While atoms are useful as nodes (in the form of quantum memories and processors) due to their long storage times, photons are useful as links (on optical fibers) because they're better at carrying quantum information over large distances.

However, using both atoms and photons requires that quantum states be converted between single atoms and single photons. This in turn requires a high degree of control over the emission and absorption processes in which single atoms act as senders and receivers of single photons. Because it's difficult to achieve complete overlap between the atomic and photonic modes, photon-to-atom state transfer usually suffers from low fidelities of below 10%. This means that more than 90% of the time the state transfer is unsuccessful.

In a new paper published in Nature Communications, a team of researchers led by Jürgen Eschner, Professor at Saarland University in Saarbrücken, Germany, has experimentally demonstrated photon-to-atom quantum state transfer with a fidelity of more than 95%. This drastic improvement marks an important step toward realizing future large-scale quantum networks.

The researchers' protocol consists of transferring the polarization state of a laser photon onto the ground state of a trapped calcium ion. To do this, the researchers prepared the calcium ion in a quantum superposition state, in which it simultaneously occupies two atomic levels. When the ion absorbs a photon emitted by a laser at an 854-nm wavelength, the photon's polarization state gets mapped onto the ion. Upon absorbing the photon, the ion returns to its ground state and emits a single photon at a 393-nm wavelength. Detection of this 393-nm photon signifies a successful photon-to-atom quantum state transfer.

The researchers showed that this method achieves very high fidelities of 95-97% using a variety of atomic states and both linear and circular polarizations. The method also has a relatively high efficiency of 0.438%. The researchers explain that the large fidelity improvement is due in large part to the last step involving the detection of the 393-nm photon.
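
As a rough feel for what those figures mean in practice, here is a back-of-envelope sketch in Python using only the numbers quoted above (the 0.96 fidelity is simply a representative point in the reported 95-97% range):

```python
# Back-of-envelope numbers from the article: a heralded transfer succeeds with
# probability ~0.438% per incoming photon, and a heralded transfer matches the
# photon's polarization with ~95-97% fidelity.
efficiency = 0.00438   # probability that the 393-nm herald photon is detected
fidelity = 0.96        # representative value in the reported 95-97% range

mean_attempts = 1 / efficiency          # average photons needed per successful transfer
print(f"~{mean_attempts:.0f} photons per heralded transfer")

# Probability that at least one of N photons heralds a successful transfer
N = 500
p_success = 1 - (1 - efficiency) ** N
print(f"P(at least one herald within {N} photons) = {p_success:.2f}")
print(f"error probability of a heralded state: {1 - fidelity:.2f}")
```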

Scooped by Dr. Stefan Gruenwald!

For $25 a year, Google will keep a copy of any genome in the cloud


Google is approaching hospitals and universities with a new pitch. Have genomes? Store them with us. The search giant’s first product for the DNA age is Google Genomics, a cloud computing service it launched last March that went mostly unnoticed amid a barrage of high-profile R&D announcements from Google, like one late last month about a far-fetched plan to battle cancer with nanoparticles (see “Can Google Use Nanoparticles to Search for Cancer?”).

Google Genomics could prove more significant than any of these moonshots. Connecting and comparing genomes by the thousands, and soon by the millions, is what’s going to propel medical discoveries for the next decade. The question of who will store the data is already a point of growing competition between Amazon, Google, IBM, and Microsoft.

Google began work on Google Genomics 18 months ago, meeting with scientists and building an interface, or API, that lets them move DNA data into its server farms and do experiments there using the same database technology that indexes the Web and tracks billions of Internet users.

“We saw biologists moving from studying one genome at a time to studying millions,” says David Glazer, the software engineer who led the effort and was previously head of platform engineering for Google+, the social network. “The opportunity is how to apply breakthroughs in data technology to help with this transition.”

Some scientists scoff that genome data remains too complex for Google to help with. But others see a big shift coming. When Atul Butte, a bioinformatics expert at Stanford, heard Google present its plans this year, he remarked that he now understood “how travel agents felt when they saw Expedia.”

The explosion of data is happening as labs adopt new, even faster equipment for decoding DNA. For instance, the Broad Institute in Cambridge, Massachusetts, said that during the month of October it decoded the equivalent of one human genome every 32 minutes. That translated to about 200 terabytes of raw data.
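
A quick back-of-envelope check of those figures, with the $25-per-genome-per-year price from the headline added in; all inputs come from the text, and the per-genome size is only the average they imply:

```python
# Rough arithmetic behind the Broad Institute figures quoted above:
# one genome-equivalent every 32 minutes for a month, ~200 TB of raw data.
minutes_in_october = 31 * 24 * 60
genomes = minutes_in_october / 32
raw_terabytes = 200

print(f"~{genomes:.0f} genome-equivalents in the month")
print(f"~{raw_terabytes / genomes * 1000:.0f} GB of raw data per genome on average")

# At Google's quoted price of $25 per genome per year:
print(f"~${genomes * 25:,.0f} per year to keep one month's output in the cloud")
```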

This flow of data is smaller than what is routinely handled by large Internet companies (over two months, Broad will produce the equivalent of what gets uploaded to YouTube in one day) but it exceeds anything biologists have dealt with. That’s now prompting a wide effort to store and access data at central locations, often commercial ones. The National Cancer Institute said last month that it would pay $19 million to move copies of the 2.6 petabyte Cancer Genome Atlas into the cloud. Copies of the data, from several thousand cancer patients, will reside both at Google Genomics and in Amazon’s data centers.

The idea is to create “cancer genome clouds” where scientists can share information and quickly run virtual experiments as easily as a Web search, says Sheila Reynolds, a research scientist at the Institute for Systems Biology in Seattle. “Not everyone has the ability to download a petabyte of data, or has the computing power to work on it,” she says.

corneja's curator insight, November 27, 2014 7:20 PM

"Our genome in the cloud"... it sounds like the title of a song. Google is offering to keep genome data in the cloud.

Scooped by Dr. Stefan Gruenwald!

Launching in 2015: A Certificate Authority to Encrypt the Entire Web


Today EFF is pleased to announce Let’s Encrypt, a new certificate authority (CA) initiative that we have put together with Mozilla, Cisco, Akamai, IdenTrust, and researchers at the University of Michigan that aims to clear the remaining roadblocks to transition the Web from HTTP to HTTPS.

Although the HTTP protocol has been hugely successful, it is inherently insecure. Whenever you use an HTTP website, you are always vulnerable to problems, including account hijacking and identity theft; surveillance and tracking by governments, companies, and both in concert; injection of malicious scripts into pages; and censorship that targets specific keywords or specific pages on sites. The HTTPS protocol, though it is not yet flawless, is a vast improvement on all of these fronts, and we need to move to a future where every website is HTTPS by default.

With a launch scheduled for summer 2015, the Let’s Encrypt CA will automatically issue and manage free certificates for any website that needs them. Switching a webserver from HTTP to HTTPS with this CA will be as easy as issuing one command, or clicking one button.

The biggest obstacle to HTTPS deployment has been the complexity, bureaucracy, and cost of the certificates that HTTPS requires. We’re all familiar with the warnings and error messages produced by misconfigured certificates. These warnings are a hint that HTTPS (and other uses of TLS/SSL) is dependent on a horrifyingly complex and often structurally dysfunctional bureaucracy for authentication.

The need to obtain, install, and manage certificates from that bureaucracy is the largest reason that sites keep using HTTP instead of HTTPS. In our tests, it typically takes a web developer 1-3 hours to enable encryption for the first time. The Let’s Encrypt project is aiming to fix that by reducing setup time to 20-30 seconds. You can help test and hack on the developer preview of our Let's Encrypt agent software.

Let’s Encrypt will employ a number of new technologies to manage secure automated verification of domains and issuance of certificates. We will use a protocol we’re developing called ACME between web servers and the CA, which includes support for new and stronger forms of domain validation. We will also employ Internet-wide datasets of certificates, such as EFF’s own Decentralized SSL Observatory, the University of Michigan’s, and Google's Certificate Transparency logs, to make higher-security decisions about when a certificate is safe to issue.
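
To make the "one command" promise concrete, here is a toy sketch of the general shape of automated domain validation: the CA hands the applicant a random token, the applicant publishes it at a well-known URL on the domain it claims to control, and the CA checks that what it fetches matches. This is an illustration only, not the actual ACME protocol (which adds account keys, signed requests, and several challenge types), and the class and path names are invented:

```python
# Toy illustration of automated domain validation, the idea behind ACME's
# challenge/response step. Not the real protocol; everything here is simplified.
import secrets

class ToyCA:
    def new_challenge(self, domain: str) -> str:
        self.domain, self.token = domain, secrets.token_urlsafe(16)
        return self.token

    def validate(self, fetch) -> bool:
        # 'fetch' stands in for an HTTP GET against the claimed domain
        url = f"http://{self.domain}/.well-known/challenge/{self.token}"
        return fetch(url) == self.token

class ToyWebServer:
    def __init__(self):
        self.routes = {}
    def publish(self, path: str, body: str):
        self.routes[path] = body
    def fetch(self, url: str) -> str:
        return self.routes.get(url, "")

ca, server = ToyCA(), ToyWebServer()
token = ca.new_challenge("example.org")
server.publish(f"http://example.org/.well-known/challenge/{token}", token)
print("domain validated:", ca.validate(server.fetch))   # -> True
```

If the applicant cannot publish content on the domain, validation fails, which is the property automated issuance relies on.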

The Let’s Encrypt CA will be operated by a new non-profit organization called the Internet Security Research Group (ISRG). EFF helped to put together this initiative with Mozilla and the University of Michigan, and it has been joined for launch by partners including Cisco, Akamai, and IdenTrust.

Scooped by Dr. Stefan Gruenwald!

Google joins the effort to combat overfishing, with Global Fishing Watch

Google has partnered with SkyTruth and Oceana to produce a new tool to track global fishing activity. Known as Global Fishing Watch, the interactive web tool uses satellite data to provide detailed vessel tracking, and aims to harness the power of citizen engagement to tackle the issue of overfishing.

According to the United Nations Food and Agriculture Organization, more than 90 percent of the world’s fisheries are working at peak capacity, with as much as one-third of marine fish stocks now suffering from overfishing.

Though a clear issue, the distant and out-of-sight nature of commercial fishing creates a problem when it comes to accountability. To help combat this, Google has teamed up with marine advocacy group Oceana and mapping company SkyTruth to develop the Global Fishing Watch – a tool that allows anyone with an internet connection access to the timing and position of intensive fishing around the world.

Currently in the prototype stage, the tool makes use of Automatic Identification System (AIS) satellite location data – a tool initially designed to help avoid maritime collisions. The system analyses the movement pattern of each ship to determine whether it is indeed a fishing vessel, before plotting its activity on an interactive map.
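
The article does not spell out the classification logic, so the following is only a deliberately crude heuristic of the kind such movement-pattern analysis builds on (the thresholds and the low-speed rule are illustrative assumptions, not Global Fishing Watch's actual method):

```python
# Crude stand-in for movement-pattern analysis: vessels that spend much of
# their time loitering at low speed are flagged as possibly fishing. Real
# systems combine many more signals (course changes, learned models, etc.).
from dataclasses import dataclass

@dataclass
class AISFix:
    lat: float
    lon: float
    speed_knots: float          # speed over ground reported by the AIS transponder

def likely_fishing(track: list, max_speed: float = 4.5, min_fraction: float = 0.6) -> bool:
    """Heuristic: a large share of low-speed fixes suggests fishing activity."""
    slow = sum(1 for fix in track if fix.speed_knots < max_speed)
    return slow / len(track) >= min_fraction

track = [AISFix(56.1, 3.2, 2.1), AISFix(56.1, 3.3, 3.0),
         AISFix(56.2, 3.3, 2.4), AISFix(56.2, 3.4, 11.8)]
print(likely_fishing(track))    # -> True (3 of the 4 fixes are slow)
```
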
Scooped by Dr. Stefan Gruenwald!

Stanford project brings machine learning right to your browser and will make your tabs a lot smarter


Deep learning is one of the buzziest topics in technology at the moment, and for good reason: This subset of machine learning can unearth all kinds of useful new insights in data and teach computers to do things like understand human speech and see things. It employs the use of artificial neural networks to teach computers things like speech recognition, computer vision, and natural language processing. In the last few years, deep learning has helped forge advances in areas like object perception and machine translation—research topics that have long proven difficult for AI researchers to crack.

Trouble is, deep learning takes a ton of computational power, so its use is limited to companies that have the resources to throw at it. But what if you could achieve this kind of heavy duty artificial intelligence in the browser? That's exactly the aim of a project out of Stanford called ConvNetJS. It's a JavaScript framework that brings deep learning models to the browser without the need for all that computing muscle. In time, it could make your tabs a lot smarter.

"Deep Learning has relatively recently achieved quite a few really nice results on big, important datasets," says Andrej Karpathy, the Standard PhD student behind ConvNetJS. "However, the majority of the available libraries and code base are currently geared primarily towards efficiency." Caffe, a popular convolutional neural network framework used by Flickr for image recognition (among many others) is written in C++ and is slow to compile. "ConvNetJS is designed with different trade-offs in mind," says Karpathy.

So how does this actually play out in the real world? Right now, Karpathy's website points to a couple of basic, live demos: classifying digits and other data using a neural network and using deep learning to dynamically "paint" an image based on any photo you upload. They are admittedly geeky—and not especially practical—examples, but what's important is the computation that's happening on the front end and how that's likely to evolve in the future. One likely usage is the development of browser extensions that run neural networks directly on websites. This could allow for more easily implemented image recognition or tools that can quickly parse and summarize long articles and perform sentiment analysis on their text. As the client-side technology evolves, the list of possibilities for machine learning in the browser will only grow.

Because it's JavaScript-based, the framework can't pull off quite the computational heavy lifting that other tools can, but it nonetheless raises the interesting prospect of bringing machine learning directly into the browser. "The idea is that a website could train a network on their end and then distribute the weights to the client, so the compute all happens on client and not server side, perhaps significantly improving latencies and significantly simplifying the necessary codebase," Karpathy explains. "ConvNetJS is not where I want it to be," admits Karpathy. "I work on it on a side of my PhD studies and with many deadlines it's hard to steal time. But I am slowly working on cleaner API, more docs, and WebGL support. Regardless, I think the cool trend here more generally is the possibility of running (or training) neural nets in the browser. Making our tabs smarter."
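
A minimal sketch of that "train on the server, ship the weights to the client" pattern, using a toy logistic-regression model in place of a real convolutional network; the data and the JSON payload format are invented for illustration:

```python
# Train a tiny model offline, then serialize its weights as JSON so that
# client-side JavaScript (a ConvNetJS-style library, for example) could load
# them and run inference in the browser without any server round-trips.
import json
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)          # toy binary labels

w, b = np.zeros(2), 0.0
for _ in range(500):                                # plain logistic regression via gradient descent
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * float(np.mean(p - y))

payload = json.dumps({"weights": w.tolist(), "bias": b})   # what the client would download
print(payload)
```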

Scooped by Dr. Stefan Gruenwald!

Google to Provide All Funding for Most Prestigious Award in Computing, Making It Worth $1 Million Per Year

The A.M. Turing Award, ACM's most prestigious technical award, is given for major contributions of lasting importance to computing.

ACM announced on November 13, 2014 that the funding level for the ACM A.M. Turing Award is now $1,000,000, to be provided by Google Inc. The new amount is four times its previous level.

Leslie Lamport, a Principal Researcher at Microsoft Research, has been named as the recipient of the 2013 ACM A.M. Turing Award for imposing clear, well-defined coherence on the seemingly chaotic behavior of distributed computing systems, in which several autonomous computers communicate with each other by passing messages. He devised important algorithms and developed formal modeling and verification protocols that improve the quality of real distributed systems. These contributions have resulted in improved correctness, performance, and reliability of computer systems.

View a video by Microsoft Research on Leslie Lamport's work and read his 1978 paper, "Time, Clocks, and the Ordering of Events in a Distributed System," one of the most cited in the history of computer science. In a second video, in his own voice for the June 2014 issue of Communications of the ACM, Lamport asserts that the best logic for stating things clearly is mathematics, a concept, he notes, that some find controversial. Assessing his body of work, he concludes that he created a path that others have followed to places well beyond his imagination.
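
For readers who have not met the paper, its central rule, the Lamport logical clock, fits in a few lines. This is a minimal textbook-style sketch of the rule, not Lamport's own notation: every process increments a counter on each local event, stamps outgoing messages with it, and on receipt jumps its counter past the timestamp it sees, which yields a consistent partial ordering of events.

```python
# Minimal Lamport logical clock.
class LamportClock:
    def __init__(self):
        self.time = 0

    def local_event(self) -> int:
        self.time += 1
        return self.time

    def send(self) -> int:                  # timestamp attached to an outgoing message
        return self.local_event()

    def receive(self, msg_time: int) -> int:
        self.time = max(self.time, msg_time) + 1
        return self.time

a, b = LamportClock(), LamportClock()
a.local_event()                 # a's clock: 1
t = a.send()                    # a's clock: 2, message carries timestamp 2
b.local_event()                 # b's clock: 1
print(b.receive(t))             # b jumps to 3, so the send is ordered before the receive
```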

Scooped by Dr. Stefan Gruenwald!

IBM Watson uses cognitive technologies to help find new sources of oil


Scientists at IBM and Repsol SA, Spain's largest energy company, announced today (Oct. 30) the world’s first research collaboration using cognitive technologies like IBM’s Watson to jointly develop and apply new tools to make it cheaper and easier to find new oil fields.

An engineer will typically have to manually read through an enormous set of journal papers and baseline reports with models of reservoir, well, facilities, production, export, and seismic imaging data.

IBM says its cognitive technologies could help by analyzing hundreds of thousands of papers, prioritizing data, and linking that data to the specific decision at hand. It will introduce “new real-time factors to be considered, such as current news events around economic instability, political unrest, and natural disasters.”

The oil and gas industry boasts some of the most advanced geological, geophysical and chemical science in the world. But the challenge is to integrate critical geopolitical, economic, and other global news into decisions. And that will require a whole new approach to computing that can speed access to business insights, enhance strategic decision-making, and drive productivity, IBM says.

This goes beyond the capabilities of Watson. But scientists at IBM’s Cognitive Environments Laboratory (CEL), collaborating with Repsol, plan to develop and apply new prototype cognitive tools for real-world use cases in the oil and gas industry. They will experiment with a combination of traditional and new interfaces based upon spoken dialog, gesture, robotics and advanced visualization and navigation techniques.

The objective is to build conceptual and geological models, highlight the impact of the potential risks and uncertainty, visualize trade-offs, and explore what-if scenarios to ensure the best decision is made, IBM says.

Repsol is making an initial investment of $15 million to $20 million to develop two applications targeted for next year, Repsol’s director for exploration and production technology Santiago Quesada explained to Bloomberg Business Week. “One app will be used for oil exploration and the other to help determine the most attractive oil and gas assets to buy.”

Scooped by Dr. Stefan Gruenwald!

Faster switching helps ferroelectrics become viable replacement for transistors


Ferroelectric materials – commonly used in transit cards, gas grill igniters, video game memory and more – could become strong candidates for use in next-generation computers, thanks to new research led by scientists at the University of California, Berkeley, and the University of Pennsylvania.

The researchers found an easy way to improve the performance of ferroelectric materials in a way that makes them viable candidates for low-power computing and electronics. They described their work in a study published today (Sunday, Oct. 26) in the journal Nature Materials.

Ferroelectric materials have spontaneous polarization as a result of small shifts of negative and positive charges within the material. A key characteristic of these materials is that the polarization can be reversed in response to an electric field, enabling the creation of a “0” or “1” data bit for memory applications. Ferroelectrics can also produce an electric charge in response to physical force, such as being pressed, squeezed or stretched, which is why they are found in applications such as push-button igniters on portable gas grills.

“What we discovered was a fundamentally new and unexpected way for these ferroelectric materials to respond to applied electric fields,” said study principal investigator Lane Martin, UC Berkeley associate professor of materials science and engineering. “Our discovery opens up the possibility for faster switching and new control over novel, never-before-expected multi-state devices.”

Martin and other UC Berkeley researchers partnered with a team led by Andrew Rappe, University of Pennsylvania professor of chemistry and of materials science and engineering. UC Berkeley graduate student Ruijuan Xu led the study’s experimental design, and Penn graduate student Shi Liu led the study’s theoretical modeling.

Scientists have turned to ferroelectrics as an alternative form of data storage and memory because the material holds a number of advantages over conventional semiconductors. For example, anyone who has ever lost unsaved computer data after power is unexpectedly interrupted knows that today’s transistors need electricity to maintain their “on” or “off” state in an electronic circuit.

Because ferroelectrics are non-volatile, they can remain in one polarized state or another without power. This ability of ferroelectric materials to store memory without continuous power makes them useful for transit cards, such as the Clipper cards used to pay fare in the Bay Area, and in certain memory cards for consumer electronics. If used in next-generation computers, ferroelectrics would enable the retention of information so that data would be there if electricity goes out and then is restored.

“If we could integrate these materials into the next generation of computers, people wouldn’t lose their data if the power goes off,” said Martin, who is also a faculty scientist at the Lawrence Berkeley National Laboratory. “For an individual, losing unsaved work is an inconvenience, but for large companies like eBay, Google and Amazon, losing data is a significant loss of revenue.”

So what has held ferroelectrics back from wider use as on/off switches in integrated circuits? The answer is speed, according to the study authors.

Scooped by Dr. Stefan Gruenwald!

Australian teams set new records for silicon quantum computing


Two research teams working in the same laboratories at UNSW Australia have found distinct solutions to a critical challenge that has held back the realization of super powerful quantum computers.

The teams created two types of quantum bits, or "qubits" – the building blocks for quantum computers – that each process quantum data with an accuracy above 99%. The two findings have been published simultaneously today in the journal Nature Nanotechnology.

"For quantum computing to become a reality we need to operate the bits with very low error rates," says Scientia Professor Andrew Dzurak, who is Director of the Australian National Fabrication Facility at UNSW, where the devices were made.

"We've now come up with two parallel pathways for building a quantum computer in silicon, each of which shows this super accuracy," adds Associate Professor Andrea Morello from UNSW's School of Electrical Engineering and Telecommunications.

The UNSW teams, which are also affiliated with the ARC Centre of Excellence for Quantum Computation & Communication Technology, were first in the world to demonstrate single-atom spin qubits in silicon, reported in Nature in 2012 and 2013.

Now the team led by Dzurak has discovered a way to create an "artificial atom" qubit with a device remarkably similar to the silicon transistors used in consumer electronics, known as MOSFETs. Post-doctoral researcher Menno Veldhorst, lead author on the paper reporting the artificial atom qubit, says, "It is really amazing that we can make such an accurate qubit using pretty much the same devices as we have in our laptops and phones".

Meanwhile, Morello's team has been pushing the "natural" phosphorus atom qubit to the extremes of performance. Dr Juha Muhonen, a post-doctoral researcher and lead author on the natural atom qubit paper, notes: "The phosphorus atom contains in fact two qubits: the electron, and the nucleus. With the nucleus in particular, we have achieved accuracy close to 99.99%. That means only one error for every 10,000 quantum operations."

Dzurak explains that, "even though methods to correct errors do exist, their effectiveness is only guaranteed if the errors occur less than 1% of the time. Our experiments are among the first in solid-state, and the first-ever in silicon, to fulfill this requirement."
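
A quick worked example of what those percentages mean over a sequence of operations, assuming for simplicity that errors are independent and occur at a constant rate:

```python
# Error rate per operation and the chance of an error-free run of n operations.
def error_free_probability(fidelity: float, n_ops: int) -> float:
    return fidelity ** n_ops

for fidelity in (0.99, 0.9999):      # the ~1% threshold level vs. the nuclear-spin qubit
    print(f"fidelity {fidelity}: about 1 error per {1 / (1 - fidelity):.0f} operations, "
          f"P(no error in 100 ops) = {error_free_probability(fidelity, 100):.2f}")
```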

The high-accuracy operations for both natural and artificial atom qubits are achieved by placing each inside a thin layer of specially purified silicon, containing only the silicon-28 isotope. This isotope is perfectly non-magnetic and, unlike those in naturally occurring silicon, does not disturb the quantum bit. The purified silicon was provided through collaboration with Professor Kohei Itoh from Keio University in Japan.

Scooped by Dr. Stefan Gruenwald!

The Internet Of Things Could Become A Big Trend in 2015


Goldman Sachs report on the Internet of things:

The Internet of Things (IoT) is emerging as the third wave in the development of the Internet. The 1990s’ fixed Internet wave connected 1 billion users while the 2000s’ mobile wave connected another 2 billion.

The IoT has the potential to connect 10X as many (28 billion) “things” to the Internet by 2020, ranging from bracelets to cars.

Breakthroughs in the cost of sensors, processing power and bandwidth to connect devices are enabling ubiquitous connections right now. Early simple products like fitness trackers and thermostats are already gaining traction.

Lots of room to participate ...
Personal lives, workplace productivity and consumption will all change. Plus there will be a string of new businesses, from those that will expand the Internet “pipes”, to those that will analyze the reams of data, to those that will make new things we have not even thought of yet.

Benchmarking the future: early adopters
We see five key early verticals of adoption: Wearables, Cars, Homes, Cities, and Industrials.

Test cases for what the IoT can achieve 
Focus is on new products and sources of revenue and new ways to achieve cost efficiencies that can drive sustainable competitive advantages. 

Key to watch out for 

Privacy and security concerns. A likely source of friction on the path to adoption. 

Focus, Enablers, Platforms, & Industrials

The IoT building blocks will come from those that can web-enable devices, provide common platforms on which they can communicate, and develop new applications to capture new users.

Enablers and Platforms

We see increased share for Wi-Fi, sensors and low-cost microcontrollers. Focus is on software applications for managing communications between devices, middleware, storage, and data analytics.


Home automation is at the forefront of the early product opportunity, while factory floor optimization may lead the efficiency side.

75 billion. That's the potential size of the Internet of Things sector, which could become a multi-trillion dollar market by the end of the decade.

That's a very big number of devices, as extrapolated from a Cisco report that details how many devices will be connected to the Internet of Things by 2020. That's 9.4 devices for every one of the 8 billion people expected to be around in seven years.

To help put that into more perspective, Cisco also came out with the number of devices it thinks were connected to the Internet back in 2012, a number Cisco's Rob Soderbery placed at 8.7 billion. Most of the devices at the time, he acknowledged, were the PCs, laptops, tablets and phones in the world. But other types of devices will soon dominate the collection of the Internet of Things, such as sensors and actuators.

By the end of the decade, a nearly nine-fold increase in the volume of devices on the Internet of Things will mean a lot of infrastructure investment and market opportunities will be available in this sector. And by "a lot," I mean ginormous. In an interview with Barron's, Cisco CEO John Chambers figures that will translate to a $14-trillion industry.
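
A quick sanity check of the device arithmetic quoted above, using only the figures given in the text:

```python
devices_2020 = 75e9       # Cisco-derived projection cited above
people_2020 = 8e9
devices_2012 = 8.7e9      # Cisco's estimate for 2012

print(f"{devices_2020 / people_2020:.1f} devices per person")     # ~9.4
print(f"{devices_2020 / devices_2012:.1f}x growth from 2012")     # ~8.6, the 'nearly nine-fold increase'
```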

See also: Cisco Hearts Internet Of Things

Scooped by Dr. Stefan Gruenwald!

Stephen Wolfram: Introducing Tweet-a-Program


Wouldn't it be great if you could just call up a supercomputer and ask it to do your data-wrangling for you? Actually, scratch that, no-one uses the phone anymore. What'd be really cool is if machines could respond to your queries straight from Twitter. It's a belief that's shared by Wolfram Research, which has just launched the Tweet-a-Program system for its computational knowledge engine, Wolfram Alpha. In a blog post, founder Stephen Wolfram explains that even complex queries can be executed within the space of 140 characters, including data visualizations.

In the Wolfram Language a little code can go a long way, and Wolfram wants to use that fact to let everyone have some fun with the introduction of Tweet-a-Program: compose a tweet-length Wolfram Language program and tweet it to @WolframTaP. The Twitter bot will run your program in the Wolfram Cloud and tweet the result back to you. One can do a lot with Wolfram Language programs that fit in a tweet. It’s easy to make interesting patterns or even complicated fractals. Putting in some math makes it easy to get all sorts of elaborate structures and patterns.

The Wolfram Language not only knows how to compute π, as well as a zillion other algorithms; it also has a huge amount of built-in knowledge about the real world. So right in the language, you can talk about movies or countries or chemicals or whatever. And here’s a 78-character program that makes a collage of the flags of Europe, sized according to country population. There are many, many kinds of real-world knowledge built into the Wolfram Language, including some pretty obscure ones. The Wolfram Language does really well with words and text and deals with images too.

Martin (Marty) Smith's curator insight, September 24, 2014 8:40 PM

Now THIS is the coolest thing I read today. Ever see the great movie 3 Days of the Condor, when Robert Redford calls the computer and, via a series of commands, traces a phone number? That was COOL. This "tweet-a-program" to control a super computer with 140 characters is awesome.

Rescooped by Dr. Stefan Gruenwald from Innovative Marketing and Crowdfunding!

Cryogenic on-chip quantum electron cooling leads towards computers that consume 10x less power


Researchers at UT Arlington have created the first electronic device that can cool electrons to -228 degrees Celsius (-375F), without any kind of external cooling. The chip itself remains at room temperature, while a quantum well within the device cools the electrons down to cryogenic temperatures. Why is this exciting? Because thermal excitation (heat) is by far the biggest problem when it comes to creating both high-performance and ultra-low-power computers. These cryogenic, quantum well-cooled electrons could allow for the creation of electronic devices that consume 10 times less energy than current devices, according to the researchers.

What, you may ask, is a quantum well? In essence, a quantum well is a very narrow gap between two semiconducting materials. Electrons are happily bouncing along the piece of semiconductor when they hit the gap (the well). Only electrons that have very specific characteristics can cross the boundary. In this case, only electrons with very low energy (i.e. cold electrons) are allowed to pass, while hot electrons are sent back from whence they came. The well is created by sandwiching a narrow-bandgap semiconductor between two semiconductors with a wider bandgap – it’s basically the quantum equivalent of the neck between the two bulbs of an hourglass.
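
A toy numerical cartoon of that energy-filtering idea, not a model of the actual device physics: draw a thermal spread of electron energies at room temperature (assumed exponential for simplicity), keep only the lowest-energy electrons, and the transmitted population ends up with a far lower mean energy, i.e. a far lower effective temperature.

```python
import numpy as np

k_B = 8.617e-5                         # Boltzmann constant in eV/K
T_room = 300.0
rng = np.random.default_rng(1)

energies = rng.exponential(scale=k_B * T_room, size=1_000_000)   # thermal energy spread
cutoff = 0.1 * k_B * T_room                                      # the "well" passes only cold electrons
transmitted = energies[energies < cutoff]

T_eff = transmitted.mean() / k_B        # crude effective temperature of the survivors
print(f"effective temperature of transmitted electrons: ~{T_eff:.0f} K, "
      f"from {len(transmitted)} of {len(energies)} electrons")
```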

Via Marty Koenig
Scooped by Dr. Stefan Gruenwald!

New study revisits Miller-Urey experiment at the quantum level with the aid of computers

For the first time, researchers have reproduced the results of the Miller-Urey experiment in a computer simulation, yielding new insight into the effect of electricity on the formation of life's building blocks at the quantum level.

In 1953, American chemist Stanley Miller had famously electrified a mixture of simple gas and water to simulate lightning and the atmosphere of early Earth. The revolutionary experiment—which yielded a brownish soup of amino acids—offered a simple potential scenario for the origin of life's building blocks. Miller's work gave birth to modern research on pre-biotic chemistry and the origins of life.

For the past 60 years, scientists have investigated other possible energy sources for the formation of life's building blocks, including ultra violet light, meteorite impacts, and deep sea hydrothermal vents.

In this new study, Antonino Marco Saitta, of the Université Pierre et Marie Curie, Sorbonne, in Paris, France and his colleagues wanted to revisit Miller's result with electric fields, but from a quantum perspective.

Saitta and study co-author Franz Saija, two theoretical physicists, had recently applied a new quantum model to study the effects of electric fields on water, which had never been done before. After coming across a documentary on Miller's work, they wondered whether the quantum approach might work for the famous spark-discharge experiment.

The method would also allow them to follow individual atoms and molecules through space and time—and perhaps yield new insight into the role of electricity in Miller's work.

"The spirit of our work was to show that the electric field is part of it," Saitta said, "without necessarily involving lightning or a spark."  Another key insight from their study is that the formation of some of life's building blocks may have occurred on mineral surfaces, since most have strong natural electric fields.

"The electric field of mineral surfaces can be easily 10 or 20 times stronger than the one in our study," Saitta said. "The problem is that it only acts on a very short range. So to feel the effects, molecules would have to be very close to the surface." "I think that this work is of great significance," said François Guyot, a geochemist at the French Museum of Natural History.

"Regarding the mineral surfaces, strong electric fields undoubtedly exist at their immediate proximity. And because of their strong role on the reactivity of organic molecules, they might enhance the formation of more complex molecules by a mechanism distinct from the geometrical concentration of reactive species, a mechanisms often proposed when mineral surfaces are invoked for explaining the formation of the first biomolecules."

One of the leading hypotheses in the field of life's origin suggests that important prebiotic reactions may have occurred on mineral surfaces. But so far scientists don't fully understand the mechanism behind it.

"Nobody has really looked at electric fields on mineral surfaces," Saitta said. "My feeling is that there's probably something to explore there."

Scooped by Dr. Stefan Gruenwald!

Ancient Computer Even More Ancient Than Previously Thought


The astonishing Antikythera mechanism is even older than previously suspected, new research suggests. Instead of being "1500 years ahead of its time," it may have been closer to 1800.

The mechanism was found in 1901 in the wreck of a ship that sank in the Aegean Sea around 60 BC. Though its origins are unknown, it could be used to calculate astronomical motion, making it a sort of forerunner to computers.

The sheer sophistication of the device makes it mysterious, being more advanced than any known instrument of its day – or for centuries thereafter. Even with parts missing after spending such a long time in the briny deep, examination has shown it to have at least 30 gears. This is perhaps why for many, it represents the pinnacle of technology of the ancient world and what was lost during the Dark Ages.

If devices such as this had survived, Kepler might have found the task of explaining the orbits of the planets far easier to achieve. Although the makers likely would not have understood why the moon slowed down and sped up in its orbit, they were sufficiently aware of the phenomenon. In fact, the mechanism mimics it precisely. One of the mechanism's functions was to predict eclipses, and a study of these dials indicates it was operating on a calendar starting from 205 BC.

Estimates of the mechanism's date of manufacture have gradually been pushed back, starting with the year in which it sank. The device was housed in a box, which has engravings dated to 80 to 90 BC, but the lettering appears consistent with a date of 100 to 150 BC.

However, in the Archive for History of Exact Sciences, Dr. Christian Carman of Argentina's National University of Quilmes and Dr. James Evans of the University of Puget Sound believe they have identified the solar eclipse that occurs in the 13th month of the mechanism's calendar. If so, this would make its start date, when the dials are set to zero, May 205 BC.

Scooped by Dr. Stefan Gruenwald!

One Man's Quest to Build the First Mind-Warping 4-D Videogame


THERE'S A ROW of books on a shelf in Marc ten Bosch's living room that contains a crash course in higher dimensions. Titles like Flatland. Einstein, Picasso: Space, Time and the Beauty That Causes Havoc. The Fourth Dimension and Non-Euclidean Geometry in Modern Art. A young-adult novel called The Boy Who Reversed Himself. They're all devoted to helping our brains break out of the three dimensions in which we exist, to aid our understanding of a universe that extends beyond our perception.

This is not just a hypothetical pursuit. Most of us think of time as the fourth dimension, but modern physics theorizes that there is a fourth spatial dimension as well—not width, height, or length but something else that we can't experience through our physical senses. From this fourth dimension, we would be able to see every angle of the three-dimensional world at once, much as we three-dimensional beings can take in the entirety of a two-dimensional plane. Mathematician Bernhard Riemann came up with the concept in the 19th century, and physicists, artists, and philosophers have struggled with it ever since. Writers from Wilde to Proust, Dostoevsky to Conrad invoked the fourth dimension in their work. H. G. Wells' Invisible Man disappeared by discovering a way to travel along it. Cubism was in part an attempt by Picasso and others to visualize what fourth-dimensional creatures might see.

Still, most of us are no closer to fundamentally comprehending the fourth dimension than we were when Riemann first conceived it. People have written papers, drawn diagrams, taken psychedelics, but what we really want to do is witness it. Mathematician Rudy Rucker wrote that he had spent 15 years trying to imagine 4-D space and been granted for his labors “perhaps 15 minutes' worth of direct vision” of it.

But for the past five years, ten Bosch has been trying to take us directly into it, in the form of a videogame called Miegakure. The game, essentially a series of puzzles, augments the usual arsenal of in-game movement by allowing the player's avatar, with the press of a button, to travel along the fourth spatial dimension. Building something so ambitious has consumed ten Bosch's life. Chris Hecker, a friend and fellow game designer, marvels that ten Bosch “can't even see the game he's making.” Ten Bosch, who is 30, describes his daily schedule as “wake up, work on the game, go get lunch somewhere, work on the game, go to sleep.” Even after toiling for half a decade, he is still only about 75 percent done.

But among the tight-knit community of indie game developers, Miegakure is a hotly anticipated title. The select few who have played it have showered it with praise. Ten Bosch has twice been invited to preview it at the prestigious Experimental Gameplay Workshop at the annual Game Developers Conference in San Francisco. He won the “amazing game” award at IndieCade, the biggest annual showcase of independent games.

The interactions in Miegakure are basic: You can move the character, you can make him jump, you can press a button to enter one of the Torii gates (most of which lead to a puzzle). And you can press another button to travel along the unseeable fourth dimension. When you press it, the world appears to morph and fold in on itself, revealing colored slices to walk on. These slices look like parallel worlds; they're even visually distinct so that players can distinguish them as separate realms. One looks like desert, another like grass, another like ice. Walking onto each slice and then pressing the button seems to transport you into each new universe.

But here's the thing: They're not new universes. They're 3-D cross-sections—“hyperslices,” maybe?—of a 4-D shape. The “morph” button, which appears to make the world around you swirl and the objects within it disappear, does not in fact move your character even a millimeter. You're not teleporting. You're just changing perspective—except you're not looking left or right, not up or down or forward or back. You're looking into the unseeable fourth dimension and only then traveling along it.

Over time, the game nudges you toward an understanding of this by including 3-D objects that move in more than one “universe” when your character pushes them. You find maps that help to illustrate how the spaces intersect. And soon you're performing the miracles that mathematicians say a 4-D being could perform in three-dimensional space: walking through walls, making blocks seem to float in the air, disappearing and reappearing, and interlocking two seemingly impenetrable rings. The math is solid—every shape in the game is defined by four coordinates instead of three—but just as when an illusionist performs that same ring trick, it feels like magic.
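
The "hyperslice" picture can be made concrete with a few lines of arithmetic. The sketch below slices the simplest 4-D shape to reason about, a 4-D ball defined by x² + y² + z² + w² ≤ r², at different positions along the unseen w axis; Miegakure's actual geometry is of course far richer:

```python
# What a 3-D observer sees of a 4-D ball of radius r is an ordinary sphere
# whose radius depends on where along the hidden w axis the slice is taken.
import math

def slice_radius(r: float, w: float) -> float:
    """Radius of the 3-D cross-section of a 4-D ball of radius r at coordinate w."""
    return math.sqrt(r * r - w * w) if abs(w) < r else 0.0

for w in (0.0, 0.5, 0.9, 1.1):
    print(f"w = {w:4.1f}: visible sphere radius = {slice_radius(1.0, w):.2f}")
# The object shrinks and finally vanishes as you move along w, even though
# nothing in the 3-D scene itself moved: loosely the way objects can appear
# and disappear when the player changes the w-slice in a 4-D puzzle.
```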

Scooped by Dr. Stefan Gruenwald!

AI breaking ground: building a natural description of images


People can summarize a complex scene in a few words without thinking twice. It’s much more difficult for computers. But we’ve just gotten a bit closer -- we’ve developed a machine-learning system that can automatically produce captions (like the three above) to accurately describe images the first time it sees them. This kind of system could eventually help visually impaired people understand pictures, provide alternate text for images in parts of the world where mobile connections are slow, and make it easier for everyone to search on Google for images.

Recent research has greatly improved object detection, classification, and labeling. But accurately describing a complex scene requires a deeper representation of what’s going on in the scene, capturing how the various objects relate to one another and translating it all into natural-sounding language.

Many efforts to construct computer-generated natural descriptions of images propose combining current state-of-the-art techniques in both computer vision and natural language processing to form a complete image description approach. But what if we instead merged recent computer vision and language models into a single jointly trained system, taking an image and directly producing a human readable sequence of words to describe it?

This idea comes from recent advances in machine translation between languages, where a Recurrent Neural Network (RNN) transforms, say, a French sentence into a vector representation, and a second RNN uses that vector representation to generate a target sentence in German.

Now, what if we replaced that first RNN and its input words with a deep Convolutional Neural Network (CNN) trained to classify objects in images? Normally, the CNN’s last layer is used in a final Softmax among known classes of objects, assigning a probability that each object might be in the image. But if we remove that final layer, we can instead feed the CNN’s rich encoding of the image into a RNN designed to produce phrases. We can then train the whole system directly on images and their captions, so it maximizes the likelihood that descriptions it produces best match the training descriptions for each image.
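
A structural sketch of that encoder-to-decoder wiring, with random untrained weights and a made-up six-word vocabulary. It illustrates only the data flow (image features initialize a recurrent decoder that emits one word at a time); a real system would use a trained CNN for the features and learn every matrix below from image-caption pairs:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["<start>", "a", "dog", "on", "grass", "<end>"]
V, H, F = len(vocab), 16, 32           # vocab size, hidden size, feature size

W_init  = rng.normal(0, 0.1, (H, F))   # maps image features -> initial hidden state
W_embed = rng.normal(0, 0.1, (H, V))   # word embeddings
W_hh    = rng.normal(0, 0.1, (H, H))   # recurrent weights
W_out   = rng.normal(0, 0.1, (V, H))   # hidden state -> word scores

features = rng.normal(size=F)          # stand-in for the CNN's penultimate-layer output
h = np.tanh(W_init @ features)
word = vocab.index("<start>")

caption = []
for _ in range(10):                    # greedy decoding
    h = np.tanh(W_embed[:, word] + W_hh @ h)
    word = int(np.argmax(W_out @ h))
    if vocab[word] == "<end>":
        break
    caption.append(vocab[word])
print(" ".join(caption))               # gibberish until trained, but the data flow is the point
```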

Natural Language Careers's curator insight, November 19, 2014 8:53 AM

Google making progress towards automatic captioning.  Cool stuff.

Scooped by Dr. Stefan Gruenwald!

IBM developing 150-petaflops supercomputer for national labs


IBM recently announced that the U.S. Department of Energy has awarded IBM contracts valued at $325 million to develop and deliver “the world’s most advanced ‘data-centric’ supercomputing systems” at Lawrence Livermore and Oak Ridge National Laboratories to advance innovation and discovery in science, engineering and national security.

The world is generating more than 2.5 billion gigabytes of “big data” every day, according to IBM’s 2013 annual report, requiring entirely new approaches to supercomputing. Repeatedly moving data back and forth from storage to processor is unsustainable with the onslaught of Big Data because of the significant amount of time and energy that massive and frequent data movement entails, IBM says, so the emphasis on faster microprocessors becomes progressively more untenable because the computing infrastructure is dominated by data movement and data management.

To address this issue, for the past five years IBM researchers have pioneered a new “data centric” approach — an architecture that embeds compute power everywhere data resides in the system, allowing for a convergence of analytics, modeling, visualization, and simulation, and driving new insights at “incredible” speeds.

IBM says the two Laboratories anticipate that the new IBM OpenPOWER-based supercomputers will be among the “fastest and most energy-efficient” systems, thanks to this data-centric approach. The systems at each laboratory are expected to offer five to 10 times better performance on commercial and high-performance computing applications compared to the current systems at the labs, and will be more than five times more energy efficient.

The “Sierra” supercomputer at Lawrence Livermore and “Summit” supercomputer at Oak Ridge will each have a peak performance of more than 150 petaflops (compared to today’s fastest supercomputer, China’s Tianhe-2, with 33.86 petaflops) with more than five petabytes of dynamic and flash memory to help accelerate the performance of data-centric applications. The systems will also be capable of moving data to the processor, when necessary, at more than 17 petabytes per second (which is equivalent to moving over 100 billion photos on Facebook in a second) to speed time to insights.
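
A quick look at how those headline numbers relate, using only the figures in the text; the per-photo size is simply the average implied by the comparison:

```python
sierra_pflops, tianhe2_pflops = 150, 33.86
print(f"{sierra_pflops / tianhe2_pflops:.1f}x Tianhe-2's peak performance")

data_rate_bytes_s = 17e15      # "more than 17 petabytes per second"
photos = 100e9                 # "over 100 billion photos ... in a second"
print(f"implies an average of ~{data_rate_bytes_s / photos / 1e3:.0f} KB per photo")
```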

Scooped by Dr. Stefan Gruenwald!

Self-assembling DNA molecules can make future electronic devices even smaller


Leonid Gurevich, Associate Professor at Aalborg University’s Department of Physics and Nanotechnology, has been involved in the successful experiments conducted by an international research consortium after years of focused collaboration to make molecular electronics a possible replacement for traditional solutions in our ever-smaller devices. In order to provide sufficient processing power in new computers, mobile phones and tablets, the industry is already producing chips today with billions of transistors squeezed into less than one square centimeter: We are now talking about transistors that are less than a thousand atoms across and fabricated with an accuracy of just tens of atoms. And that is just one of the reasons we are rapidly approaching the fundamental limit of conventional semiconductor electronics. A way out could be to mimic nature’s approach — use custom-designed molecules and let them self-assemble into functional devices, explains Leonid Gurevich.

The computer chip in the new iPhone 6 is a good example of what the electronics industry has achieved with constant miniaturization. The chip has two billion transistors and is produced with a resolution of 20 nanometers – just 0.00002 millimeters. Chip manufacturers are even on their way to producing chips with just 14 nanometers resolution. But miniaturization with conventional semiconductor electronics is approaching its fundamental limits, and molecular electronics can be the next step: The idea of molecules replacing electronic circuitry originated back in the 1970s, but it hasn’t left the laboratory yet and development in this field remains largely limited to very short molecules or molecular layers, while conclusive results on long molecules have yet to be obtained. With this paper we establish that charge transport through long molecules is possible, we describe a way to measure single molecules, and we identify the mechanism of charge transport in one of the most promising conductive molecules – G4 DNA. What is certain is that this finding will reinvigorate the field of molecular electronics, in particular DNA electronics, says Leonid Gurevich.

Of course, we cannot expect that DNA will replace silicon in our phones and computers tomorrow. The transition to DNA-based devices or molecular electronics in general will represent a paradigm shift in the way we design, assemble and program electronic devices today. It will be a long journey and we have many questions that we need to answer before DNA electronics becomes a reality. So it’s still too early to say when they will be part of our everyday life. But with sufficient funding, I believe the future looks bright, says Leonid Gurevich.

Scooped by Dr. Stefan Gruenwald!

Data smashing: Uncovering lurking order in underlying data


From recognizing speech to identifying unusual stars, new discoveries often begin with comparison of data streams to find connections and spot outliers. But simply feeding raw data into a data-analysis algorithm is unlikely to produce meaningful results, say the authors of a new Cornell study. That’s because most data comparison algorithms today have one major weakness: somewhere, they rely on a human expert to specify what aspects of the data are relevant for comparison, and what aspects aren’t.

But these experts can’t keep up with the growing amounts and complexities of big data. So the Cornell computing researchers have come up with a new principle they call “data smashing” for estimating the similarities between streams of arbitrary data without human intervention, and even without access to the data sources.

Data smashing is based on a new way to compare data streams. The process involves two steps.

  1. The data streams are algorithmically “smashed” to “annihilate” the information in each other.
  2. The process measures what information remains after the collision. The more information remains, the less likely the streams originated in the same source.

Data-smashing principles could open the door to understanding increasingly complex observations, especially when experts don’t know what to look for, according to the researchers. The researchers— Hod Lipson, associate professor of mechanical engineering and of computing and information science, and Ishanu Chattopadhyay, a former postdoctoral associate with Lipson now at the University of Chicago — demonstrated this idea with data from real-world problems, including detection of anomalous cardiac activity from heart recordings and classification of astronomical objects from raw photometry.

In all cases and without access to original domain knowledge, the researchers demonstrated that the performance of these general algorithms was on par with the accuracy of specialized algorithms and heuristics tweaked by experts to work.
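
As a rough illustration of comparing raw streams without hand-built features, here is a toy dissimilarity measure based on symbol-transition statistics. To be clear, this is not the authors' annihilation-based data-smashing algorithm, only a stand-in that shows streams from the same source scoring as more similar than streams from different sources:

```python
from collections import Counter
import random

def transition_probs(stream):
    """Empirical probabilities of consecutive symbol pairs."""
    counts = Counter(zip(stream, stream[1:]))
    total = sum(counts.values())
    return {pair: c / total for pair, c in counts.items()}

def dissimilarity(s1, s2):
    """Total-variation distance between the two streams' transition statistics."""
    p, q = transition_probs(s1), transition_probs(s2)
    return sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in set(p) | set(q)) / 2

random.seed(0)
biased  = lambda n: [random.choice("AAB") for _ in range(n)]   # source 1: A twice as likely as B
uniform = lambda n: [random.choice("AB")  for _ in range(n)]   # source 2: A and B equally likely

print(dissimilarity(biased(5000), biased(5000)))    # small: same source
print(dissimilarity(biased(5000), uniform(5000)))   # larger: different sources
```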

Scooped by Dr. Stefan Gruenwald!

255 Terabits/s: Researchers demonstrate record data transmission over new type of fiber


Researchers at Eindhoven University of Technology (TU/e) in the Netherlands and the University of Central Florida (CREOL), report in the journal Nature Photonics the successful transmission of a record high 255 Terabits/s over a new type of fiber allowing 21 times more bandwidth than currently available in communication networks. This new type of fiber could be an answer to mitigating the impending optical transmission capacity crunch caused by the increasing bandwidth demand.

Due to the popularity of Internet services and emerging network of capacity-hungry datacentres, demand for telecommunication bandwidth is expected to continue at an exponential rate. To transmit more information through current optical glass fibers, an option is to increase the power of the signals to overcome the losses inherent in the glass from which the fibre is manufactured. However, this produces unwanted photonic nonlinear effects, which limit the amount of information that can be recovered after transmission over the standard fiber.

The team at TU/e and CREOL, led by Dr. Chigo Okonkwo, an assistant professor in the Electro-Optical Communications (ECO) research group at TU/e, and Dr. Rodrigo Amezcua Correa, a research assistant professor in micro-structured fibers at CREOL, demonstrates the potential of a new class of fiber to increase transmission capacity and mitigate the impending 'capacity crunch' in their article that appeared yesterday in the online edition of the journal Nature Photonics.

The new fiber has seven different cores through which the light can travel, instead of one in current state-of-the-art fibers. This compares to going from a one-way road to a seven-lane highway. Also, they introduce two additional orthogonal dimensions for data transportation – as if three cars can drive on top of each other in the same lane. Combining those two methods, they achieve a gross transmission throughput of 255 Terabits/s over the fiber link. This is more than 20 times the current standard of 4-8 Terabits/s.
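
A back-of-envelope feel for the quoted rates, ignoring protocol overhead:

```python
new_rate_tbps, standard_tbps = 255, 8     # figures quoted in the text
petabyte_bits = 8e15                      # one petabyte expressed in bits

print(f"{petabyte_bits / (new_rate_tbps * 1e12):.0f} s to move 1 PB at {new_rate_tbps} Tb/s")
print(f"{petabyte_bits / (standard_tbps * 1e12):.0f} s on an {standard_tbps} Tb/s system")
```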

Scooped by Dr. Stefan Gruenwald!

Google tests waters for potential ultra-fast wireless service


 Google Inc is preparing to test new technology that may provide the foundation for a wireless version of its high-speed "Fiber" Internet service, according to telecommunication experts who scrutinized the company's regulatory filings.

In a public but little-noticed application with the U.S. Federal Communications Commission on Monday, Google asked the agency for permission to conduct tests in California across different wireless spectrums, including a rarely-used millimeter-wave frequency capable of transmitting large amounts of data.

It is unclear from the heavily redacted filing what exactly Google intends to do, but it does signal the Internet giant's broader ambition of controlling Internet connectivity. The technology it seeks to test could form the basis of a wireless connection that can be broadcast to homes, obviating the need for an actual ground cable or fiber connection, experts say.

By beaming Internet services directly into homes, Google would open a new path now thoroughly dominated by Verizon, AT&T, Comcast and other entrenched cable and broadband providers. It could potentially offer a quicker and cheaper way to deliver high-speed Internet service, a potential threat to the cable-telecoms oligopoly, experts said.

“From a radio standpoint it’s the closest thing to fiber there is,” said Stephen Crowley, a wireless engineer and consultant who monitors FCC filings, noting that millimeter frequencies can transmit data over short distances at speeds of several gigabits per second.

“You could look at it as a possible wireless extension of their Google Fiber wireless network, as a way to more economically serve homes. Put up a pole in a neighborhood, instead of having to run fiber to each home,” said Crowley.
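The short reach Crowley alludes to follows from free-space path loss, which grows with frequency. A minimal sketch using the standard Friis formula (the 2.4 GHz and 60 GHz bands below are illustrative choices, not frequencies named in Google's filing):

import math

def fspl_db(freq_hz, dist_m):
    # Free-space path loss (Friis formula) in decibels.
    c = 3e8  # speed of light, m/s
    return (20 * math.log10(dist_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / c))

# Compare the loss over a 100 m hop at a familiar Wi-Fi band and a millimeter-wave band.
for f in (2.4e9, 60e9):
    print(f"{f / 1e9:5.1f} GHz over 100 m: {fspl_db(f, 100):.1f} dB")
# Roughly 80 dB vs. 108 dB: that ~28 dB gap is one reason millimeter-wave links
# favor short, line-of-sight hops such as pole-to-home connections.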

Craig Barratt, the head of the Google Access and Energy division leading the effort to offer high-speed fiber networks in Kansas City and other locations, signed off as the authorized person submitting Google's FCC application.

The world’s No.1 Internet search engine has expanded into providing consumers with services such as Internet access. The company said it wants to roll out its high-speed Internet service to more than 30 U.S. cities, and in 2013 it struck a deal to provide free wireless Internet access to 7,000 Starbucks cafes across America.

No comment yet.
Scooped by Dr. Stefan Gruenwald!

IBM opens a new era of computing with brain-like chip: 4096 cores, 1 million neurons, 5.4 billion transistors

IBM opens a new era of computing with brain-like chip: 4096 cores, 1 million neurons, 5.4 billion transistors | Amazing Science |

Scientists at IBM Research have created by far the most advanced neuromorphic (brain-like) computer chip to date. The chip, called TrueNorth, consists of 1 million programmable neurons and 256 million programmable synapses across 4096 individual neurosynaptic cores. Built on Samsung’s 28nm process and with a monstrous transistor count of 5.4 billion, this is one of the largest and most advanced computer chips ever made. Perhaps most importantly, though, TrueNorth is incredibly efficient: The chip consumes just 72 milliwatts at max load, which equates to around 400 billion synaptic operations per second per watt — or about 176,000 times more efficient than a modern CPU running the same brain-like workload, or 769 times more efficient than other state-of-the-art neuromorphic approaches. Yes, IBM is now a big step closer to building a brain on a chip.
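Taking the quoted figures at face value, a quick back-of-envelope calculation (a sketch based only on the numbers above, not on independent measurements) shows what they imply in absolute terms:

# Rough arithmetic from the figures quoted above.
sops_per_watt = 400e9   # claimed synaptic operations per second per watt
power_w = 72e-3         # 72 mW at maximum load
synapses = 256e6        # programmable synapses on the chip

total_sops = sops_per_watt * power_w       # ~2.9e10 synaptic operations per second
avg_per_synapse = total_sops / synapses    # ~113 operations per second per synapse

print(f"~{total_sops:.1e} synaptic ops/s, ~{avg_per_synapse:.0f} ops/s per synapse")

That average per-synapse rate lands in the same tens-to-hundreds-of-hertz range as the biological firing rates discussed below.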

The animal brain (which includes the human brain, of course) is by far the most efficient computer in the known universe. The human brain has a “clock speed” (neuron firing rate) measured in tens of hertz and a total power consumption of around 20 watts. A modern silicon chip, despite having features almost as small as biological neurons and synapses, can consume thousands or even millions of times more energy to perform the same task as a human brain. As we move toward more advanced areas of computing, such as artificial general intelligence and big-data analysis, areas in which IBM happens to be deeply involved, it would really help to have a silicon chip capable of brain-like efficiency.

No comment yet.
Scooped by Dr. Stefan Gruenwald!

Ultra-fast ‘phase-change materials’ could lead to 1,000-times-faster computers

Ultra-fast ‘phase-change materials’ could lead to 1,000-times-faster computers | Amazing Science |

New ultra-fast “phase-change materials” (PCMs) that could replace silicon and eventually enable processing speeds 500 to 1,000 times faster than the average laptop computer today, while using less energy, have been modeled and tested by researchers from the University of Cambridge, the Singapore A*STAR Data-Storage Institute, and the Singapore University of Technology and Design.

PCMs are capable of reversibly switching between two structural phases with different electrical states — one crystalline and conducting and the other glassy and insulating — in billionths of a second, increasing the number of calculations per second.

Also, logic operations and memory are co-located, rather than separated, as they are in silicon-based computers (causing interconnect delays and slowing down computation speed), and PCM devices can function down to about two nanometers (compared to the current smallest logic and memory devices based on silicon, which are about 20 nanometers in size). The researchers have also demonstrated that multiple parallel calculations are possible for PCM logic/memory devices.

Achieving record switching speed

The researchers used a new type of PCM, based on a chalcogenide glass, that goes further than earlier materials: it can be melted and recrystallized in as little as 900 picoseconds (a picosecond is a trillionth of a second) using appropriate voltage pulses.

PCM devices recently demonstrated to perform in-memory logic do have shortcomings: they do not perform calculations at the same speeds as silicon, and the starting amorphous phase lacks stability.

However, the Cambridge and Singapore researchers found that, by performing the logic-operation process in reverse — starting from the crystalline phase and then melting the PCMs in the cells to perform the logic operations — the materials are both much more stable and capable of performing operations much faster.
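As a loose conceptual illustration of that "crystalline first" idea (a toy model only: the resistance values and melt threshold below are invented, and the paper's actual logic scheme is more involved), a PCM cell can be pictured as a two-state resistor that a strong pulse flips from the conducting to the insulating phase:

# Toy two-state model of a phase-change memory cell.
# All numbers are illustrative placeholders, not measured device values.
MELT_PULSE_V = 1.0       # hypothetical pulse amplitude needed to amorphize the cell
R_CRYSTALLINE = 1e4      # ohms: conducting, crystalline phase
R_AMORPHOUS = 1e7        # ohms: insulating, glassy phase

class PCMCell:
    def __init__(self):
        # "Logic in reverse": start in the stable crystalline phase.
        self.phase = "crystalline"

    def apply_pulse(self, volts):
        # A sufficiently strong, short pulse melt-quenches the cell into the
        # glassy phase; weaker pulses leave it unchanged.
        if volts >= MELT_PULSE_V:
            self.phase = "amorphous"

    def read_bit(self):
        # The stored bit is read back as resistance: conducting = 1, insulating = 0.
        resistance = R_CRYSTALLINE if self.phase == "crystalline" else R_AMORPHOUS
        return 1 if resistance < 1e6 else 0

cell = PCMCell()
print(cell.read_bit())   # 1: crystalline starting state
cell.apply_pulse(1.2)    # in the real material this switch takes ~900 picoseconds
print(cell.read_bit())   # 0: amorphous after the melt pulse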

The results are published in the journal Proceedings of the National Academy of Sciences.

No comment yet.
Scooped by Dr. Stefan Gruenwald!

Intel putting 3D scanners in consumer tablets next year, phones to follow

Intel putting 3D scanners in consumer tablets next year, phones to follow | Amazing Science |

Intel has been working on a 3D scanner small enough to fit in the bezel of even the thinnest tablets. The company aims to have the technology in tablets from 2015, with CEO Brian Krzanich telling the crowd at MakerCon in New York on Thursday that he hopes to put the technology in phones as well.

"Our goal is to just have a tablet that you can go out and buy that has this capability," Krzanich said. "Eventually within two or three years I want to be able to put it on a phone."

Krzanich and a few of his colleagues demonstrated the technology, which goes by the name "RealSense," on stage using a human model and an assistant who simply circled the model a few times while pointing a tablet at the subject. A full 3D rendering of the model slowly appeared on the screen behind the stage in just a few minutes. The resulting 3D models can be manipulated with software or sent to a 3D printer.
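Under the hood, a depth camera like this produces a per-pixel depth map that software back-projects into a 3D point cloud and then registers and fuses across views as the user circles the subject. Below is a minimal back-projection sketch using a generic pinhole camera model (the intrinsics fx, fy, cx, cy are placeholders, not RealSense calibration values):

import numpy as np

def depth_to_point_cloud(depth_m, fx, fy, cx, cy):
    # Back-project a depth image (in meters) into camera-space 3D points.
    # Real scanning pipelines then align and fuse many such clouds into one mesh.
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]   # drop pixels with no depth reading

# Example with a synthetic 4x4 depth frame and made-up intrinsics.
cloud = depth_to_point_cloud(np.full((4, 4), 1.5), fx=300.0, fy=300.0, cx=2.0, cy=2.0)
print(cloud.shape)   # (16, 3)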

"The idea is you go out, you see something you like and you just capture it," Krzanich explained. He said consumer tablets with built in 3D scanners will hit the market in the third or fourth quarter of 2015, with Intel also working on putting the 3D scanning cameras on drones.

The predecessor to the 3D-scanning tablets demonstrated on stage was announced earlier this month: the Dell Venue 8 7000 series Android tablet, which sports Intel's RealSense snapshot depth camera, bringing light-field-camera-like capabilities to a tablet. It will be available later this year.

No comment yet.
Scooped by Dr. Stefan Gruenwald!

How a quantum computer could defeat a classical computer

How a quantum computer could defeat a classical computer | Amazing Science |

The first definitive defeat for a classical computer by a quantum computer could one day be achieved with a quantum device that runs an algorithm known as “boson sampling,” recently developed by researchers at MIT.

Boson sampling uses single photons of light and optical circuits to draw samples from an exponentially large probability distribution, a task strongly believed to be intractable for classical computers.
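Concretely, for photons entering and leaving an interferometer in distinct modes, the probability of detecting a given output pattern is the squared magnitude of a matrix permanent, a quantity that takes exponential time to compute classically. Here is a minimal simulation for three photons (the mode counts, the random unitary, and the input/output patterns are illustrative choices, not the parameters of the Bristol experiment):

import numpy as np
from itertools import permutations

def permanent(M):
    # Brute-force matrix permanent; exponential in n, fine for a toy example.
    n = M.shape[0]
    return sum(np.prod([M[i, p[i]] for i in range(n)]) for p in permutations(range(n)))

def random_unitary(m, seed=0):
    # Random m x m unitary (QR of a complex Gaussian matrix), standing in
    # for the linear-optical interferometer.
    rng = np.random.default_rng(seed)
    z = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

m, n = 6, 3              # 6 optical modes, 3 single photons
U = random_unitary(m)
ins = [0, 1, 2]          # input modes carrying one photon each
outs = [1, 3, 5]         # one possible collision-free output pattern

# For collision-free outcomes the probability is |Perm(U_sub)|^2, where U_sub
# keeps the rows of the output modes and the columns of the input modes.
U_sub = U[np.ix_(outs, ins)]
print(f"P(photons exit in modes {outs}) = {abs(permanent(U_sub)) ** 2:.4f}")

Scaling the photon number into the dozens is where classical simulation becomes infeasible, which is exactly why generating that many single photons matters.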

The snag: how to generate the dozens of single photons needed to run the algorithm.

Now researchers at the Centre for Quantum Photonics (CQP) at the University of Bristol with collaborators from the University of Queensland (UQ) and Imperial College London say they have discovered how.

“We realized we could chain together many standard two-photon sources in such a way as to give a dramatic boost to the number of photons generated,” said CQP research leader Anthony Laing, a research fellow in the University of Bristol’s School of Physics.

Details of the research are in a paper published in Physical Review Letters.


No comment yet.