Amazing Science
Amazing science facts - 3D_printing • aging • AI • anthropology • art • astronomy • bigdata • bioinformatics • biology • biotech • chemistry • computers • cosmology • education • environment • evolution • future • genetics • genomics • geosciences • green_energy • history • language • map • material_science • math • med • medicine • microscopy • nanotech • neuroscience • paleontology • photography • photonics • physics • postings • robotics • science • technology • video
Scooped by Dr. Stefan Gruenwald
Researchers create a first frequency comb of time-bin entangled qubits

An international team of researchers has built a chip that generates multiple frequencies from a robust quantum system that produces time-bin entangled photons. In contrast to other quantum state realizations, entangled photons don't need bulky equipment to keep them in their quantum state, and they can transmit quantum information across long distances. The new device creates entangled photons that span the traditional telecommunications spectrum, making it appealing for multi-channel quantum communication and more powerful quantum computers.

 

"The advantages of our chip are that it's compact and cheap. It's also unique that it operates on multiple channels," said Michael Kues of the Institut National de la Recherche Scientifique (INRS), University of Quebec, Canada. The researchers will present their results at the Conference on Lasers and Electro-Optics (CLEO), held June 5-10 in San Jose, California.

 

The basis of quantum communications and computing lies in qubits, the quantum equivalent of classical bits. Instead of representing a one or a zero, qubits can exhibit an unusual property called superposition to represent both numbers simultaneously.

 

In order to take full advantage of superposition to perform difficult calculations or send information securely, another weird quantum mechanical property called entanglement enters the picture. Entanglement was famously called "spooky action at a distance" by Albert Einstein. It links particles so that measurements on one instantaneously affect the other.
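Both ideas can be written down concretely as state vectors. A minimal numpy sketch (a generic two-qubit illustration, not a model of the photonic chip):

```python
import numpy as np

# Single-qubit basis states |0> and |1>
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Superposition: (|0> + |1>)/sqrt(2) -- equal chance of measuring 0 or 1
plus = (ket0 + ket1) / np.sqrt(2)

# Entangled Bell state: (|00> + |11>)/sqrt(2)
bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)

# Measurement probabilities over the outcomes 00, 01, 10, 11:
# only the correlated outcomes 00 and 11 ever occur, so measuring
# one qubit immediately fixes the other.
probs = bell ** 2
```

Here `probs` comes out as [0.5, 0, 0, 0.5]: each qubit alone is random, but the pair is perfectly correlated, which is the "spooky" link Einstein objected to.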

 

Kues and his colleagues used photons to realize their qubits and entangled them by sending two short laser pulses through an interferometer, a device that directs light beams along different paths and then recombines them, to generate double pulses.

 

To generate multiple frequencies, Kues and his colleagues sent the pulses through a tiny ring called a microring resonator. The resonator generates photon pairs on a series of discrete frequencies using spontaneous four-wave mixing, thus creating a frequency comb.

 

The interferometer the team used has one long arm and one short arm, and when a single photon comes out of the system, it is in a superposition of time states, as if it traveled through both the long arm and the short arm simultaneously. Time-bin entanglement is a particularly robust form of photon entanglement. Photons can also have their polarization entangled, but waveguides and other types of optical equipment may alter polarization states.
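The time-bin picture can be sketched numerically. Assuming ideal 50/50 splitters and a matched analyzing interferometer (illustrative parameters, not the team's device), the phase between the early and late components controls the interference seen when the bins are recombined:

```python
import numpy as np

# Time-bin qubit: amplitudes for the photon having taken the short arm
# (|early>) or the long arm (|late>), with relative phase phi.
def time_bin_state(phi):
    return np.array([1.0, np.exp(1j * phi)]) / np.sqrt(2)

# Recombining the bins in a matched interferometer projects onto
# (|early> + |late>)/sqrt(2); the detection probability oscillates with phi.
def detection_prob(phi):
    analyzer = np.array([1.0, 1.0]) / np.sqrt(2)
    return np.abs(analyzer @ time_bin_state(phi)) ** 2
```

`detection_prob(0)` is 1 and `detection_prob(np.pi)` is 0: full interference contrast, the kind of signature used to verify time-bin entanglement.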

 

Other research groups have generated time-bin entangled photons, but Kues and his colleagues are the first to create photons with multiple frequencies using the same chip. This feature can enable multiplexed and multi-channel quantum communications and increased quantum computation information capacity. Kues notes that the chip could improve quantum key distribution, a process that lets two parties share a secret key to encrypt messages with theoretically unbreakable security. It could also serve as a component of a future quantum computer.


Europe plans giant billion-euro quantum technologies project


The European Commission has quietly announced plans to launch a €1-billion (US$1.13 billion) project to boost a raft of quantum technologies — from secure communication networks to ultra-precise gravity sensors and clocks. 

 

The initiative, to launch in 2018, will be similar in size, timescale and ambition to two existing European flagships, the decade-long Graphene Flagship and the Human Brain Project, although the exact format has yet to be decided, Nathalie Vandystadt, a commission spokesperson, told Nature. Funding will come from a mixture of sources, including the commission, as well as other European and national funders, she added.

 

The commission is likely to have a “substantial role” in funding the flagship, says Tommaso Calarco, who leads the Integrated Quantum Science and Technology centre at the Universities of Ulm and Stuttgart in Germany. He co-authored a blueprint behind the initiative, which was published in March, called the Quantum Manifesto. Countries around the world are investing in these technologies, says Calarco. Without such an initiative, Europe risks becoming a second-tier player, he says. “The time is really now or never.”


Approaching electronic DNA circuits: Making precise graphene pattern with DNA


DNA’s unique structure is ideal for carrying genetic information, but scientists have recently found ways to exploit this versatile molecule for other purposes: By controlling DNA sequences, they can manipulate the molecule to form many different nanoscale shapes.

 

Chemical and molecular engineers at MIT and Harvard University have now expanded this approach by using folded DNA to control the nanostructure of inorganic materials. After building DNA nanostructures of various shapes, they used the molecules as templates to create nanoscale patterns on sheets of graphene. This could be an important step toward large-scale production of electronic chips made of graphene, a one-atom-thick sheet of carbon with unique electronic properties.

“This gives us a chemical tool to program shapes and patterns at the nanometer scale, forming electronic circuits, for example,” says Michael Strano, a professor of chemical engineering at MIT and a senior author of a paper describing the technique in the April 9 issue of Nature Communications.

 

Peng Yin, an assistant professor of systems biology at Harvard Medical School and a member of Harvard’s Wyss Institute for Biologically Inspired Engineering, is also a senior author of the paper, and MIT postdoc Zhong Jin is the lead author. Other authors are Harvard postdocs Wei Sun and Yonggang Ke, MIT graduate students Chih-Jen Shih and Geraldine Paulus, and MIT postdocs Qing Hua Wang and Bin Mu.

 

Most of these DNA nanostructures are made using a novel approach developed in Yin’s lab. Complex DNA nanostructures with precisely prescribed shapes are constructed using short synthetic DNA strands called single-stranded tiles. Each of these tiles acts like an interlocking toy brick and binds with four designated neighbors. Using these single-stranded tiles, Yin’s lab has created more than 100 distinct nanoscale shapes, including the full alphabet of capital English letters and many emoticons. These structures are designed using computer software and can be assembled in a simple reaction. Alternatively, such structures can be constructed using an approach called DNA origami, in which many short strands of DNA fold a long strand into a desired shape.
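The interlocking-brick idea can be mimicked in code. In this toy model (invented labels, not the lab's actual DNA sequences), each tile carries four edge labels and binds a neighbor only where facing edges are complementary:

```python
# Toy model of single-stranded tile assembly: each tile has four edge labels
# (N, S, E, W); two tiles can bind only where facing edges are complementary.

def complement(label):
    """Pair each label with its starred complement, and vice versa."""
    return label[:-1] if label.endswith("*") else label + "*"

def tiles_for_grid(rows, cols):
    """Generate a unique tile per grid position with interlocking edges."""
    tiles = {}
    for r in range(rows):
        for c in range(cols):
            tiles[(r, c)] = {
                "N": f"v{r}_{c}",  "S": complement(f"v{r+1}_{c}"),
                "W": f"h{r}_{c}",  "E": complement(f"h{r}_{c+1}"),
            }
    return tiles

def assembles(tiles, rows, cols):
    """Check that every pair of facing edges is complementary."""
    for r in range(rows):
        for c in range(cols):
            if r + 1 < rows and tiles[(r, c)]["S"] != complement(tiles[(r + 1, c)]["N"]):
                return False
            if c + 1 < cols and tiles[(r, c)]["E"] != complement(tiles[(r, c + 1)]["W"]):
                return False
    return True
```

Because every edge label is unique to one position, each tile has exactly one place it fits; that is the sense in which a mixture of tiles can self-assemble into a prescribed shape.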

 

However, DNA tends to degrade when exposed to sunlight or oxygen, and can react with other molecules, so it is not ideal as a long-term building material. “We’d like to exploit the properties of more stable nanomaterials for structural applications or electronics,” Strano says. Instead, he and his colleagues transferred the precise structural information encoded in DNA to sturdier graphene. The chemical process involved is fairly straightforward, Strano says: First, the DNA is anchored onto a graphene surface using a molecule called aminopyrene, which is similar in structure to graphene. The DNA is then coated with small clusters of silver along the surface, which allows a subsequent layer of gold to be deposited on top of the silver.

 

Once the molecule is coated in gold, the stable metallized DNA can be used as a mask for a process called plasma lithography. Oxygen plasma, a very reactive “gas flow” of ionized molecules, is used to wear away any unprotected graphene, leaving behind a graphene structure identical to the original DNA shape. The metallized DNA is then washed away with sodium cyanide.


Quantum computing closer as RMIT drives towards first quantum data bus


RMIT University researchers have trialled a quantum processor capable of routing quantum information between different locations in a critical breakthrough for quantum computing. The work opens a pathway towards the "quantum data bus", a vital component of future quantum technologies.

 

The research team from the Quantum Photonics Laboratory at RMIT in Melbourne, Australia, the Institute for Photonics and Nanotechnologies of the CNR in Italy and the South University of Science and Technology of China has demonstrated for the first time the perfect state transfer of an entangled quantum bit (qubit) on an integrated photonic device.

 

Quantum Photonics Laboratory Director Dr Alberto Peruzzo said after more than a decade of global research in the specialised area, the RMIT results were highly anticipated. "The perfect state transfer has emerged as a promising technique for data routing in large-scale quantum computers," Peruzzo said. "The last 10 years has seen a wealth of theoretical proposals but until now it has never been experimentally realized. "Our device uses highly optimised quantum tunnelling to relocate qubits between distant sites. It's a breakthrough that has the potential to open up quantum computing in the near future."

 

The difference between standard and quantum computing is comparable to the difference between solving a problem over an eternity and solving it in a short time. "Quantum computers promise to solve vital tasks that are currently unmanageable on today's standard computers and the need to delve deeper in this area has motivated a worldwide scientific and engineering effort to develop quantum technologies," Peruzzo said.

 

"It could make the critical difference for discovering new drugs, developing a perfectly secure quantum Internet and even improving facial recognition.'' Peruzzo said a key requirement for any information technology, along with processors and memories, is the ability to relocate data between locations.

 

Full-scale quantum computers will contain millions, if not billions, of quantum bits (qubits), all interconnected, to achieve computational power undreamed of today. While today's microprocessors use data buses that route single bits of information, transferring quantum information is a far greater challenge due to the intrinsic fragility of quantum states.

"Great progress has been made in the past decade, increasing the power and complexity of quantum processors," Peruzzo said.

 

Robert Chapman, an RMIT PhD student working on the experiment, said the protocol they developed could be implemented in large scale quantum computing architectures, where interconnection between qubits will be essential.


Meraculous: Full Genome Alignment With Supercomputers in Mere Minutes

A team of scientists from Berkeley Lab, JGI and UC Berkeley, simplified and sped up genome assembly, reducing a months-long process to mere minutes. This was primarily achieved by “parallelizing” the code to harness the processing power of supercomputers, such as NERSC’s Edison system.

 

Genomes are like the biological owner’s manual for all living things. Cells read DNA instantaneously, getting instructions necessary for an organism to grow, function and reproduce. But for humans, deciphering this “book of life” is significantly more difficult.

 

Nowadays, researchers typically rely on next-generation sequencers to translate the unique sequences of DNA bases (there are only four) into letters: A, G, C and T. While DNA strands can be billions of bases long, these machines produce very short reads, about 50 to 300 characters at a time. To extract meaning from these letters, scientists need to reconstruct portions of the genome—a process akin to rebuilding the sentences and paragraphs of a book from snippets of text.

But this process can quickly become complicated and time-consuming, especially because some genomes are enormous. For example, while the human genome contains about 3 billion bases, the wheat genome contains nearly 17 billion bases and the pine genome contains about 23 billion bases. Sometimes the sequencers will also introduce errors into the dataset, which need to be filtered out. And most of the time, the genomes need to be assembled de novo, or from scratch. Think of it like putting together a ten billion-piece jigsaw puzzle without a complete picture to reference.
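The jigsaw analogy maps directly onto a toy algorithm. This sketch greedily merges the pair of reads with the largest overlap (real assemblers such as Meraculous use k-mer/de Bruijn-graph methods instead, which scale far better and tolerate errors):

```python
# Toy de novo assembly: stitch short "reads" back into a longer sequence by
# repeatedly merging the pair with the largest suffix/prefix overlap.

def overlap(a, b):
    """Length of the longest suffix of a that is a prefix of b."""
    for n in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:n]):
            return n
    return 0

def assemble(reads):
    reads = list(reads)
    while len(reads) > 1:
        n, a, b = max(((overlap(a, b), a, b)
                       for a in reads for b in reads if a is not b),
                      key=lambda t: t[0])
        reads.remove(a)
        reads.remove(b)
        reads.append(a + b[n:])   # merge, keeping the overlap once
    return reads[0]

# Demo: cut a short "genome" into overlapping 8-base reads, then reassemble.
genome = "ATGGCGTGCAATGGCTT"
reads = [genome[i:i + 8] for i in range(0, len(genome) - 7, 3)]
```

With these error-free, well-overlapping reads the greedy loop recovers the original sequence exactly; sequencing errors and repeated regions are what make real de novo assembly so much harder.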

 

By applying some novel algorithms, computational techniques and the innovative programming language Unified Parallel C (UPC) to the cutting-edge de novo genome assembly tool Meraculous, a team of scientists from the Lawrence Berkeley National Laboratory (Berkeley Lab)’s Computational Research Division (CRD), Joint Genome Institute (JGI) and UC Berkeley, simplified and sped up genome assembly, reducing a months-long process to mere minutes. This was primarily achieved by “parallelizing” the code to harness the processing power of supercomputers, such as the National Energy Research Scientific Computing Center’s (NERSC’s) Edison system. Put simply, parallelizing code means splitting up tasks once executed one-by-one and modifying or rewriting the code to run on the many nodes (processor clusters) of a supercomputer all at once.
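The gist of that parallelization can be sketched in a few lines. Here Python threads stand in for UPC processes on supercomputer nodes (an illustration of the split-count-merge pattern only, not the Meraculous code):

```python
# Minimal split-count-merge sketch of parallel k-mer analysis:
# divide reads among workers, count k-mers independently, merge the counts.
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

K = 4  # k-mer length (real assemblers use much larger k)

def count_kmers(reads):
    counts = Counter()
    for r in reads:
        for i in range(len(r) - K + 1):
            counts[r[i:i + K]] += 1
    return counts

def parallel_count(reads, workers=4):
    chunks = [reads[i::workers] for i in range(workers)]  # round-robin split
    with ThreadPoolExecutor(workers) as pool:
        partials = pool.map(count_kmers, chunks)
    total = Counter()
    for partial in partials:
        total.update(partial)
    return total
```

Because each chunk is counted independently, the work scales out across workers; the merged result is identical to a serial count of all reads.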

 

“Using the parallelized version of Meraculous, we can now assemble the entire human genome in about eight minutes using 15,360 computer processor cores. With this tool, we estimate that the output from the world’s biomedical sequencing capacity could be assembled using just a portion of NERSC’s Edison supercomputer,” says Evangelos Georganas, a UC Berkeley graduate student who led the effort to parallelize Meraculous. He is also the lead author of a paper published and presented at the SC Conference in November 2014.  

 

“This work has dramatically improved the speed of genome assembly,” says Leonid Oliker, a computer scientist in CRD. “The new parallel algorithms enable assembly calculations to be performed rapidly, with near linear scaling over thousands of cores. Now genomics researchers can assemble large genomes like wheat and pine in minutes instead of months using several hundred nodes on NERSC’s Edison.”

Rescooped by Dr. Stefan Gruenwald from Fragments of Science

Laser technique promises super-fast and super-secure quantum cryptography

A new method of implementing an 'unbreakable' quantum cryptographic system is able to transmit information at rates more than ten times faster than previous attempts.

 

Researchers have developed a new method to overcome one of the main issues in implementing a quantum cryptography system, raising the prospect of a useable 'unbreakable' method for sending sensitive information hidden inside particles of light. By 'seeding' one laser beam inside another, the researchers, from the University of Cambridge and Toshiba Research Europe, have demonstrated that it is possible to distribute encryption keys at rates between two and six orders of magnitude higher than earlier attempts at a real-world quantum cryptography system. The results are reported in the journal Nature Photonics.

 

Encryption is a vital part of modern life, enabling sensitive information to be shared securely. In conventional cryptography, the sender and receiver of a particular piece of information decide the encryption code, or key, up front, so that only those with the key can decrypt the information. But as computers get faster and more powerful, encryption codes get easier to break.

 

Quantum cryptography promises 'unbreakable' security by hiding information in particles of light, or photons, emitted from lasers. In this form of cryptography, quantum mechanics is used to randomly generate a key. The sender, normally designated Alice, sends the key via photons polarised in different directions. The receiver, normally designated Bob, uses photon detectors to measure the direction in which each photon is polarised, and the detectors translate the photons into bits which, assuming Bob has used the correct photon detectors in the correct order, give him the key.
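The exchange described here is essentially the BB84 protocol. A simplified simulation (idealized: no channel loss, no eavesdropper, random basis choices):

```python
# BB84-style key exchange, heavily simplified: Alice encodes random bits in
# random polarization bases; Bob measures in his own random bases; they then
# publicly compare bases and keep only the positions where the bases matched.
import random

def bb84_key(n_photons, seed=0):
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n_photons)]
    alice_bases = [rng.choice("+x") for _ in range(n_photons)]
    bob_bases   = [rng.choice("+x") for _ in range(n_photons)]

    # Bob's measurement: correct basis -> Alice's bit; wrong basis -> random bit
    bob_bits = [b if ab == bb else rng.randint(0, 1)
                for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

    # Sifting: keep only positions where the bases matched
    alice_key = [b for b, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]
    bob_key   = [b for b, ab, bb in zip(bob_bits, alice_bases, bob_bases) if ab == bb]
    return alice_key, bob_key
```

On average half the bases match, so roughly half the photons contribute to the sifted key; in this noise-free sketch Alice's and Bob's sifted keys agree exactly, and in the real protocol an eavesdropper's measurements would show up as disagreements.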

 

The strength of quantum cryptography is that if an attacker tries to intercept Alice and Bob's message, the key itself changes, due to the properties of quantum mechanics. Since it was first proposed in the 1980s, quantum cryptography has promised the possibility of unbreakable security. "In theory, the attacker could have all of the power possible under the laws of physics, but they still wouldn't be able to crack the code," said the paper's first author Lucian Comandar, a PhD student at Cambridge's Department of Engineering and Toshiba's Cambridge Research Laboratory.


Via Mariaschnee
Rescooped by Dr. Stefan Gruenwald from Bioinformatics, Comparative Genomics and Molecular Evolution

Web resource: 1000 Fungal Genomes Project (2016)


Sequencing unsampled fungal diversity: an effort to sequence 1,000+ fungal genomes. Also see the Google+ site for more discussion opportunities.

 

This project is in collaboration with the work of the JGI and you can find links on this site to the nomination page for submitting candidate species to the project.


Via Kamoun Lab @ TSL, Arjen ten Have

Discover stories within data using SandDance, a new Microsoft Research project


Data can be daunting. But within those numbers and spreadsheets is a wealth of information. There are also stories that the data can tell, if you’re able to see them. SandDance, a new Microsoft Garage project from Microsoft Research, helps you visually explore data sets to find stories and extract insights. It uses a free Web and touch-based interface to help users dynamically navigate through complex data they upload into the tool.

 

While data science experts will find SandDance a powerful tool, its ease of use gives people who aren’t experts in data science or programming the ability to analyze information – and present it – in a way that is accessible to a wider audience.

 

“We had this notion that a lot of visualization summarized data, and that summary is great, but sometimes you need the individual elements of your data set too,” says Steven Drucker, a principal researcher who’s focused on information visualization and data collections.

 

“We don’t want to lose sight of the trees because of the forest, but we also want to see the forest and the overall shape of the data. With this, you’ll see information about individual items and how they relate to each other. Most tools show one thing or the other. With SandDance, you can look at data from many different angles.”


These new quantum dot crystals could replace silicon in super-fast, next-gen computers


Solid, crystalline structures of incredibly tiny particles known as quantum dots have been developed by engineers in the US, and they're so close to perfect, they could be a serious contender for a silicon alternative in the super-fast computers of the future.

 

Just as single-crystal silicon wafers revolutionised computing technology more than 60 years ago (your phone, laptop, PC, and iPad wouldn’t exist without one), quantum dot solids could change everything about how we transmit and process information in the decades to come.

 

But despite the incredible potential of quantum dot crystals in computing technology, researchers have been struggling for years to organise each individual dot into a perfectly structured solid - something that’s crucial if you want to install it in a processor and run an electric charge through it.

 

The problem? Past efforts to build something out of quantum dots - which are made up of a mere 5,000 atoms each - have failed, because researchers couldn’t figure out how to 'glue' them together without using another type of material that messes with their performance.

 

"Previously, they were just thrown together, and you hoped for the best," lead researcher Tobias Hanrath from Cornell University told The Christian Science Monitor. "It was like throwing a couple thousand batteries into a bathtub and hoping you get charge flowing from one end to the other."

 

Instead of pursuing different chemicals and materials that could work as the 'glue' but hinder the quantum dot’s electrical properties, Hanrath and his team have figured out how to ditch the glue and stick the quantum dots to each other, Lego-style.

"If you take several quantum dots, all perfectly the same size, and you throw them together, they’ll automatically align into a bigger crystal," Hanrath says.

 

To achieve this, the researchers first made nanocrystals from lead and selenium, and built these into crystalline fragments. These fragments were then used to form two-dimensional, square-shaped 'superstructures' - tiny building blocks that attach to each other without the help of other atoms. 

 

Publishing the results in Nature Materials, the team claims that the electrical properties of these superstructures are potentially superior to all other existing semiconductor nanocrystals, and they could be used in new types of devices for super-efficient energy absorption and light emission. The structures aren’t entirely perfect though, which is a key limitation of using quantum dots as your building blocks. While every silicon atom is exactly the same size, each quantum dot can vary by about 5 percent, and even when we’re talking about something that’s a few thousand atoms small, that 5 percent size variability is all it takes to prevent perfection.

 

Hanrath says that’s a good and a bad thing - good because they managed to hit the limits of what can be done with quantum dot solids, but bad, because they’ve hit the limits of what can be done with quantum dot solids.

 

"It's the equivalent of saying, 'Now we've made a really large single-crystal wafer of silicon, and you can do good things with it,'" he says in a press release. "That's the good part, but the potentially bad part of it is, we now have a better understanding that if you wanted to improve on our results, those challenges are going to be really, really difficult."


Scientists suggest a 100 times faster type of memory cell based on Josephson junctions


A group of scientists from the Moscow Institute of Physics and Technology (MIPT) and Moscow State University (MSU) has developed a fundamentally new type of memory cell based on superconductors -- this type of memory will be able to work hundreds of times faster than the types of memory devices commonly used today, according to an article published in the journal Applied Physics Letters.

 

"With the operational function that we have proposed in these memory cells, there will be no need for time-consuming magnetization and demagnetization processes. This means that read and write operations will take only a few hundred picoseconds, depending on the materials and the geometry of the particular system, while conventional methods take hundreds or thousands of times longer than this," said the corresponding author of the study, Alexander Golubov, the Head of MIPT's Laboratory of Quantum Topological Phenomena in Superconducting Systems.

 

Golubov and his colleagues have proposed creating basic memory cells based on quantum effects in "sandwiches" of a superconductor -- dielectric (or other insulating material) -- superconductor, which were predicted in the 1960s by the British physicist Brian Josephson. The electrons in these "sandwiches" (they are called "Josephson junctions") are able to tunnel from one layer of a superconductor to another, passing through the dielectric like balls passing through a perforated wall.

 

Today, Josephson junctions are used both in quantum devices and conventional devices. For example, superconducting qubits are used to build the D-Wave quantum system, which is capable of finding the minima of complex functions using the quantum annealing algorithm. There are also ultra-fast analogue-to-digital converters, devices to detect consecutive events, and other systems that do not require fast access to large amounts of memory. There have also been attempts to use the Josephson effect to create ordinary processors. An experimental processor of this type was created in Japan in the late 1980s. In 2014, the research agency IARPA resumed its attempts to create a prototype of a superconducting computer.

 

Josephson junctions with ferromagnets used as the middle of the "sandwich" are currently of greatest practical interest. In memory elements that are based on ferromagnets the information is encoded in the direction of the magnetic field vector in the ferromagnet. However, there are two fundamental flaws with this process: firstly, the low density of the "packaging" of the memory elements -- additional chains need to be added to provide extra charge for the cells when reading or writing data, and secondly the magnetization vector cannot be changed quickly, which limits the writing speed.

 

The group of physicists from MIPT and MSU proposed encoding the data in Josephson cells in the value of the superconducting current. By studying the superconductor-normal metal/ferromagnet-superconductor-insulator-superconductor junctions, the scientists discovered that in certain longitudinal and transverse dimensions the layers of the system may have two energy minima, meaning they are in one of two different states. These two minima can be used to record data -- zeros and ones.
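A toy energy landscape shows what "two energy minima" buys you. This stand-in potential (an assumed two-harmonic form, not the authors' actual junction model) has two degenerate minima that can store a 0 or a 1:

```python
# Toy double-well Josephson energy with a second harmonic:
# E(phi) = -cos(phi) + 0.5*cos(2*phi) has two degenerate minima at
# phi = +/- pi/3, which can encode the two logical states "0" and "1".
import numpy as np

phi = np.linspace(-np.pi, np.pi, 100001)
energy = -np.cos(phi) + 0.5 * np.cos(2 * phi)

# Locate the local minima numerically (strictly below both neighbors)
interior = (energy[1:-1] < energy[:-2]) & (energy[1:-1] < energy[2:])
minima = phi[1:-1][interior]
```

The two minima sit at phase ±π/3 in this toy model; switching the stored bit means driving the system over the barrier between them, which the authors propose to do with injection currents.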

 

In order to switch the system from "zero" to "one" and back again, the scientists have suggested using injection currents flowing through one of the layers of the superconductor. They propose to read the status using the current that flows through the whole structure. These operations can be performed hundreds of times faster than measuring the magnetization or magnetization reversal of a ferromagnet.

 

"In addition, our method requires only one ferromagnetic layer, which means that it can be adapted to so-called single flux quantum logic circuits, and this means that there will be no need to create an entirely new architecture for a processor. A computer based on single flux quantum logic can have a clock speed of hundreds of gigahertz, and its power consumption will be dozens of times lower," said Golubov.


Experiment shows magnetic chips could dramatically increase computing's energy efficiency


In a breakthrough for energy-efficient computing, engineers at the University of California, Berkeley, have shown for the first time that magnetic chips can operate with the lowest fundamental level of energy dissipation possible under the laws of thermodynamics.

 

The findings, to be published Friday, March 11, 2016 in the peer-reviewed journal Science Advances, mean that dramatic reductions in power consumption are possible—as much as one-millionth the amount of energy per operation used by transistors in modern computers.

 

This is critical for mobile devices, which demand powerful processors that can run for a day or more on small, lightweight batteries. On a larger, industrial scale, as computing increasingly moves into 'the cloud,' the electricity demands of the giant cloud data centers are multiplying, collectively taking an increasing share of the country's—and world's—electrical grid.

 

"We wanted to know how small we could shrink the amount of energy needed for computing," said senior author Jeffrey Bokor, a UC Berkeley professor of electrical engineering and computer sciences and a faculty scientist at the Lawrence Berkeley National Laboratory. "The biggest challenge in designing computers and, in fact, all our electronics today is reducing their energy consumption."

 

Lowering energy use is a relatively recent shift in focus in chip manufacturing after decades of emphasis on packing greater numbers of increasingly tiny and faster transistors onto chips. "Making transistors go faster was requiring too much energy," said Bokor, who is also the deputy director of the Center for Energy Efficient Electronics Science, a Science and Technology Center at UC Berkeley funded by the National Science Foundation. "The chips were getting so hot they'd just melt."

 

Researchers have been turning to alternatives to conventional transistors, which currently rely upon the movement of electrons to switch between 0s and 1s. Partly because of electrical resistance, it takes a fair amount of energy to ensure that the signal between the two states is clear and reliably distinguishable, and this results in excess heat.
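The "lowest fundamental level of energy dissipation" mentioned above is the Landauer bound, kT·ln 2 per erased bit. A back-of-envelope comparison against an assumed ~1 fJ conventional switching energy (a rough ballpark figure, not taken from the paper) shows where the "one-millionth" scale comes from:

```python
# Landauer bound vs. an assumed conventional transistor switching energy.
import math

k_B = 1.380649e-23       # Boltzmann constant, J/K
T = 300.0                # room temperature, K

landauer = k_B * T * math.log(2)   # minimum energy to erase one bit, ~2.9e-21 J
transistor = 1e-15                 # assumed conventional switching energy, J

ratio = transistor / landauer      # gap of roughly five to six orders of magnitude
```

At room temperature the bound is about 2.9 zeptojoules per bit, far below typical transistor switching energies, which is why operating magnetic logic near this limit would cut energy per operation so dramatically.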


Project to reverse-engineer the brain to make computers to think like humans


Three decades ago, the U.S. government launched the Human Genome Project, a 13-year endeavor to sequence and map all the genes of the human species. Although initially met with skepticism and even opposition, the project has since transformed the field of genetics and is today considered one of the most successful scientific enterprises in history.

 

Now the Intelligence Advanced Research Projects Activity (IARPA), a research organization for the intelligence community modeled after the defense department’s famed DARPA, has dedicated $100 million to a similarly ambitious project. The Machine Intelligence from Cortical Networks program, or MICrONS, aims to reverse-engineer one cubic millimeter of the brain, study the way it makes computations, and use those findings to better inform algorithms in machine learning and artificial intelligence. IARPA has recruited three teams, led by David Cox, a biologist and computer scientist at Harvard University, Tai Sing Lee, a computer scientist at Carnegie Mellon University, and Andreas Tolias, a neuroscientist at the Baylor College of Medicine. Each team has proposed its own five-year approach to the problem.

 

“It’s a substantial investment because we think it’s a critical challenge, and [it’ll have a] transformative impact for the intelligence community as well as the world more broadly,” says Jacob Vogelstein at IARPA, who manages the MICrONS program.

 

MICrONS, as a part of President Obama’s BRAIN Initiative, is an attempt to push beyond the status quo in brain-inspired computing. A great deal of technology today already relies on a class of algorithms called artificial neural networks, which, as their name would suggest, are inspired by the architecture (or at least what we know about the architecture) of the brain. Thanks to significant increases in computing power and the availability of vast amounts of data on the Internet, Facebook can identify faces, Siri can recognize voices, cars can self-navigate, and computers can beat humans at games like chess. These algorithms, however, are still primitive, relying on a highly simplified process of analyzing information for patterns.

Based on models dating back to the 1980s, neural networks tend to perform poorly in cluttered environments, where the object the computer is trying to identify is hidden among a large number of objects, many of which are overlapping or ambiguous. These algorithms do not generalize well, either. Seeing one or two examples of a dog, for instance, does not teach the computer how to identify all dogs.

 

Humans, on the other hand, seem to overcome these challenges effortlessly. We can make out a friend in a crowd, focus on a familiar voice in a noisy setting, and deduce patterns in sounds or an image based on just one or a handful of examples. We are constantly learning to generalize without the need for any instructions. And so the MICrONS researchers have turned to the brain to find what these models are missing. “That’s the smoking gun,” Cox says.

 

While neural networks retain elements of the architecture found in the brain, the computations they use are not copied directly from any algorithms that neurons use to process information. In other words, the ways in which current algorithms represent, transform, and learn from data are engineering solutions, determined largely by trial and error. They work, but scientists do not really know why—certainly not well enough to define a way to design a neural network. Whether this neural processing is similar to or different from corresponding operations in the brain remains unknown. “So if we go one level deeper and take information from the brain at the computational level and not just the architectural level, we can enhance those algorithms and get them closer to brain-like performance,” Vogelstein says.
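The gap Vogelstein describes, between an architecture borrowed from the brain and computations found by trial and error, is easiest to see in miniature. A classic perceptron, the 1950s ancestor of today's deep networks, learns a function purely from examples and weight nudges, with no rule for the task ever written down (a toy sketch, not any of the MICrONS algorithms):

```python
# A perceptron learning the AND function purely from examples -- a toy
# stand-in for the trial-and-error training the article describes.

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.5

def predict(x):
    activation = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if activation > 0 else 0

# Nudge the weights toward each correct answer; no rule for AND is
# ever written down.
for _ in range(20):
    for x, target in examples:
        error = target - predict(x)
        weights[0] += learning_rate * error * x[0]
        weights[1] += learning_rate * error * x[1]
        bias += learning_rate * error

print([predict(x) for x, _ in examples])  # [0, 0, 0, 1]
```

The final weights work, but nothing in them explains *why* they work — which is precisely the engineering-solution character the article attributes to today's much larger networks.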

Scooped by Dr. Stefan Gruenwald

Competition to Quantum computers: NP-complete problem solved with biological motors


Quantum computers get a lot of people excited because they solve problems in a manner that's fundamentally different from existing hardware. A certain class of mathematical problems, called NP-complete, can seemingly only be solved by exploring every possible solution, which conventional computers have to do one at a time.


Quantum computers, by contrast, explore all possible solutions simultaneously, and so they can provide answers relatively rapidly.

This isn't just an intellectual curiosity: many encryption schemes rely on such problems being too computationally challenging for an eavesdropper to crack.

But as you may have noticed, we don't yet have quantum computers, and the technical hurdles between us and them remain substantial. An international team recently decided to try a different approach, using biology to explore a large solution space in parallel. While their computational machine is limited, the basic approach works, and it's 10,000 times more energy-efficient than traditional computers.


The basic setup of their system is remarkably simple. They create a grid of channels with two types of intersections. One allows everything passing down a grid line to continue traveling in a straight line; the second divides things evenly between the two output lines. A mathematical problem can then be encoded in the pattern of these two types of switches on the grid.


One way to envision this is as a series of channels that balls can roll down. To solve the problem, you simply start a large population of balls rolling from the top. They explore the entire solution space on their way toward the bottom and should end up collecting at the exits that correspond to possible solutions.
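The ball-and-channel picture maps directly onto the subset sum problem the team later solves: each split junction decides whether one number joins the running total, so every exit corresponds to one subset sum. A sketch of that exploration (the numbers and target are illustrative, not the instance used in the experiment):

```python
from itertools import product

# Each split junction either adds one number to the running total or
# skips it, so every path through the grid ends at an exit labelled by
# one subset sum. The set {2, 5, 9} and target 11 are illustrative,
# not the instance used in the experiment.

numbers = [2, 5, 9]
target = 11

exits = set()
for choices in product([0, 1], repeat=len(numbers)):  # one path per agent
    exits.add(sum(n for n, take in zip(numbers, choices) if take))

print(sorted(exits))    # [0, 2, 5, 7, 9, 11, 14, 16]
print(target in exits)  # True: some path ends at the target's exit gate
```

The catch is that the number of paths doubles with every number added, which is why the physical version needs enormous swarms of cheap, fast agents rather than a handful of rolling balls.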


Of course, doing this with something macroscopic would be rather space-, time-, and labor-consuming. You want something small so that the switches can be placed close together, and you want something that traverses the channels relatively quickly. For this, the authors turned to biology.


Within a cell, there are networks of protein fibers that motor proteins can crawl across using ATP as an energy source. These move things around within the cell, allow the cell itself to move, and drive larger processes like the contraction of muscles. For their computations, the authors flipped this around: the channels of the grid were coated in the motor proteins, and the authors loaded fragments of the fibers at the top of the grid. As long as ATP was provided, the fibers were quickly shuffled down to the bottom of the grid, traveling at speeds of up to 10µm a second. They were tagged with fluorescent molecules, which made it easy to track their progress.


The process wasn't 100 percent precise: fiber fragments failed to pass straight through the straight junctions about 0.2 percent of the time. Still, when the authors set up the grid to solve an instance of an NP-complete problem (the subset sum problem), there was a statistically significant difference between the number of fiber fragments that exited at gates corresponding to solutions and those that exited at incorrect gates.

Scooped by Dr. Stefan Gruenwald

Physics: Let's unite to build a quantum Internet


One of the greatest challenges for implementing a globally distributed quantum computer or a quantum internet is entangling nodes across the network. Qubits can then be teleported between any pair of nodes and processed by local quantum computers.

 

Ideally, nodes should be entangled either in pairs or by creating a large, multi-entangled 'cluster state' that is broadcast to all nodes. Cluster states that link thousands of nodes have already been created in the laboratory. The challenges are to demonstrate how they might be deployed over long distances, as well as how to store quantum states at the nodes and update them constantly using quantum codes.

 

Quantum networks require memories to store quantum information, ideally for hours — shielding it from unwanted interactions with the environment. Such memories are needed for quantum computing at nodes and also for the faithful, long-distance distribution of entanglement through quantum repeaters.

 

Quantum memories need to convert electromagnetic radiation into physical changes in matter with near-perfect read–write fidelity and at high capacity. 'Spin ensembles' represent one type of quantum memory. Ultracold atomic gases consisting of about one million atoms of rubidium can convert a single photon into a collective atomic excitation known as a spin wave. Storage times are approaching the 100 milliseconds required to transmit an optical signal across the world.
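That 100-millisecond figure is roughly the one-way travel time of light in optical fibre across half the globe, which a quick estimate confirms (the fibre refractive index of 1.47 is a typical assumed value, not a figure from the article):

```python
# Quick check of the ~100 ms target: one-way travel time of light in
# optical fibre across half the Earth. The fibre refractive index of
# 1.47 is a typical assumed value, not a figure from the article.

SPEED_OF_LIGHT_KM_S = 299_792
FIBRE_INDEX = 1.47
HALF_CIRCUMFERENCE_KM = 20_000  # half of Earth's ~40,000 km girth

speed_in_fibre_km_s = SPEED_OF_LIGHT_KM_S / FIBRE_INDEX
travel_time_ms = HALF_CIRCUMFERENCE_KM / speed_in_fibre_km_s * 1000
print(f"{travel_time_ms:.0f} ms")  # ~98 ms, the order of the storage target
```

A memory that decoheres faster than the photon can make the trip is useless for worldwide entanglement distribution, hence the push toward the 100 ms mark.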

 

Solid-state quantum memories are even more appealing. Crystalline-solid spin ensembles — created by inserting lattice defects known as nitrogen-vacancy centres into diamonds, or by doping rare-earth crystals — can remain coherent for hours at cryogenic temperatures.

 

Superconducting qubits, which are defined by physical quantities such as the charge of a capacitor or the flux of an inductor, interact within a quantum processor by releasing and absorbing microwave photons. For the successful integration of solid-state quantum memory, reversible storage and retrieval of quantum information must be made possible. This will require an efficient interface between the microwave photons and the atomic spins of a solid-state quantum memory that is attached to the processor. If successful, this hybrid technology would become the most promising architecture to be scaled up into a large, distributed quantum computer.

Scooped by Dr. Stefan Gruenwald

The biggest Big Data project on Earth

The largest volume of data ever gathered and processed, passing through the UK for scientists and SMBs to slice, dice, and turn into innovations and insights. When Big Data becomes Super-Massive Data.

 

Eventually there will be two SKA telescopes. The first, consisting of 130,000 2m dipole low-frequency antennae, is being built in the Shire of Murchison, a remote region about 800km north of Perth, Australia – an area the size of the Netherlands, but with a population of less than 100 people. Construction kicks off in 2018.

 

By Phase 2, said Diamond, the SKA will consist of half-a-million low and mid-frequency antennae, with arrays spread right across southern Africa as well as Australia, stretching all the way from South Africa to Ghana and Kenya – a multibillion-euro project on an engineering scale similar to the Large Hadron Collider. Which brings us to that supermassive data challenge for what, ultimately, will be an ICT-driven science facility. Diamond says: "The antennae will generate enormous volumes of data: even by the mid-2020s, Phase 1 of the project will be looking at 5,000 petabytes – five exabytes – a day of raw data. This will go to huge banks of digital signal processors, which we’re in the process of designing, and then into high-performance computers, and into an archive for scientists worldwide to access."

 

Our archive growth rate will be somewhere between 300 and 500 petabytes a year – science-quality data coming out of the supercomputer.
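Put together, the quoted figures imply a dramatic reduction between raw data and archived science data. A quick sanity check, using the article's own convention that one exabyte equals 1,000 petabytes, and taking the midpoint of the quoted archive range as an assumption:

```python
# The quoted volumes, sanity-checked, using the article's convention
# that one exabyte equals 1,000 petabytes. The 400 PB archive figure is
# simply the midpoint of the quoted 300-500 PB range.

raw_pb_per_day = 5_000
archive_pb_per_year = 400

raw_eb_per_day = raw_pb_per_day / 1000
print(raw_eb_per_day)  # 5.0 exabytes of raw data per day

raw_pb_per_year = raw_pb_per_day * 365
print(f"raw:archive ratio ~ {raw_pb_per_year / archive_pb_per_year:.0f}:1")
```

On these numbers, roughly 4,500 petabytes of raw data are generated for every petabyte that survives into the archive, which is why the signal-processing banks sit between the antennae and the supercomputer.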

 

Using the most common element in the universe, neutral hydrogen, as a tracer, the SKA will be able to follow the trail all the way back to the cosmic dawn, a few hundred thousand years after the Big Bang. But over billions of years (a beam of light travelling at 671 million miles an hour would take 46.5 billion years to reach the edge of the observable universe) the wavelength of those ancient hydrogen signatures becomes stretched by cosmological redshift, until it falls into the same range as the radiation emitted by mobile phones, aircraft, FM radio, and digital TV. This is why the SKA arrays are being built in remote, sparsely populated regions, says Diamond:

"The aim is to get away from people. It’s not because we’re antisocial – although some of my colleagues probably are a little! – but we need to get away from radio interference, phones, microwaves, and so on, which are like shining a torch in the business end of an optical telescope."
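The stretching Diamond describes is easy to quantify: neutral hydrogen emits at a rest frequency of about 1420.4 MHz (the 21 cm line), and a signal from redshift z arrives at f = 1420.4/(1 + z) MHz. The sample redshifts below are illustrative choices:

```python
# The 21 cm hydrogen line, redshifted into everyday radio bands. The
# rest frequency is physical; the sample redshifts are illustrative.

REST_FREQ_MHZ = 1420.4

def observed_mhz(z):
    """Observed frequency of the 21 cm line emitted at redshift z."""
    return REST_FREQ_MHZ / (1 + z)

for z, band in [(6, "VHF digital-TV range"),
                (10, "aircraft band"),
                (15, "FM radio band")]:
    print(f"z = {z:2d}: {observed_mhz(z):6.1f} MHz  ({band})")
```

A telescope hunting the cosmic dawn therefore has to listen on exactly the frequencies that FM stations, aircraft, and TV transmitters fill, which is the whole case for building it in an empty desert.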

 


Rescooped by Dr. Stefan Gruenwald from Popular Science

Online collaboration: Scientists and the social network

Giant academic social networks have taken off to a degree that no one expected even a few years ago. A Nature survey explores why.

Via Neelima Sinha
Scooped by Dr. Stefan Gruenwald

Let's build a quantum computer!
Understanding the architecture of a quantum processor

Andreas Dewes explains why quantum computing is interesting, how it works and what you actually need to build a working quantum computer. He uses the superconducting two-qubit quantum processor which he built during his PhD thesis as an example to explain its basic building blocks. He shows how this processor can be used to achieve so-called quantum speed-up for a search algorithm that can be run on it. Finally, he gives a short overview of the current state of superconducting quantum computing and Google's recently announced effort to build a working quantum computer in cooperation with one of the leading research groups in this field.

 

Google recently announced that it is partnering up with John Martinis - one of the leading researchers on superconducting quantum computing - to build a working quantum processor. This announcement has sparked a lot of renewed interest in a topic that was mainly of academic interest before. So, if Google thinks it's worth the hassle to build quantum computers then there surely must be something about them after all?

Scooped by Dr. Stefan Gruenwald

Physicists demonstrate a quantum Fredkin gate for the first time


Researchers from Griffith University and the University of Queensland have overcome one of the key challenges to quantum computing by simplifying a complex quantum logic operation.

 

"The allure of quantum computers is the unparalleled processing power that they provide compared to current technology," said Dr Raj Patel from Griffith's Centre for Quantum Dynamics.

"Much like our everyday computer, the brains of a quantum computer consist of chains of logic gates, although quantum logic gates harness quantum phenomena."

 

The main stumbling block to actually creating a quantum computer has been in minimising the number of resources needed to efficiently implement processing circuits. "Similar to building a huge wall out of lots of small bricks, large quantum circuits require very many logic gates to function. However, if larger bricks are used the same wall could be built with far fewer bricks," said Dr Patel. "We demonstrate in our experiment how one can build larger quantum circuits in a more direct way without using small logic gates."

 

At present, even small and medium scale quantum computer circuits cannot be produced because of the requirement to integrate so many of these gates into the circuits. One example is the Fredkin (controlled-SWAP) gate. This is a gate in which two qubits are swapped depending on the value of the third. Usually the Fredkin gate requires implementing a circuit of five logic operations. The research team used the quantum entanglement of photons—particles of light—to implement the controlled-SWAP operation directly.
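On computational basis states the Fredkin gate is just a conditional swap, which makes its truth table easy to check. This is a classical sketch only; the actual quantum gate is this permutation extended linearly to superpositions of the eight basis states:

```python
# The Fredkin (controlled-SWAP) gate on computational basis states:
# swap the two target bits only when the control bit is 1.

def fredkin(control, a, b):
    return (control, b, a) if control == 1 else (control, a, b)

print(fredkin(0, 1, 0))  # (0, 1, 0) -- control 0: targets untouched
print(fredkin(1, 1, 0))  # (1, 0, 1) -- control 1: targets swapped

# Like all quantum logic gates it is reversible: applied twice,
# every basis state returns to itself.
for state in [(c, a, b) for c in (0, 1) for a in (0, 1) for b in (0, 1)]:
    assert fredkin(*fredkin(*state)) == state
```

That the gate is its own inverse is what makes it usable inside reversible quantum circuits, where no information may be erased.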

 

"There are quantum computing algorithms, such as Shor's algorithm for finding prime numbers, that require the controlled-SWAP operation.

 

The quantum Fredkin gate can also be used to perform a direct comparison of two sets of qubits (quantum bits) to determine whether they are the same or not. This is not only useful in computing but is an essential feature of some secure quantum communication protocols where the goal is to verify that two strings, or digital signatures, are the same," said Professor Tim Ralph from the University of Queensland.

 

Professor Geoff Pryde, from Griffith's Centre for Quantum Dynamics, is the project's chief investigator. "What is exciting about our scheme is that it is not limited to just controlling whether qubits are swapped, but can be applied to a variety of different operations opening up ways to control larger circuits efficiently," said Professor Pryde.

Rescooped by Dr. Stefan Gruenwald from Virtual Neurorehabilitation

Robotic exoskeleton maps sense-deficits in young stroke patients


Researchers at the University of Calgary are using robotics technology to try to come up with more effective treatments for children who have had strokes.

 

The robotic device measures a patient's position sense — what doctors call proprioception — the unconscious perception of where the body is while in motion or at rest.

 

"Someone whose position sense has been affected might have difficulty knowing where their hand or arm is in space, adding to their difficulty in using their affected, weaker limb," said one of the study's senior researchers, Dr. Kirton of the Cumming School of Medicine's departments of pediatrics and clinical neurosciences.

 

"We can try to make a hand stronger but, if your brain doesn't know where the hand is, this may not translate into meaningful function in daily life."

 

PhD candidate Andrea Kuczynski is doing ongoing research using the KINARM (Kinesiological Instrument for Normal and Altered Reaching Movements) robotic device.

 

During the test, the children sit in the KINARM machine with their arms supported by its exoskeleton, which measures movement as they play video games and perform other tasks. All the children also had MRIs, which gave researchers a detailed picture of their brain structures.


Via Daniel Perez-Marcos
Scooped by Dr. Stefan Gruenwald

Biology software Cello promises easier way to program living cells


Synthetic biologists have created software that automates the design of DNA circuits for living cells. The aim is to help people who are not skilled biologists to quickly design working biological systems, says synthetic biologist Christopher Voigt at the Massachusetts Institute of Technology in Cambridge, who led the work. “This is the first example where we’ve literally created a programming language for cells,” he says.

 

In the new software — called Cello — a user first specifies the kind of cell they are using and what they want it to do: for example, sense metabolic conditions in the gut and produce a drug in response. They type in commands to explain how these inputs and outputs should be logically connected, using a computing language called Verilog that electrical engineers have long relied on to design silicon circuits. Finally, Cello translates this information to design a DNA sequence that, when put into a cell, will execute the demands.
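Genetic circuits of the kind Cello targets are typically built from repressor-based NOT and NOR gates, so whatever logic the Verilog source describes must be recompiled into those primitives. A sketch of the idea in ordinary code — the AND-from-NOR construction below is a standard logic identity, used here for illustration rather than taken from the paper:

```python
# Genetic logic is commonly composed from repressor-based NOT/NOR
# gates, so richer functions must be rebuilt from those primitives.
# Here: an AND gate expressed with NOR gates only.

def nor(a, b):
    return int(not (a or b))

def and_from_nors(a, b):
    # AND(a, b) == NOR(NOT a, NOT b), where NOT x == NOR(x, x)
    return nor(nor(a, a), nor(b, b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", and_from_nors(a, b))
# 0 0 -> 0
# 0 1 -> 0
# 1 0 -> 0
# 1 1 -> 1
```

Cello's job is the biological analogue of this rewrite: map the user's abstract logic onto a library of characterized repressor gates and emit the DNA sequence that implements them.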

 

Voigt says his team is writing user interfaces that would allow biologists to write a single program and be returned different DNA sequences for different organisms. Anyone can access Cello through a Web-based interface, or by downloading its open-source code from the online repository GitHub.

 

"This paper solves the problem of the automated design, construction and testing of logic circuits in living cells," says bioengineer Herbert Sauro at the University of Washington in Seattle, who was not involved in the study. The work is published in Science.

Scooped by Dr. Stefan Gruenwald

The momentous advance in artificial intelligence demands a new set of ethics


Let us all raise a glass to AlphaGo, mark another big moment in the advance of artificial intelligence (AI), and then perhaps start to worry. AlphaGo, Google DeepMind's Go-playing AI, just bested the best Go-playing human currently alive, the renowned Lee Sedol. This was not supposed to happen. At least, not for a while. An artificial intelligence capable of beating the best humans at the game was predicted to be 10 years away.

 

But as we drink to its early arrival, we should also begin trying to understand what the surprise means for the future – with regard, chiefly, to the ethics and governance implications that stretch far beyond a game.

 

As AlphaGo and AIs like it become more sophisticated – commonly outperforming us at tasks once thought to be uniquely human – will we feel pressured to relinquish control to the machines?

 

The number of possible moves in a game of Go is so massive that, in order to win against a player of Lee’s calibre, AlphaGo was designed to adopt an intuitive, human-like style of gameplay. Relying exclusively on more traditional brute-force programming methods was not an option. Designers at DeepMind made AlphaGo more human-like than traditional AI by using a relatively recent development – deep learning.

 

Deep learning uses large data sets, “machine learning” algorithms and deep neural networks – artificial networks of “nodes” that are meant to mimic neurons – to teach the AI how to perform a particular set of tasks. Rather than programming complex Go rules and strategies into AlphaGo, DeepMind designers taught AlphaGo to play the game by feeding it data based on typical Go moves. Then, AlphaGo played against itself, tirelessly learning from its own mistakes and improving its gameplay over time. The results speak for themselves.
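Self-play is the key trick, and it works even without deep networks. In miniature: let a program exhaustively play a trivial game against itself (here Nim, where players alternately take one or two stones and taking the last stone wins) and label every position from the outcomes of its own games, with no strategy programmed in. This is a toy stand-in for AlphaGo's training, not its actual algorithm:

```python
from functools import lru_cache

# Self-play in miniature: exhaustively play out Nim (take 1 or 2
# stones; taking the last stone wins) against itself and label every
# position, with no strategy programmed in.

@lru_cache(maxsize=None)
def is_winning(stones):
    # Winning if at least one move leaves the opponent in a losing spot.
    return any(take <= stones and not is_winning(stones - take)
               for take in (1, 2))

# The "discovered" pattern: multiples of 3 are lost positions.
print([n for n in range(1, 13) if not is_winning(n)])  # [3, 6, 9, 12]
```

The program finds the multiples-of-3 rule without ever being told it; AlphaGo does the same kind of discovery, except that the game tree is far too vast to exhaust, so a deep network generalizes from the positions its self-play actually visits.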

 

Possessing a more intuitive approach to problem-solving allows artificial intelligence to succeed in highly complex environments. For example, actions with high levels of unpredictability – talking, driving, serving as a soldier – which were previously unmanageable for AI are now considered technically solvable, thanks in large part to deep learning.

Leonardo Wild's curator insight, March 27, 6:20 PM
The subject matter of one of my so-far unpublished novels, the third book in the Unemotion series (Yo Artificial, in Spanish). It's starting to happen and we think Climate Change is big.
Scooped by Dr. Stefan Gruenwald

Face-tracking software lets you make anyone say anything in real time

You know how they say, "Show me pictures or video, or it didn't happen"? Well, the days when you could trust what you see on video in real time are officially coming to an end thanks to a new kind of face tracking.

 

A team from Stanford, the Max Planck Institute for Informatics and the University of Erlangen-Nuremberg has produced a video demonstrating how its software, called Face2Face, in combination with a common webcam, can make any person on video appear to say anything a source actor wants them to say.

 

In addition to perfectly capturing the real-time talking motions of the actor and placing them seamlessly on the video subject, the software also accounts for real-time facial expressions, including distinct movements such as eyebrow raises.

 

To show off the system, the team used YouTube videos of U.S. President George W. Bush, Russian President Vladimir Putin and Republican presidential candidate Donald Trump. In each case, the facial masking is flawless, effectively turning the video subject into the actor's puppet.

 

It might be fun to mix this up with something like "Say it with Trump," but for now the software is still in the research phase. "Unfortunately, the software is currently not publicly available — it's just a research project," team member Matthias Niessner told Mashable. "However, we are thinking about commercializing it given that we are getting so many requests." We knew this kind of stuff was possible in the special effects editing room, but the ability to do it in real time — without those nagging "uncanny valley" artifacts — could change how we interpret video documentation forever.

Matt Archer's curator insight, March 24, 4:24 PM

What possible reason is there for this technology outside of supporting terrorism..?  Crazy.

Scooped by Dr. Stefan Gruenwald

Korean Go champ scores surprise victory over supercomputer

A South Korean Go grandmaster on Sunday scored his first win over a Google-developed supercomputer, in a surprise victory after three humiliating defeats in a high-profile showdown between man and machine.

 

Lee Se-Dol defeated AlphaGo after a nail-biting match that lasted for nearly five hours—the fourth of the best-of-five series, in which the computer had clinched a 3-0 victory on Saturday.

 

Lee struggled in the early phase of the fourth match but gained a lead towards the end, eventually prompting AlphaGo to resign.

The 33-year-old is one of the greatest players in the modern history of the ancient board game, with 18 international titles to his name—the second most in the world.

 

"I couldn't be happier today...this victory is priceless. I wouldn't trade it for the world," a smiling Lee said after the match to cheers and applause from the audience. "I can't say I wasn't hurt by the past three defeats...but I still enjoyed every moment of playing so it really didn't damage me greatly," he said.

 

Lee had earlier predicted a landslide victory over the artificial intelligence (AI) but was later forced to concede that AlphaGo was "too strong". After his second defeat, Lee had vowed to try his best to win at least one game.

 

Described as the "match of the century" by local media, the game was closely watched by tens of millions of Go fans mostly in East Asia as well as AI scientists.

 

The most famous AI victory to date came in 1997, when the IBM-developed supercomputer Deep Blue beat the then world chess champion Garry Kasparov. But Go, played for centuries mostly in Korea, Japan and China, had long remained the holy grail for AI developers due to its complexity and near-infinite number of potential configurations.

Rescooped by Dr. Stefan Gruenwald from Systems Theory

Xiaoice: The largest Turing test in history is carried out by Microsoft in China

Microsoft's Xiaoice, which means "little Bing" in Chinese, has been described as the "largest Turing test in history" by a top researcher at the company.

 

In a report for Nautilus, the researcher, Yongdong Wang, outlined what the company was doing with Xiaoice — pronounced "Shao-ice" and translated as "little Bing" — in China. The Turing test, created by British scientist Alan Turing, looks to see whether a human can detect if the entity answering a question is a human or a computer. The test has become a fixture in pop culture, with a starring role in the film "Ex Machina."

 

According to Wang, Xiaoice is used by millions of people every day and can respond with human-like answers, questions, and "thoughts." If a user sends a picture of a broken ankle, Xiaoice will reply with a question asking about how much pain the injury caused, for example.

 

"Xiaoice can exchange views on any topic," Wang wrote. "If it's something she doesn't know much about, she will try to cover it up. If that doesn't work, she might become embarrassed or even angry, just like a human would."

 

A report from GeekWire said millions of Chinese users were telling Xiaoice that they loved it, without any apparent irony. About 25% of Xiaoice's users — 10 million people — had said "I love you" while using the service.

 

But very few people outside Asia have heard of Xiaoice. According to Google Trends, which tracks what people are searching for, interest spiked in August last year but has fallen.


Via Ben van Lier
Scooped by Dr. Stefan Gruenwald

Google is using machine learning to teach robots intelligent reactive behaviors


Using your hand to grasp a pen that’s lying on your desk doesn’t exactly feel like a chore, but for robots, that’s still a really hard thing to do. So to teach robots how to better grasp random objects, Google’s research team dedicated 14 robots to the task. The standard way to solve this problem would be for the robot to survey the environment, create a plan for how to grasp the object, then execute on it. In the real world, though, lots of things can change between formulating that plan and executing on it.

 

Google is now using these robots to train a deep convolutional neural network (a technique that’s all the rage in machine learning right now) to help its robots predict the outcome of their grasps based on the camera input and motor commands. It’s basically hand-eye coordination for robots.

 

The team says that it took about 3,000 hours of practice (and 800,000 grasp attempts) before it saw “the beginnings of intelligent reactive behaviors.”

 

“The robot observes its own gripper and corrects its motions in real time. It also exhibits interesting pre-grasp behaviors, like isolating a single object from a group,” the team writes. “All of these behaviors emerged naturally from learning, rather than being programmed into the system.”

 

Google’s researchers say the average failure rate without training was 34 percent on the first 30 picking attempts. After training, that number was down to 18 percent. Still not perfect, but the next time a robot comes running after you and tries to grab you, remember that it now has an 82 percent chance of succeeding.
