Amazing Science
Scooped by Dr. Stefan Gruenwald
Scoop.it!

Nuclear physicists use quantum computing for first simulations of atomic nucleus


Scientists at the Department of Energy's Oak Ridge National Laboratory are the first to successfully simulate an atomic nucleus using a quantum computer. The results, published in Physical Review Letters, demonstrate the ability of quantum systems to compute nuclear physics problems and serve as a benchmark for future calculations.

 

Quantum computing, in which computations are carried out based on the quantum principles of matter, was proposed by American theoretical physicist Richard Feynman in the early 1980s. Unlike ordinary computer bits, the qubits used by quantum computers store information in two-state quantum systems, such as electrons or photons, which can exist in a combination of both states at once (a phenomenon known as superposition).

 

"In classical computing, you write in bits of zero and one," said Thomas Papenbrock, a theoretical nuclear physicist at the University of Tennessee and ORNL who co-led the project with ORNL quantum information specialist Pavel Lougovski. "But with a qubit, you can have zero, one, and any possible combination of zero and one, so you gain a vast set of possibilities to store data."
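Papenbrock's description of a qubit holding "zero, one, and any possible combination of zero and one" can be sketched in a few lines of linear algebra (a generic illustration of superposition, not the ORNL code):

```python
import numpy as np

# A qubit state is a normalized 2-component complex vector a|0> + b|1>.
ket0 = np.array([1.0, 0.0])

# The Hadamard gate turns |0> into an equal combination of |0> and |1>.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
psi = H @ ket0

# Measurement outcome probabilities are the squared amplitudes
# (Born rule) -- here a 50/50 split between 0 and 1.
probs = np.abs(psi) ** 2
print(probs)  # → [0.5 0.5]
```

A classical bit can only ever be at one of the two endpoints; the qubit's state vector can sit anywhere on the unit circle between them, which is the "vast set of possibilities" Papenbrock refers to.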

 

In October 2017 the multidivisional ORNL team started developing codes to perform simulations on the IBM QX5 and the Rigetti 19Q quantum computers through DOE's Quantum Testbed Pathfinder project, an effort to verify and validate scientific applications on different quantum hardware types. Using the freely available pyQuil library, designed for producing programs in the quantum instruction language Quil, the researchers wrote a code that was sent first to a simulator and then to the cloud-based IBM QX5 and Rigetti 19Q systems.

 

The team performed more than 700,000 quantum computing measurements of the energy of a deuteron, the nuclear bound state of a proton and a neutron. From these measurements, the team extracted the deuteron's binding energy—the minimum amount of energy needed to disassemble it into these subatomic particles. The deuteron is the simplest composite atomic nucleus, making it an ideal candidate for the project. "Qubits are generic versions of quantum two-state systems. They have no properties of a neutron or a proton to start with," Lougovski said. "We can map these properties to qubits and then use them to simulate specific phenomena—in this case, binding energy."
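The logic of extracting a binding energy from many measurements is variational: prepare a parameterized trial state, evaluate the energy for each setting, and keep the minimum. The sketch below illustrates that idea classically with a made-up 2×2 Hamiltonian (illustrative numbers only, not the deuteron Hamiltonian from the paper):

```python
import numpy as np

# Hypothetical two-level Hamiltonian (illustrative numbers only).
H = np.array([[-0.5, 1.0],
              [1.0, 1.5]])

# Trial state |psi(t)> = cos(t)|0> + sin(t)|1>.  Scan the angle and keep
# the lowest expectation value <psi|H|psi> -- the variational estimate
# of the ground-state energy.
thetas = np.linspace(0.0, np.pi, 1000)
energies = [np.cos(t) ** 2 * H[0, 0] + np.sin(t) ** 2 * H[1, 1]
            + 2 * np.sin(t) * np.cos(t) * H[0, 1] for t in thetas]
e_min = min(energies)

# The variational minimum converges to the exact lowest eigenvalue.
exact = np.linalg.eigvalsh(H)[0]
print(round(e_min, 3), round(exact, 3))  # → -0.914 -0.914
```

On quantum hardware the expectation values come from repeated measurements rather than matrix arithmetic, but the minimization over trial states is the same.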

 

A challenge of working with these quantum systems is that scientists must run simulations remotely and then wait for results. ORNL computer science researcher Alex McCaskey and ORNL quantum information research scientist Eugene Dumitrescu ran single measurements 8,000 times each to ensure the statistical accuracy of their results.  "It's really difficult to do this over the internet," McCaskey said. "This algorithm has been done primarily by the hardware vendors themselves, and they can actually touch the machine. They are turning the knobs."
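The 8,000 repetitions address shot noise: each run yields a random outcome, and the standard error of the estimated probability shrinks as 1/√N. A quick simulation of that scaling (generic, not the ORNL analysis; the probability value is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.3  # true probability of measuring |1> (illustrative value)
spread = {}

for shots in (100, 8000):
    # Simulate many repetitions of an N-shot experiment and measure
    # how much the estimated probability scatters around the truth.
    estimates = rng.binomial(shots, p, size=2000) / shots
    spread[shots] = estimates.std()

# The standard error scales as sqrt(p*(1-p)/N): more shots, tighter estimate.
print(spread)
```

Going from 100 to 8,000 shots shrinks the scatter by roughly a factor of nine, which is why single measurements are repeated thousands of times before the results are trusted.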

 

The team also found that quantum devices become tricky to work with due to inherent noise on the chip, which can alter results drastically. McCaskey and Dumitrescu successfully employed strategies to mitigate high error rates, such as artificially adding more noise to the simulation to see its impact and deduce what the results would be with zero noise.
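The mitigation strategy described here is often called zero-noise extrapolation: measure at deliberately amplified noise levels, then extrapolate the trend back to zero. A minimal sketch with a made-up linear noise model (not the team's actual data):

```python
import numpy as np

exact_energy = -2.0  # hypothetical noiseless value we want to recover

def noisy_energy(noise_level):
    # Toy model: noise biases the measured energy linearly upward.
    return exact_energy + 1.3 * noise_level

# Measure at the device's native noise level and at amplified levels...
levels = np.array([1.0, 2.0, 3.0])
measured = np.array([noisy_energy(x) for x in levels])

# ...then fit the trend and extrapolate it back to zero noise.
slope, intercept = np.polyfit(levels, measured, 1)
print(round(intercept, 6))  # → -2.0
```

Real devices are not perfectly linear in their noise response, so higher-order fits and repeated runs are used in practice, but the extrapolation idea is the same.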


Google wants to push quantum computing mainstream with ‘Bristlecone’ chip


The goal of the Google Quantum AI lab is to build a quantum computer that can be used to solve real-world problems. This strategy is designed to explore near-term applications using systems that are forward compatible with a large-scale universal error-corrected quantum computer. For a quantum processor to run algorithms beyond the scope of classical simulation, it requires not only a large number of qubits; crucially, it must also have low error rates on readout and on logical operations, such as single- and two-qubit gates.

 

Google presented Bristlecone, their new quantum processor, at the annual American Physical Society meeting in Los Angeles (2018). The purpose of this gate-based superconducting system is to provide a testbed for research into system error rates and scalability of Google's qubit technology, as well as applications in quantum simulation, optimization, and machine learning.

 

Google states: "The guiding design principle for this device is to preserve the underlying physics of our previous 9-qubit linear array technology, which demonstrated low error rates for readout (1%), single-qubit gates (0.1%) and most importantly two-qubit gates (0.6%) as our best result. This device uses the same scheme for coupling, control, and readout, but is scaled to a square array of 72 qubits. We chose a device of this size to be able to demonstrate quantum supremacy in the future, investigate first- and second-order error-correction using the surface code, and to facilitate quantum algorithm development on actual hardware."
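Those error rates matter because errors compound with circuit depth. A back-of-the-envelope sketch using the quoted 0.6% two-qubit figure (assuming, simplistically, independent gate errors and no error correction):

```python
# Chance that a circuit suffers no two-qubit-gate error at all, assuming
# independent failures at the quoted best rate of 0.6% per gate.
p_err = 0.006

for n_gates in (10, 100, 1000):
    p_clean = (1 - p_err) ** n_gates
    print(n_gates, round(p_clean, 3))  # → 0.942, 0.548, 0.002
```

At these rates only relatively shallow circuits finish error-free, which is why the Bristlecone team emphasizes gate fidelity and surface-code error correction as much as raw qubit count.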


Conservationists use astronomy software to save species


Researchers are adapting astronomical techniques developed for studying distant stars to survey endangered species. The team of scientists is developing a system that automatically identifies animals using a camera mounted on a drone, detecting them from the heat they give off even when vegetation is in the way. Details of the system were presented at the annual meeting of the European Astronomical Society in Liverpool, UK.

 

The idea was developed by Serge Wich, a conservationist at Liverpool John Moores University, and Dr Steve Longmore, an astrophysicist at the same university. They say the system has the potential to greatly improve the accuracy of monitoring endangered species and so aid efforts to save them.

 

"Conservation is not only about the numbers of animals but also about political will and local community supporting conservation. But better data always helps to move good arguments forward. Solid data on what is happening to animal populations is the foundation of all conservation efforts".

Rescooped by Dr. Stefan Gruenwald from Systems Theory

DDoS attack code powering massive attacks now public - CyberScoop


You, too, can now attempt a record-setting denial-of-service attack, as the tools used to launch the attacks were publicly posted to GitHub this week.

 

Proof-of-concept code posted by Twitter user @037, combined with a list of 17,000 IP addresses of vulnerable memcached servers harvested from the Shodan.io computer search engine, allows anyone to send forged UDP packets to those servers.

 

It’s been just over a week since the first massive memcached-fueled denial-of-service attack. The code’s author says it’s being released “to bring more attention to the flaw and force others into updating their devices.”

 

The era of terabit DDoS attacks was ushered in this month, when giant denial-of-service attacks set records at 1.35 terabits per second and then 1.7 terabits per second. Both used unsecured memcached servers; the first targeted GitHub itself, while the latter 1.7-Tbps attack hit an unnamed U.S. service provider, according to Arbor Networks.

 

A second tool was released on Monday, BleepingComputer reports, but the author is unknown. Akamai and Cloudflare predicted more attacks following the record-setting efforts. Cloudflare CEO Matthew Prince said he was seeing separate attacks of a similar size last week.


Via Ben van Lier

Like wine, your next computer could improve with age

Artificial intelligence is sweeping industries like medicine and finance. What if the machine you’re reading this on could learn too?

 

In a recent paper, Google describes using deep learning—an AI method that employs a large simulated neural network—to improve prefetching. Although the researchers haven’t shown how much this speeds things up, the boost could be big, given what deep learning has brought to other tasks.
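Prefetching is, at heart, sequence prediction over memory addresses: guess which data the processor will want next and fetch it early. Google's paper frames this as a deep-learning problem; as a much simpler hypothetical baseline, one can learn which address delta tends to follow which, and prefetch accordingly:

```python
from collections import Counter, defaultdict

def train_delta_predictor(addresses):
    """Learn, for each observed address delta, the most common next delta."""
    deltas = [b - a for a, b in zip(addresses, addresses[1:])]
    table = defaultdict(Counter)
    for cur, nxt in zip(deltas, deltas[1:]):
        table[cur][nxt] += 1
    return {cur: counts.most_common(1)[0][0] for cur, counts in table.items()}

def predict_next(table, last_addr, last_delta):
    # Prefetch the address implied by the most likely next delta.
    return last_addr + table.get(last_delta, last_delta)

# A strided access pattern: the predictor learns the stride of 8.
trace = [0, 8, 16, 24, 32, 40]
table = train_delta_predictor(trace)
print(predict_next(table, 40, 8))  # → 48
```

A frequency table like this handles regular strides; the hope for neural models is that they can capture the irregular, long-range patterns that defeat such simple predictors.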

 

“The work that we did is only the tip of the iceberg,” says Heiner Litz of the University of California, Santa Cruz, a visiting researcher on the project. Litz believes it should be possible to apply machine learning to every part of a computer, from the low-level operating system to the software that users interact with.

 

Such advances would be opportune. Moore’s Law is finally slowing down, and the fundamental design of computer chips hasn’t changed much in recent years. Tim Kraska, an associate professor at MIT who is also exploring how machine learning can make computers work better, says the approach could be useful for high-level algorithms, too. A database might automatically learn how to handle financial data as opposed to social-network data, for instance. Or an application could teach itself to respond to a particular user’s habits more effectively.
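Kraska's group has explored this idea for database indexes: instead of walking a B-tree, fit a model that predicts where a key sits in a sorted array, then correct the guess with a short bounded search. A minimal linear-model sketch on hypothetical, near-linear key data:

```python
import bisect

def build_learned_index(keys):
    """Fit position ≈ slope*key + intercept over a sorted key array."""
    n = len(keys)
    xs, ys = keys, range(n)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

def lookup(keys, model, key, max_err=8):
    # Predict a position, then binary-search a small window around it.
    slope, intercept = model
    guess = int(slope * key + intercept)
    lo = max(0, guess - max_err)
    hi = min(len(keys), guess + max_err + 1)
    i = bisect.bisect_left(keys, key, lo, hi)
    return i if i < len(keys) and keys[i] == key else -1

keys = [3 * i for i in range(1000)]  # hypothetical near-linear key set
model = build_learned_index(keys)
print(lookup(keys, model, 2997))  # → 999
```

When the key distribution is learnable, the model replaces most of the tree traversal with one multiply-add, which is the kind of automatic, data-specific customization Kraska describes.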

 

“We tend to build general-purpose systems and hardware,” Kraska says. “Machine learning makes it possible that the system is automatically customized, to its core, to the specific data and access patterns of a user.”

 

Kraska cautions that using machine learning remains computationally expensive, so computer systems won’t change overnight. “However, if it is possible to overcome these limitations,” he says, “the way we develop systems might fundamentally change in the future.”

 

Litz is more optimistic. “The grand vision is a system that is constantly monitoring itself and learning,” he says. “It’s really the start of something really big.”

 

Fast computer control for molecular machines


Scientists at the Technical University of Munich (TUM) have developed a novel electric propulsion technology for nanorobots. It allows molecular machines to move a hundred thousand times faster than with the biochemical processes used to date. This makes nanobots fast enough to do assembly-line work in molecular factories. The new research results appear as the cover story of the 19 January 2018 issue of Science.

 

Up and down, up and down. The points of light alternate back and forth in lockstep. They are produced by glowing molecules affixed to the ends of tiny robot arms. Prof. Friedrich Simmel observes the movement of the nanomachines on the monitor of a fluorescence microscope. A simple mouse click is all it takes for the points of light to move in another direction.

 

"By applying electric fields, we can arbitrarily rotate the arms in a plane," explains the head of the Chair of Physics of Synthetic Biological Systems at TU Munich. His team has for the first time managed to control nanobots electrically, and at the same time set a record: the new technique is 100,000 times faster than all previous methods.


Loihi: Intel Presents its Neuromorphic Computing Chip which mimics the way the human brain observes, learns and understands

Intel's neuromorphic computing team is creating a chip that mimics the way the human brain observes, learns and understands. The company's prototype chip "Loihi" is the most recent step in that direction. Neuromorphic computing has the potential to change the future -- the way devices understand and interpret data.


Outsmarting humans? AI cyberattacks will be almost impossible for humans to stop


As cyberattacks become more refined, they will start mimicking our online traits. This will lead to a battle of the machines.

 

As early as 2018, we can expect to see truly autonomous weaponized artificial intelligence that delivers its blows slowly, stealthily and virtually without trace. And 2018 will be the year of the machine-on-machine attack. There is much debate about the possible future of autonomous AI on the battlefield. Once released, these systems are not controlled. They do not wait for orders from base. They learn and make their own decisions often while deep inside enemy territory. And they learn quickly from their environments.

 

However, autonomous AIs are already starting to be deployed on another type of battlefield: digital networks. Today cyber-attackers are using AI technologies that help them not only infiltrate an IT infrastructure, but to stay on that network for months, perhaps years, without getting noticed.

 

In 2018, we can expect these algorithmic presences to use their intelligence to learn about their environments and blend in with the daily commotion of network activity. The drivers of these automated attacks may have a defined target – the blueprint designs of a new type of jet engine, say – or persist opportunistically, where the chance for money- or mischief-making avails itself. As they sustain their presence, they grow stronger in their inside knowledge of the network and its users and they build up control over data and entire systems.

 

Like HIV, which is so pernicious because it uses the body's own defences to replicate itself, these new machine intelligences will target the very defences deployed against them. They will learn how the firewall works, the analytics models used to detect attacks and the times of day that the security team is in the office. They will then adapt to avoid and weaken those defences. All the while, they will use their strength to spread, creating inroads for compromise and contaminating devices with brutal efficiency.

 

AI will also attack us by impersonating people. We already have AI assistants that do our scheduling, email on our behalf and ask us what we'd like to order for lunch. But what happens if your AI assistant gets taken over by a malicious attacker? Or, indeed, what happens when weaponized AI is refined enough to convincingly impersonate a real person who you trust?

 

A stealthy, long-term AI presence on your network will have ample time to learn what your writing style is and how this differs depending on who you email, your contact base and the distinctions in professional and personal relationships based on the language you use and key themes in your conversations. For example, you email your partner five times a day, particularly in the morning and afternoon. They sign their emails "X". Your football team emails weekly with details for Saturday's five-a-side games. They sign emails "Be there!". This is fodder for AI.

 

As to what we should do about these malicious AIs: they will be too clever and stealthy to combat other than with other AIs. This is one arena in which we'll have to give up control, not take it back.

 

Physicists Set New Record with 10-Qubit Entanglement and Parallel Logic Operations using a Superconducting Circuit


Physicists have experimentally demonstrated quantum entanglement with 10 qubits on a superconducting circuit, surpassing the previous record of nine entangled superconducting qubits. The 10-qubit state is the largest multiqubit entangled state created in any solid-state system and represents a step toward realizing large-scale quantum computing.

 

Lead researcher Jian-Wei Pan and co-workers at the University of Science and Technology of China, Zhejiang University, Fuzhou University, and the Institute of Physics, China, have published a paper on their results in a recent issue of Physical Review Letters. In general, one of the biggest challenges to scaling up multiqubit entanglement is addressing the catastrophic effects of decoherence. One strategy is to use superconducting circuits, which operate at very cold temperatures and consequently have longer qubit coherence times.

 

In the new set-up, the researchers used qubits made of tiny pieces of aluminum, which they connected to each other and arranged in a circle around a central bus resonator. The bus is a key component of the system, as it controls the interactions between qubits, and these interactions generate the entanglement.

 

As the researchers demonstrated, the bus can create entanglement between any two qubits, can produce multiple entangled pairs, or can entangle up to all 10 qubits. Unlike some previous demonstrations, the entanglement does not require a series of quantum logic gates, nor does it involve modifying the physical wiring of the circuit, but instead all 10 qubits can be entangled with a single collective qubit-bus interaction.

 

To measure how well the qubits are entangled, the researchers used quantum tomography to determine the probability of measuring every possible state of the system. Although there are thousands of such states, the resulting probability distribution yielded the correct state about 67% of the time. This fidelity is well above the threshold for genuine multipartite entanglement (generally considered to be about 50%).
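The fidelity figure can be made concrete: for a GHZ-type target state, fidelity is the overlap between the reconstructed density matrix and the ideal one, and values above 0.5 witness genuine multipartite entanglement. A small numpy sketch for 3 qubits (illustrative; the mock noise level is made up and the experiment used 10 qubits):

```python
import numpy as np

n = 3  # qubits (3 rather than 10 keeps the matrices small)
dim = 2 ** n

# Ideal GHZ state: (|000> + |111>) / sqrt(2).
ghz = np.zeros(dim)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)
rho_ideal = np.outer(ghz, ghz)

# A mock "reconstructed" state: the ideal GHZ mixed with white noise.
noise = 0.3
rho_meas = (1 - noise) * rho_ideal + noise * np.eye(dim) / dim

# For a pure target state, fidelity reduces to <ghz| rho |ghz>.
fidelity = ghz @ rho_meas @ ghz
print(round(fidelity, 4))  # → 0.7375, above the 0.5 entanglement threshold
```

Full tomography reconstructs rho_meas from measurement statistics rather than assuming a noise model, but the fidelity computation against the ideal state is the same.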

 

In the future, the physicists' goal is to develop a quantum simulator that could simulate the behavior of small molecules and other quantum systems, which would allow for a more efficient analysis of these systems compared to what is possible with classical computers.


Programming cells with computer-like logic


Synthetic biologists are converting microbial cells into living devices that are able to perform useful tasks ranging from the production of drugs, fine chemicals and biofuels to detecting disease-causing agents and releasing therapeutic molecules inside the body. To accomplish this, they fit cells with artificial molecular machinery that can sense stimuli such as toxins in the environment, metabolite levels or inflammatory signals. Much like electronic circuits, these synthetic biological circuits can process information and make logic-guided decisions. Unlike their electronic counterparts, however, biological circuits must be fabricated from the molecular components that cells can produce, and they must operate in the crowded and ever-changing environment within each cell.

 

Similar to how computer scientists use logical language to have their programs make accurate AND, OR and NOT decisions towards a final goal, “Ribocomputing Devices” (stylized here in yellow) developed by a team at the Wyss Institute can now be used by synthetic biologists to sense and interpret multiple signals in cells and logically instruct their ribosomes (stylized in blue and green) to produce different proteins.

 

So far, synthetic biological circuits can only sense a handful of signals, giving them an incomplete picture of conditions in the host cell. They are also built out of several moving parts in the form of different types of molecules, such as DNAs, RNAs, and proteins, that must find, bind and work together to sense and process signals. Identifying molecules that cooperate well with one another is difficult and makes development of new biological circuits a time-consuming and often unpredictable process.

 

As reported in Nature, a team at Harvard’s Wyss Institute for Biologically Inspired Engineering is now presenting an all-in-one solution that imbues a molecule of ‘ribo’ nucleic acid or RNA with the capacity to sense multiple signals and make logical decisions to control protein production with high precision. The study’s approach resulted in a genetically encodable RNA nano-device that can perform an unprecedented 12-input logic operation to accurately regulate the expression of a fluorescent reporter protein in E. coli bacteria only when encountering a complex, user-prescribed profile of intra-cellular stimuli. Such programmable nano-devices may allow researchers to construct more sophisticated synthetic biological circuits, enabling them to analyze complex cellular environments efficiently and to respond accurately.
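The decision such a device makes can be written down like any boolean expression. As a hypothetical illustration of a multi-input AND/OR/NOT gate profile (the actual 12-input expression in the paper is more elaborate):

```python
def express_reporter(inputs):
    """Toy gate circuit: produce the reporter protein only for the
    prescribed profile of signals.

    Hypothetical expression: (A AND B) OR (C AND NOT D).
    A ribocomputing device encodes such logic in RNA base-pairing.
    """
    a, b, c, d = (inputs[k] for k in "ABCD")
    return (a and b) or (c and not d)

print(express_reporter({"A": True, "B": True, "C": False, "D": False}))  # → True
print(express_reporter({"A": False, "B": True, "C": True, "D": True}))   # → False
```

In the cell, each input is the presence or absence of a trigger RNA, and "returning True" means the ribosome is allowed to translate the fluorescent reporter.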

 

“We demonstrate that an RNA molecule can be engineered into a programmable and logically acting ‘Ribocomputing Device’,” said Wyss Institute Core Faculty member Peng Yin, Ph.D., who led the study and is also Professor of Systems Biology at Harvard Medical School. “This breakthrough at the interface of nanotechnology and synthetic biology will enable us to design more reliable synthetic biological circuits that are much more conscious of the influences in their environment relevant to specific goals.”


New way to write magnetic info could pave the way for hardware neural networks


Researchers have shown how to write any magnetic pattern desired onto nanowires, which could help computers mimic how the brain processes information. Much current computer hardware, such as hard drives, uses magnetic memory devices. These rely on magnetic states -- the direction microscopic magnets are pointing -- to encode and read information.

 

Exotic magnetic states -- such as a point where three south poles meet -- represent complex systems. These may act in a similar way to many complex systems found in nature, such as the way our brains process information. Computing systems that are designed to process information in similar ways to our brains are known as 'neural networks'. There are already powerful software-based neural networks -- for example one recently beat the human champion at the game 'Go' -- but their efficiency is limited as they run on conventional computer hardware.

 

Now, researchers from Imperial College London have devised a method for writing magnetic information in any pattern desired, using a very small magnetic probe called a magnetic force microscope. With this new writing method, arrays of magnetic nanowires may be able to function as hardware neural networks -- potentially more powerful and efficient than software-based approaches. The team, from the Departments of Physics and Materials at Imperial, demonstrated their system by writing patterns that have never been seen before. They published their results today in Nature Nanotechnology.

 

Dr Jack Gartside, first author from the Department of Physics, said: "With this new writing method, we open up research into 'training' these magnetic nanowires to solve useful problems. If successful, this will bring hardware neural networks a step closer to reality." As well as applications in computing, the method could be used to study fundamental aspects of complex systems, by creating magnetic states that are far from optimal (such as three south poles together) and seeing how the system responds.

Rescooped by Dr. Stefan Gruenwald from DNA and RNA research

Can data storage in DNA solve our massive data storage problem in the future?


The latest in high-density ultra-durable data storage has been perfected over billions of years by nature itself.

 

Now ‘Smoke on the Water’ is making history again. This September, it was one of the first items from the Memory Of the World archive to be stored in the form of DNA and then played back with 100% accuracy. The project was a joint effort between the University of Washington, Microsoft and Twist Bioscience, a San Francisco-based DNA manufacturing company.

 

The demonstration was billed as a ‘proof of principle’ – which is shorthand for successful but too expensive to be practical. At least for now. Many pundits predict it’s just a matter of time till DNA pips magnetic tape as the ultimate way to store data. It’s compact, efficient and resilient. After all, it has been tweaked over billions of years into the perfect repository for genetic information. It will never become obsolete, because as long as there is life on Earth, we will be interested in decoding DNA. “Nature has optimised the format,” says Twist Bioscience’s chief technology officer Bill Peck.

 

Players like Microsoft, IBM and Intel are showing signs of interest. In April, they joined other industry, academic and government experts at an invitation-only workshop (cosponsored by the U.S. Intelligence Advanced Research Projects Activity (IARPA)) to discuss the practical potential for DNA to solve humanity’s looming data storage crisis.

 

It’s a big problem that’s getting bigger by the minute. According to a 2016 IBM Marketing Cloud report, 90% of the data that exists today was created in just the past two years. Every day, we generate another 2.5 quintillion (2.5 × 10^18) bytes of information. It pours in from high-definition video and photos, Big Data from particle physics, genomic sequencing, space probes, satellites, and remote sensing; from think tanks, covert surveillance operations, and Internet tracking algorithms.

 

Right now all those bits and bytes flow into gigantic server farms, onto spinning hard drives or reels of state-of-the-art magnetic tape. These physical substrates occupy a lot of space. Compare this to DNA. The entire human genome, a code of three billion DNA base pairs, or in data speak, 3,000 megabytes, fits into a package that is invisible to the naked eye – the cell’s nucleus. A gram of DNA — the size of a drop of water on your fingertip — can store at least the equivalent of 233 computer hard drives weighing more than 150 kilograms. To store all the genetic information in a human body — 150 zettabytes — on tape or hard drives, you’d need a facility covering thousands, if not millions, of square feet.
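The density claim rests on the fact that DNA's four bases can each carry two bits. A minimal sketch of round-tripping bytes through a DNA alphabet (the real Microsoft/Twist pipeline adds error correction and avoids hard-to-synthesize sequences):

```python
BASES = "ACGT"  # two bits per nucleotide: 00->A, 01->C, 10->G, 11->T

def bytes_to_dna(data):
    """Encode each byte as four nucleotides, two bits at a time."""
    return "".join(BASES[(byte >> shift) & 0b11]
                   for byte in data for shift in (6, 4, 2, 0))

def dna_to_bytes(seq):
    """Decode groups of four nucleotides back into bytes."""
    bits = [BASES.index(ch) for ch in seq]
    return bytes((a << 6) | (b << 4) | (c << 2) | d
                 for a, b, c, d in zip(*[iter(bits)] * 4))

msg = b"Smoke on the Water"
encoded = bytes_to_dna(msg)
assert dna_to_bytes(encoded) == msg  # lossless round trip
print(len(msg), "bytes ->", len(encoded), "nucleotides")
```

At roughly 330 daltons per nucleotide, two bits weigh on the order of 10^-21 grams, which is where the drop-of-water storage figures come from.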

 

And then there’s durability. Of the current storage contenders, magnetic tape has the best lifespan, at about 10-20 years. Hard drives, CDs, DVDs and flash drives are less reliable, often failing within five to ten years. DNA has proven that it can survive thousands of years unscathed. In 2013, for example, the genome of an early horse relative was reconstructed from DNA preserved in a 700,000-year-old bone fragment found in the Alaskan permafrost.


Via Integrated DNA Technologies

Computers and Cosmos Code Help Probe and Explain Space Oddities

XSEDE ECSS program helps optimize astrophysics code for Knights Landing processors on Stampede2 supercomputer.

 

Black holes make for a great space mystery. They're so massive that nothing, not even light, can escape a black hole once it gets close enough. A great mystery for scientists is that there's evidence of powerful jets of electrons and protons that shoot out of the top and bottom of some black holes. Yet no one knows how these jets form.

 

Computer code called Cosmos now fuels supercomputer simulations of black hole jets and is starting to reveal the mysteries of black holes and other space oddities.

 

"Cosmos, the root of the name, came from the fact that the code was originally designed to do cosmology. It's morphed into doing a broad range of astrophysics," explained Chris Fragile, a professor in the Physics and Astronomy Department of the College of Charleston. Fragile helped develop the Cosmos code in 2005 while working as a post-doctoral researcher at the Lawrence Livermore National Laboratory (LLNL), along with Steven Murray (LLNL) and Peter Anninos (LLNL).

 

Fragile pointed out that Cosmos provides astrophysicists an advantage because it has stayed at the forefront of general relativistic magnetohydrodynamics (MHD). MHD simulations, which model the magnetism of electrically conducting fluids such as black hole jets, add a layer of understanding but are notoriously difficult for even the fastest supercomputers.

 

"The other area that Cosmos has always had some advantage in as well is that it has a lot of physics packages in it," continued Fragile. "This was Peter Anninos' initial motivation, in that he wanted one computational tool where he could put in everything he had worked on over the years." Fragile listed some of the packages that include chemistry, nuclear burning, Newtonian gravity, relativistic gravity, and even radiation and radiative cooling. "It's a fairly unique combination," Fragile said.

 

The current iteration of the code is CosmosDG, which utilizes discontinuous Galerkin methods. "You take the physical domain that you want to simulate," explained Fragile, "and you break it up into a bunch of little, tiny computational cells, or zones. You're basically solving the equations of fluid dynamics in each of those zones." CosmosDG has achieved a much higher order of accuracy than ever before, according to results published in the Astrophysical Journal in August 2017.
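The cell-by-cell update Fragile describes is the starting point of any grid-based fluid code. CosmosDG uses high-order discontinuous Galerkin elements; as a far simpler sketch of the same idea, here is a first-order finite-volume update for 1D advection (an illustrative toy, not the CosmosDG scheme):

```python
import numpy as np

# Break a 1D periodic domain into cells and advect a density profile
# at speed c using a first-order upwind update.
ncells, c = 200, 1.0
dx = 1.0 / ncells
dt = 0.5 * dx / c  # CFL-stable time step

x = (np.arange(ncells) + 0.5) * dx
rho = np.exp(-200 * (x - 0.3) ** 2)  # initial Gaussian pulse
total0 = rho.sum()

for _ in range(100):
    flux = c * rho                                   # upwind flux per cell
    rho = rho - dt / dx * (flux - np.roll(flux, 1))  # periodic boundaries

# Each cell only exchanges flux with its neighbors, so total mass
# is conserved to machine precision.
print(np.isclose(rho.sum(), total0))  # → True
```

Discontinuous Galerkin methods replace the single value per cell with a small polynomial per cell, which is what buys the higher order of accuracy mentioned above.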

 

Since 2008, the Texas Advanced Computing Center (TACC) has provided computational resources for the development of the Cosmos code—about 6.5 million supercomputer core hours on the Ranger system and 3.6 million core hours on the Stampede system. XSEDE, the eXtreme Science and Engineering Discovery Environment funded by the National Science Foundation, awarded Fragile's group with the allocation.

 

Fragile has recently enlisted the help of XSEDE ECSS to optimize the CosmosDG code for Stampede2, a supercomputer capable of 18 petaflops and the flagship of TACC at The University of Texas at Austin. Stampede2 features 4,200 Knights Landing (KNL) nodes and 1,736 Intel Xeon Skylake nodes.

 

"We're living in a golden age of astronomy," Fragile said, referring to the wealth of knowledge generated by space telescopes from Hubble to the upcoming James Webb Space Telescope, by land-based telescopes such as Keck, and more. Computing has helped support the success of astronomy, Fragile said. "What we do in modern-day astronomy couldn't be done without computers," he concluded. "The simulations that I do are two-fold. They're to help us better understand the complex physics behind astrophysical phenomena. But they're also to help us interpret and predict observations that either have been, can be, or will be made in astronomy."

Rescooped by Dr. Stefan Gruenwald from Science And Wonder

New Virtual Reality Experience Drops You Into Hiroshima Right After The Atom Bomb Explosion


On August 6, 1945, Shigeru Orimen traveled from his rural home near Itsukaichi-cho to Hiroshima, where he was one of nearly 27,000 students working to prepare the city for impending U.S. airstrikes. For lunch that day, he had brought soybeans, sautéed potatoes and strips of daikon.

 

When the atomic bomb fell on Hiroshima at 8:16 a.m., Shigeru was among the nearly 7,200 students who perished. Three days later, his mother Shigeko identified his body by his lunch box; the food inside had been charred to carbon, but the box itself remained intact.

 

Today, his lunch box and Shigeko’s testimony are part of the archives at the Hiroshima Peace Memorial Museum. The object and its story left a haunting impression on filmmakers Saschka Unseld and Gabo Arora who co-directed a new virtual reality experience titled The Day the World Changed. Created in partnership with Nobel Media to commemorate the work of the International Campaign to Abolish Nuclear Weapons (the winner of the 2017 Nobel Peace Prize), the film premiered at the Tribeca Film Festival last week.

 

The immersive experience begins with an explanation of the genesis, development, and deployment of the atomic bomb and then moves to a second chapter focused on the aftermath of the attack. Audience members can walk through the ruins of the city and examine artifacts from the bombing, including Shigeru’s lunch box. In the final chapter, the piece shifts toward the present, describing the frenetic race to create new atomic weapons and the continued threat of nuclear war.

 

It’s hardly the only piece at Tribeca to focus on difficult topics: Among the festival’s 34 immersive titles are pieces that grapple with the legacy of racism, the threat of climate change, AIDS and the ongoing crisis in Syria. Neither is it the first VR installation to achieve popular acclaim. Last November, filmmaker Alejandro G. Iñárritu received an Oscar at the Academy’s Governors Awards for his virtual reality installation CARNE y ARENA, which captures the experience of migrants crossing the U.S.-Mexico border.

 

The Day the World Changed differs from these installations in a critical respect: Much of the material already exists in an archival format. Video testimony and radiated relics from the day of devastation come from the museum’s archives and photogrammetry (the creation of 3D models using photography) allowed for digital reproductions of surviving sites. In this sense, the piece shares more with the interpretive projects led by traditional documentarians and historians than the fantastical or gamified recreations that most associate with virtual reality.

 

What makes it different, Arora and Unseld say, is that the storytelling possibilities of immersive technology let viewers experience previously inaccessible locations (for example, the inside of the Atomic Bomb Dome, the UNESCO World Heritage site that stood almost directly beneath the explosion and still stands) and engage with existing artifacts in a more visceral way. The future is exciting, though there is a certain tension given the national conversation about the dangers of technological manipulation. “You have to be very careful,” Arora says. “We think it’s important to figure out the grammar of VR and not just rely on an easy sort of way of horrifying people. Because that doesn’t last.”


Via LilyGiraud
Scooped by Dr. Stefan Gruenwald

FontCode: Hiding information in plain text, unobtrusively and across file types


Computer scientists at Columbia Engineering have invented FontCode, a new way to embed hidden information in ordinary text by imperceptibly changing, or perturbing, the shapes of fonts in text. FontCode creates font perturbations, using them to encode a message that can later be decoded to recover the message. The method works with most fonts and, unlike other text and document methods that hide embedded information, works with most document types, even maintaining the hidden information when the document is printed on paper or converted to another file type. The paper will be presented at SIGGRAPH in Vancouver, British Columbia, August 12-16.

 

"While there are obvious applications for espionage, we think FontCode has even more practical uses for companies wanting to prevent document tampering or protect copyrights, and for retailers and artists wanting to embed QR codes and other metadata without altering the look or layout of a document," says Changxi Zheng, associate professor of computer science and the paper's senior author.

 

Zheng created FontCode with his students Chang Xiao (PhD student) and Cheng Zhang MS'17 (now a PhD student at UC Irvine) as a text steganographic method that can embed text, metadata, a URL, or a digital signature into a text document or image, whether it's digitally stored or printed on paper. It works with common font families, such as Times Roman, Helvetica, and Calibri, and is compatible with most word processing programs, including Word and FrameMaker, as well as image-editing and drawing programs, such as Photoshop and Illustrator. Since each letter can be perturbed, the amount of information conveyed secretly is limited only by the length of the regular text. Information is encoded using minute font perturbations -- changing the stroke width, adjusting the height of ascenders and descenders, or tightening or loosening the curves in serifs and the bowls of letters like o, p, and b.
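The mechanism lends itself to a toy sketch. The real FontCode system uses carefully optimized perturbation codebooks plus error correction; the fragment below (all names illustrative, not from the paper) shows only the core bookkeeping: each character of the cover text is assigned one of a few visually near-identical glyph variants, and the sequence of variant indices carries the payload.

```python
# Toy version of FontCode's bookkeeping: each cover character is rendered
# as one of VARIANTS_PER_GLYPH nearly identical glyph variants, and the
# chosen variant index per character encodes hidden bits. The real system
# adds optimized perturbations and error-correcting codes; names here are
# illustrative, not from the paper.
VARIANTS_PER_GLYPH = 4            # 4 variants -> 2 hidden bits per character

def embed(cover_text: str, payload: bytes) -> list[int]:
    bits = ''.join(f'{b:08b}' for b in payload)
    if len(bits) > 2 * len(cover_text):
        raise ValueError("cover text too short for payload")
    bits = bits.ljust(2 * len(cover_text), '0')       # pad unused characters
    return [int(bits[2 * i:2 * i + 2], 2) for i in range(len(cover_text))]

def extract(variant_indices: list[int], payload_len: int) -> bytes:
    bits = ''.join(f'{v:02b}' for v in variant_indices)
    return bytes(int(bits[8 * i:8 * i + 8], 2) for i in range(payload_len))

indices = embed("steganography in plain sight", b"hi")
print(extract(indices, 2))  # -> b'hi'
```

As the article notes, capacity scales with the length of the cover text: here every character carries two bits, so the hidden message is limited only by how much regular text is available.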

 

"Changing any letter, punctuation mark, or symbol into a slightly different form allows you to change the meaning of the document," says Xiao, the paper's lead author. "This hidden information, though not visible to humans, is machine-readable just as barcodes and QR codes are instantly readable by computers. However, unlike barcodes and QR codes, FontCode doesn't mar the visual aesthetics of the printed material, and its presence can remain secret."

 

Data hidden using FontCode can be extremely difficult to detect. Even if an attacker detects font changes between two texts, which is highly unlikely given the subtlety of the perturbations, it simply isn't practical to scan every file coming and going within a company.

Rescooped by Dr. Stefan Gruenwald from Systems Theory

Serious quantum computers are finally here. What are we going to do with them?


Hello, quantum world! Inside a small laboratory in lush countryside about 50 miles north of New York City, an elaborate tangle of tubes and electronics dangles from the ceiling. This mess of equipment is a computer. Not just any computer, but one on the verge of passing what may, perhaps, go down as one of the most important milestones in the history of the field.

 

Quantum computers promise to run calculations far beyond the reach of any conventional supercomputer. They might revolutionize the discovery of new materials by making it possible to simulate the behavior of matter down to the atomic level. Or they could upend cryptography and security by cracking otherwise invincible codes. There is even hope they will supercharge artificial intelligence by crunching through data more efficiently. 

 

Yet only now, after decades of gradual progress, are researchers finally close to building quantum computers powerful enough to do things that conventional computers cannot. It’s a landmark somewhat theatrically dubbed “quantum supremacy.” Google has been leading the charge toward this milestone, while Intel and Microsoft also have significant quantum efforts. And then there are well-funded startups including Rigetti Computing, IonQ, and Quantum Circuits.

 

“Nature is quantum, goddamn it! So if we want to simulate it, we need a quantum computer.” No other contender can match IBM’s pedigree in this area, though. Starting 50 years ago, the company produced advances in materials science that laid the foundations for the computer revolution. Which is why, last October, I found myself at IBM’s Thomas J. Watson Research Center to try to answer these questions: What, if anything, will a quantum computer be good for? And can a practical, reliable one even be built?


Via Ben van Lier
Rescooped by Dr. Stefan Gruenwald from Social Foraging

'Memtransistor' Forms Foundational Circuit Element to Neuromorphic Computing


Computers that operate more like the human brain than computers—a field sometimes referred to as neuromorphic computing—have promised a new era of powerful computing.

 

While this all seems promising, one big shortcoming of neuromorphic computing has been that it fails to mimic the brain in a very important way: in the brain, each neuron connects to roughly a thousand others through synapses, the junctions across which electrical signals pass between neurons. This poses a problem because a conventional transistor has only a handful of terminals, hardly an accommodating architecture for that kind of fan-out.

 

Now researchers at Northwestern University, led by Mark Hersam, have developed a new device that combines memristors—two-terminal non-volatile memory devices based on resistance switching—with transistors to create what Hersam and his colleagues have dubbed a “memtransistor” that performs both memory storage and information processing.

 

This most recent research builds on work that Hersam and his team conducted back in 2015 in which the researchers developed a three-terminal, gate-tunable memristor that operated like a kind of synapse.

 

While this work was recognized as mimicking the low-power computing of the human brain, critics didn’t really believe that it was acting like a neuron since it could only transmit a signal from one artificial neuron to another. This was far short of a human brain that is capable of making tens of thousands of such connections.

 

“Traditional memristors are two-terminal devices, whereas our memtransistors combine the non-volatility of a two-terminal memristor with the gate-tunability of a three-terminal transistor,” said Hersam to IEEE Spectrum. “Our device design accommodates additional terminals, which mimic the multiple synapses in neurons.”


Via Ashish Umre
Félix Santamaria's curator insight, March 16, 5:15 AM
Ever closer?
Scooped by Dr. Stefan Gruenwald

Cloud quantum computing calculates nuclear binding energy of deuterium


Cloud quantum computing has been used to calculate the binding energy of the deuterium nucleus – the first-ever such calculation done using quantum processors at remote locations. Nuclear physicists led by Eugene Dumitrescu at Oak Ridge National Laboratory in the US used publicly available software to achieve the remote operation of two distant quantum computers. Their work could lead to new opportunities for scientists in many fields who want to use quantum simulations to calculate properties of matter.

 

In previous research, scientists have worked alongside quantum computer hardware developers to create quantum simulations. These typically use between two and six qubits to calculate a quantum property of matter, calculations that can be extremely time-consuming on a conventional computer. As the number of qubits available in quantum computers increases, the hope is that quantum simulations will be able to do calculations well beyond the reach of even the most powerful conventional computers. However, doing simulations alongside quantum computer specialists can be an inefficient process, and the research would be much more streamlined if scientists could operate quantum computers themselves.
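The principle behind such a calculation, the variational method, can be sketched classically. The 2×2 Hamiltonian below is an illustrative stand-in, not the actual deuteron Hamiltonian from the ORNL work; on a quantum processor the energy expectation would be estimated from repeated measurements on qubits rather than by matrix algebra.

```python
import numpy as np

# Classical sketch of the variational principle behind such quantum
# simulations: minimize E(theta) = <psi(theta)| H |psi(theta)> over a
# one-parameter trial state. H is an illustrative 2x2 Hermitian matrix,
# NOT the deuteron Hamiltonian from the ORNL calculation.
H = np.array([[ 0.7, -0.5],
              [-0.5, -1.2]])

def energy(theta):
    psi = np.array([np.cos(theta), np.sin(theta)])   # normalized trial state
    return psi @ H @ psi

thetas = np.linspace(0.0, np.pi, 2001)
e_min = min(energy(t) for t in thetas)               # best variational energy
exact = np.linalg.eigvalsh(H).min()                  # exact ground-state energy
print(f"variational minimum {e_min:.4f} vs exact {exact:.4f}")
```

The variational bound guarantees the scanned minimum can only approach the true ground-state energy from above, which is what makes the method robust enough to run on today's noisy quantum hardware.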

Scooped by Dr. Stefan Gruenwald

At least three billion computer chips are vulnerable to a security flaw


Companies are rushing out software fixes for Chipmageddon 1.0. Tech companies are still working overtime on patching two critical vulnerabilities in computer chips that were revealed this week. The flaws, dubbed “Meltdown” and “Spectre,” could let hackers get hold of passwords, encryption keys, and other sensitive information from a computer’s core memory via malicious apps running on devices.

 

How many chips are affected? The number is something of a moving target. But from the information released so far by tech companies and estimates from chip industry analysts, it looks as if at least three billion chips in computers, tablets, and phones now in use are vulnerable to attack by Spectre, which is the more widespread of the two flaws.

 

Apple says all its Mac and iOS products are affected, with the exception of the Apple watch. That’s a billion or so devices. Gadgets powered by Google’s Android operating system number more than two billion, the company said last year. Linley Gwennap of the Linley Group, which tracks the chip industry, thinks the security flaws could affect about 500 million of them. As practically all smartphones run on iOS and Android, this pretty much covers the mobile-device landscape.

 

Then there are PCs and servers. These are largely powered by chips from Intel, whose share price has been battered since news of the flaws emerged. Its chief U.S. competitor, AMD, which has been gaining ground on Intel, said in a blog post that its chips are not vulnerable to Meltdown and there is a “near zero risk” from one variant of Spectre and zero risk from another.

 

Still, if some level of threat from Spectre exists, AMD chips merit inclusion. Between them Intel and AMD account for over a billion PC and server chips. In addition, there are a host of smaller chipmakers such as IBM, which has said at least some of its chips are affected. This brings the total to around three billion processors, though this could change as more information emerges. 
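The arithmetic behind the "around three billion" figure can be tallied from the estimates quoted above:

```python
# Back-of-envelope tally of Spectre-vulnerable processors, in billions,
# using only the figures quoted in the article above.
estimates = {
    "Apple Macs and iOS devices (minus Apple Watch)": 1.0,
    "Android devices (Linley Group estimate)":        0.5,
    "Intel and AMD PC/server chips":                  1.0,
}
subtotal = sum(estimates.values())
print(subtotal)  # 2.5 -- "around three billion" once IBM and smaller vendors are added
```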

 
Scooped by Dr. Stefan Gruenwald

Project MORPHEUS: DARPA-funded ‘unhackable’ computer could avoid future flaws like Spectre and Meltdown


A University of Michigan (U-M) team has announced plans to develop an “unhackable” computer, funded by a new $3.6 million grant from the Defense Advanced Research Projects Agency (DARPA). The goal of the project, called MORPHEUS, is to design computers that avoid the vulnerabilities of most current microprocessors, such as the recently announced Spectre and Meltdown flaws. The $50 million DARPA System Security Integrated Through Hardware and Firmware (SSITH) program aims to build security right into chips’ microarchitecture, instead of relying on software patches. The U-M grant is one of nine that DARPA has recently funded through SSITH.

 

Future-proofing

The idea is to protect against future threats that have yet to be identified. “Instead of relying on software Band-Aids to hardware-based security issues, we are aiming to remove those hardware vulnerabilities in ways that will disarm a large proportion of today’s software attacks,” said Linton Salmon, manager of DARPA’s System Security Integrated Through Hardware and Firmware program.

 

Under MORPHEUS, the location of passwords would constantly change, for example. And even if an attacker were quick enough to locate the data, secondary defenses in the form of encryption and domain enforcement would throw up additional roadblocks.
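DARPA and U-M have not published MORPHEUS's internals, so any code can only be an analogy. As a software-level analogy of "the location of passwords would constantly change," the sketch below (all names hypothetical) stores a secret at a random slot and periodically relocates it, so a leaked address quickly goes stale; MORPHEUS would do this in hardware, beneath anything a software patch touches.

```python
import secrets

# Software analogy (only an analogy) of a moving-target defense: the
# secret lives at a random "address" and is periodically relocated, so a
# leaked address quickly goes stale. All names here are hypothetical.
SLOTS = 1 << 16

class MovingSecret:
    def __init__(self, secret: bytes):
        self._store = {}
        self._addr = None
        self._place(secret)

    def _place(self, secret: bytes):
        if self._addr is not None:
            del self._store[self._addr]           # scrub the old location
        self._addr = secrets.randbelow(SLOTS)     # pick a fresh random slot
        self._store[self._addr] = secret

    def rekey(self):
        """Relocate the secret; any previously leaked address is now stale."""
        self._place(self._store[self._addr])

    def read(self) -> bytes:
        return self._store[self._addr]

s = MovingSecret(b"hunter2")
leaked_addr = s._addr   # what an attacker might have learned
s.rekey()               # ...and what is now useless to them
print(s.read())         # -> b'hunter2'
```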

 

More than 40 percent of the “software doors” that hackers have available to them today would be closed if researchers could eliminate seven classes of hardware weaknesses, according to DARPA.

 

DARPA is aiming to render these attacks impossible within five years. “If developed, MORPHEUS could do it now,” said Todd Austin, U-M professor of computer science and engineering, who leads the project. Researchers at The University of Texas and Princeton University are also working with U-M.

Scooped by Dr. Stefan Gruenwald

Alpha Zero’s “Alien Way” of Playing Chess Shows the Power, and the Peculiarity, of AI


Details of how Alpha Zero played are available here

 

The latest AI program developed by DeepMind is not only brilliant and remarkably flexible—it’s also quite weird.

 

DeepMind published a paper this week describing a game-playing program it developed that proved capable of mastering chess and the Japanese game shogi, having already mastered the game of Go.

Demis Hassabis, the founder and CEO of DeepMind and an expert chess player himself, presented further details of the system, called Alpha Zero, at an AI conference in California on Thursday. The program often made moves that would seem unthinkable to a human chess player.

“It doesn’t play like a human, and it doesn’t play like a program,” Hassabis said at the Neural Information Processing Systems (NIPS) conference in Long Beach. “It plays in a third, almost alien, way.”

Besides showing how brilliant machine-learning programs can be at a specific task, this shows that artificial intelligence can be quite different from the human kind. As AI becomes more commonplace, we might need to be conscious of such “alien” behavior.


 

Alpha Zero is a more general version of AlphaGo, the program developed by DeepMind to play the board game Go. In 24 hours, Alpha Zero taught itself to play chess well enough to beat one of the best existing chess programs around.

What’s also remarkable, though, Hassabis explained, is that it sometimes makes seemingly crazy sacrifices, like offering up a bishop and queen to exploit a positional advantage that led to victory. Such sacrifices of high-value pieces are normally rare. In another case the program moved its queen to the corner of the board, a very bizarre trick with a surprising positional value. “It’s like chess from another dimension,” Hassabis said.

Hassabis speculates that because Alpha Zero teaches itself, it benefits from not following the usual approach of assigning value to pieces and trying to minimize losses. “Maybe our conception of chess has been too limited,” he said. “It could be an important moment for chess. We can graft it into our own play.”
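The "teaches itself" loop can be illustrated on a much smaller game. The sketch below is emphatically not Alpha Zero (which couples a deep network with Monte Carlo tree search); it is a tabular value function trained purely by self-play on a trivial take-away game, where players alternately remove one or two stones and whoever takes the last stone wins. It still shows the key property: with no human examples, self-play alone recovers the known optimal analysis, namely that piles that are multiples of three are lost for the player to move.

```python
import random

# Tabular self-play learning on a trivial game, illustrating (only in
# spirit) how Alpha Zero learns with no external input: play against
# yourself, and bootstrap each position's value from its successor's.
random.seed(0)
N = 12                      # starting pile size
V = {0: -1.0}               # V[p] = value for the player to move; no stones left = you lost
LR, EPSILON = 0.1, 0.1

def value(pile):
    return V.setdefault(pile, 0.0)

for _ in range(20_000):
    pile = N
    while pile > 0:
        moves = [m for m in (1, 2) if m <= pile]
        if random.random() < EPSILON:                     # occasional exploration
            move = random.choice(moves)
        else:                                             # greedy self-play
            move = max(moves, key=lambda m: -value(pile - m))
        v = value(pile)
        V[pile] = v + LR * (-value(pile - move) - v)      # bootstrap from successor
        pile -= move

losing = [p for p in range(1, N + 1) if value(p) < 0]
print(losing)  # self-play discovers the multiples of three
```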

 

Original Paper from DeepMind

Scooped by Dr. Stefan Gruenwald

AlphaGo Zero: Starting from scratch to becoming the best Go player in the world, all within 40 days

DeepMind's Professor David Silver describes AlphaGo Zero, the latest evolution of AlphaGo, the first computer program to defeat a world champion at Go. There have been far too many fear-mongering news articles about the latest version of DeepMind's AlphaGo. Let's set the record straight: AlphaGo is an incredible technology, and it's not terrifying at all. I'll go over the technical details of how AlphaGo really works: a mixture of deep learning and reinforcement learning.

 

Something amazing has happened. A couple of years ago, we closely followed the progress of AlphaGo, the distributed DeepMind algorithm which defeated Lee Sedol in four out of five games of the ancient board game Go. It has since been defeated by a variant, AlphaGo Zero, which is considerably stronger: after 20 days of training, it was able to win 100 out of 100 games against the version of AlphaGo which played against Sedol. After a further 20 days of training, it won 89 out of 100 games against a stronger instantiation of AlphaGo, namely the one which defeated the world champion Ke Jie.

 

However, its superiority over the previous algorithm isn’t the most interesting aspect. What makes it interesting is that, unlike AlphaGo which both trained on human games and made use of hardcoded features (such as ‘liberties’), AlphaGo Zero is remarkably simple:

  • The algorithm has no external inputs, learning only from games against itself;
  • The input to the neural network consists of just 17 binary feature planes: one encoding which side is to move, eight indicating the positions of white stones over the last eight board states, and eight indicating the positions of black stones. (Storing this short history is necessary because of the ko rule.)
  • Instead of separate policy and value networks, the algorithm uses only one neural network;
  • Monte Carlo rollouts are ditched in favour of a feedback loop where the tree search evolves together with the network.
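The 17-plane input described above can be assembled in a few lines. The board history here is random data purely to illustrate the shapes, and the layout follows the article's description (turn parity plus eight white-stone and eight black-stone indicator planes), not DeepMind's exact implementation.

```python
import numpy as np

# Assemble the 17 input planes described above: 1 plane for the side to
# move plus 8 white-stone and 8 black-stone indicator planes over the
# last eight board states. The board history is random data, purely to
# illustrate the shapes involved.
rng = np.random.default_rng(0)
history = rng.integers(0, 3, size=(8, 19, 19))            # 0 empty, 1 white, 2 black

to_move      = np.ones((1, 19, 19), dtype=np.float32)     # 1.0 = white to move
white_planes = (history == 1).astype(np.float32)
black_planes = (history == 2).astype(np.float32)

net_input = np.concatenate([to_move, white_planes, black_planes])
print(net_input.shape)  # (17, 19, 19)
```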

 

Read the Nature paper for more details. AlphaGo Zero was trained on just four tensor processing units (TPUs), which are fast hardware implementations of fixed-point limited-precision linear algebra. This is much more efficient (but less numerically precise) than a GPU, which is in turn much more efficient (but less flexible) than a CPU.

 

Find more detailed information here.

Scooped by Dr. Stefan Gruenwald

Quantum Internet goes Hybrid


In a recent study, published in Nature, ICFO researchers Nicolas Maring, Pau Farrera, Dr. Kutlu Kutluer, Dr. Margherita Mazzera, and Dr. Georg Heinze led by ICREA Prof. Hugues de Riedmatten, have achieved an elementary "hybrid" quantum network link and demonstrated for the first time photonic quantum communication between two very distinct quantum nodes placed in different laboratories, using a single photon as information carrier.

 

Today, quantum information networks are ramping up to become a disruptive technology that will provide radically new capabilities for information processing and communication. Recent research suggests that this quantum network revolution might be just around the corner.

 

The key elements of a Quantum Information Network are quantum nodes, that store and process the information, made up of matter systems like cold atomic gases or doped solids, among others, and communicating particles, mainly photons. While photons seem to be perfect information carriers, there is still uncertainty as to which matter system could be used as network node, as each system provides different functionalities. Therefore, the implementation of a hybrid network has been proposed, searching to combine the best capabilities of different material systems.

 

Past studies have documented reliable transfers of quantum information between identical nodes, but this is the first time this has ever been achieved with a "hybrid" network of nodes. The ICFO researchers have been able to come up with a solution to making a hybrid quantum network work and solve the challenge of a reliable transfer of quantum states between different quantum nodes via single photons. A single photon needs to interact strongly and in a noise-free environment with the heterogeneous nodes or matter systems, which generally function at different wavelengths and bandwidths. As Nicolas Maring states "it's like having nodes speaking in two different languages. In order for them to communicate, it is necessary to convert the single photon's properties so it can efficiently transfer all the information between these different nodes."

 

How did they solve the problem?

In their study, the ICFO researchers used two very distinct quantum nodes: the emitting node was a laser-cooled cloud of Rubidium atoms and the receiving node a crystal doped with Praseodymium ions. From the cold gas, they generated a quantum bit (qubit) encoded in a single photon with a very-narrow bandwidth and a wavelength of 780 nm. They then converted the photon to the telecommunication's wavelength of 1552 nm to demonstrate that this network could be completely compatible with the current telecom C-band range. Subsequently, they sent it through an optical fiber from one lab to the other. Once in the second lab, the photon's wavelength was converted to 606 nm in order to interact correctly and transfer the quantum state to the receiving doped crystal node. Upon interaction with the crystal, the photonic qubit was stored in the crystal for approximately 2.5 microseconds and retrieved with very high fidelity.
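The three wavelengths in the experiment correspond to very different optical frequencies (the "different languages" in Maring's analogy), and 1552 nm sits inside the telecom C-band of roughly 1530-1565 nm. A quick conversion via ν = c/λ:

```python
# Optical frequencies (nu = c / lambda) for the three wavelengths used in
# the experiment. 1552 nm falls inside the telecom C-band (~1530-1565 nm),
# which is what makes the intermediate step fiber-compatible.
C = 299_792_458  # speed of light, m/s

wavelengths_nm = {
    "rubidium emission":    780,
    "telecom C-band link": 1552,
    "praseodymium memory":  606,
}
freq_thz = {name: C / (nm * 1e-9) / 1e12 for name, nm in wavelengths_nm.items()}
for name, f in freq_thz.items():
    print(f"{name}: {f:.1f} THz")
```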

The results of the study have shown that two very different quantum systems can be connected and can communicate by means of a single photon. As ICREA Prof at ICFO Hugues de Riedmatten comments, "being able to connect quantum nodes with very different functionalities and capabilities and transmitting quantum bits by means of single photons between them represents an important milestone in the development of hybrid quantum networks." The ability to perform back- and forth-conversion of photonic qubits at the telecom C-band wavelength shows that these systems would be completely compatible with the current telecom networks.

Scooped by Dr. Stefan Gruenwald

Apple's VR Project: Apple's Secret Work on Virtual and Augmented Reality


Apple is investigating multiple ways virtual and augmented reality could be implemented into future iOS devices or new hardware products.

 

Apple has been exploring virtual reality and augmented reality technologies for more than 10 years based on patent filings, but with virtual and augmented reality exploding in popularity with the launch of ARKit, Apple's dabbling may be growing more serious and could lead to an actual dedicated AR/VR product in the not-too-distant future.

 

Apple is rumored to have a secret research unit comprising hundreds of employees working on AR and VR, exploring ways the emerging technologies could be used in future Apple products. VR/AR hiring has ramped up, and Apple has acquired multiple AR/VR companies, suggesting something is afoot in Cupertino.

 

There are dozens of possibilities for VR/AR technology in Apple products, and in 2017, Apple is betting big on both AR and VR. VR support is included in Metal 2 in macOS High Sierra, and in iOS 11, Apple has developed an ARKit API that lets developers create impressive AR-based apps and games with little effort. Along with software support for AR/VR, Apple is said to be working on hardware, with the focus currently on an augmented reality headset or "smart glasses." According to rumors, Apple is developing an augmented reality headset with a dedicated display, a built-in processor, and a new "rOS" or reality operating system. rOS is said to be based on iOS, the operating system that runs on the iPhone. For the AR headset, Apple is developing a "system-on-a-package" chip similar to what's in the Apple Watch.

 

As for input methods, Apple is considering touch panels, voice activation, and head gestures, and a range of applications from mapping to texting are being prototyped. Virtual meeting rooms and 360-degree video playback are also concepts that are being explored.

 

As Apple prepares to launch its new AR headset, the company will introduce a new version of ARKit for developers, perhaps as soon as 2018. The new ARKit will be used to make AR games for multiple players, and it will reportedly introduce persistent tracking, a feature that remembers where a digital object was placed in a virtual space.

 

Apple may actually be experimenting with several augmented reality headset prototypes as engineers search for the "most compelling application" for such a device. At least one group at Apple is pushing for glasses that feature a 3D camera but no screen, similar to Snap's Spectacles.

 

Apple is aiming to finish work on its augmented reality headset by 2019, and a finished product could be ready to ship as soon as 2020. Apple's timeline is said to be "very aggressive," though, and could change, but the hardware still has a few years of development to go.

Scooped by Dr. Stefan Gruenwald

The future is quantum - New 50 qubit IBM quantum chip is the next quantum leap


Some of the most important technical advances of the 20th century were enabled by decades of fundamental scientific exploration, whose initial purpose was simply to extend human understanding. When Einstein discovered relativity, he had no idea that one day it would be an important part of modern navigation systems. Such is the story of quantum mechanics that will ultimately enable quantum computers.

 

The first IBM Q systems available online to clients will have a 20 qubit processor. This new device's advanced design, connectivity and packaging delivers industry-leading coherence times (the length of time available to perform quantum computations), double those of IBM's 5 and 16 qubit processors available to the public on the IBM Q experience.

 

IBM is also expanding its open-source quantum package QISKit (www.qiskit.org) with new functionalities and tools. The software development kit enables users to create quantum computer programs and execute them on one of IBM's real quantum processors or quantum simulators, and includes worked examples of quantum applications. Through the IBM Q experience, over 60,000 users have run more than 1.7 million quantum experiments and generated over 35 third-party research publications.

 

A 20-qubit machine has double the coherence time, at an average of 90 µs, compared to previous generations of quantum processors with an average of 50 µs. It is also designed to scale; the 50-qubit prototype has similar performance. Our goal with both the IBM Q experience, and our commercial program is to collaborate with our extended community of partners to accelerate the path to demonstrating a quantum advantage for solving real problems that matter.
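Why coherence time matters is a simple ratio: it bounds roughly how many sequential gates fit into one computation. The gate duration below is a typical published figure for superconducting qubits, an assumption for illustration, not a number from IBM's announcement.

```python
# Rough gate budget: coherence time divided by an assumed average gate
# duration. The 100 ns gate time is a typical superconducting-qubit
# figure assumed for illustration, not from IBM's announcement.
coherence_us = {"earlier 5/16-qubit processors": 50, "new 20-qubit processor": 90}
gate_ns = 100  # assumed average gate duration

budget = {name: int(t_us * 1000 / gate_ns) for name, t_us in coherence_us.items()}
for name, n_gates in budget.items():
    print(f"{name}: ~{n_gates} gates per coherence window")
```

Under this assumption, nearly doubling the coherence time nearly doubles the depth of circuit that can finish before the qubits decohere.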

 

Over the next year, IBM Q scientists will continue working to improve their devices, including the quality of qubits, circuit connectivity, and the error rates of operations. For example, within six months the IBM team was able to extend the coherence times of the 20 qubit processor to twice those of the publicly available 5 and 16 qubit systems on the IBM Q experience.

 

In addition to building working systems, IBM continues to grow its robust quantum computing ecosystem, including open-source software tools, applications for near-term systems, and educational and enablement materials for the quantum community. Users have registered from over 1,500 universities, 300 high schools, and 300 private institutions worldwide, many of whom access the IBM Q experience as part of their formal education. This form of open access and open research is critical for accelerated learning and implementation of quantum computing.

 

To augment this ecosystem of quantum researchers and application development, IBM rolled out earlier this year its QISKit (www.qiskit.org) project, an open-source software developer kit to program and run quantum computers. IBM Q scientists have now expanded QISKit to enable users to create quantum computing programs and execute them on one of IBM’s real quantum processors or quantum simulators available online. Recent additions to QISKit also include new functionality and visualization tools for studying the state of the quantum system, integration of QISKit with the IBM Data Science Experience, a compiler that maps desired experiments onto the available hardware, and worked examples of quantum applications.

 

Quantum computing promises to be able to solve certain problems – such as chemical simulations and types of optimization – that will forever be beyond the practical reach of classical machines. In a recent Nature paper, the IBM Q team pioneered a new way to look at chemistry problems using quantum hardware that could one day transform the way new drugs and materials are discovered. A Jupyter notebook that can be used to repeat the experiments that led to this quantum chemistry breakthrough is available in the QISKit tutorials. Similar tutorials are also provided that detail implementation of optimization problems such as MaxCut and Traveling Salesman on IBM’s quantum hardware.
