Amazing Science
Scooped by Dr. Stefan Gruenwald!

IBM scientists achieve storage memory breakthrough of 3 bits per cell

For the first time, scientists at IBM Research have demonstrated reliably storing 3 bits of data per cell using a relatively new memory technology known as phase-change memory (PCM).


The current memory landscape spans from venerable DRAM to hard disk drives to ubiquitous flash. But in the last several years PCM has attracted the industry's attention as a potential universal memory technology based on its combination of read/write speed, endurance, non-volatility and density. For example, PCM doesn't lose data when powered off, unlike DRAM, and the technology can endure at least 10 million write cycles, compared to an average flash USB stick, which tops out at 3,000 write cycles.


This research breakthrough provides fast and easy storage to capture the exponential growth of data from mobile devices and the Internet of Things.
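The jump from one to three bits per cell comes from distinguishing more resistance levels in the same cell: 3 bits requires 2^3 = 8 reliably separable levels. A minimal sketch of the idea follows; the level spacing, units, and read-back scheme are invented for illustration and are not IBM's actual drift-tolerant coding.

```python
# Illustrative sketch (not IBM's method): storing 3 bits per PCM cell
# means distinguishing 2**3 = 8 resistance levels in a single cell.

LEVELS = 8  # 3 bits per cell

def encode(bits):
    """Map a 3-bit tuple to a nominal resistance level (arbitrary units)."""
    value = bits[0] * 4 + bits[1] * 2 + bits[2]
    return value * 1000  # hypothetical 1000-unit spacing between levels

def decode(resistance):
    """Recover the 3 bits by snapping the read value to the nearest level."""
    value = min(range(LEVELS), key=lambda v: abs(v * 1000 - resistance))
    return ((value >> 2) & 1, (value >> 1) & 1, value & 1)

# A noisy read (here, drift of +300 units) still decodes correctly because
# it remains closer to its own level than to either neighbour.
assert decode(encode((1, 0, 1)) + 300) == (1, 0, 1)
```

The engineering difficulty is that real PCM resistance levels drift over time, so the decision thresholds must track the drift; packing 8 levels into the same resistance window leaves far less margin than 2 levels.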

Rescooped by Dr. Stefan Gruenwald from Conformable Contacts!

Computer Simulation: How An 8.0 Earthquake Would Rock Los Angeles


Earlier this week, an expert from the Southern California Earthquake Center spoke at a conference in Long Beach and called the southern San Andreas fault "locked, loaded and ready to go" for a major 'quake. He said that people should be preparing for something around a magnitude 8.0—that's larger than the devastating San Francisco earthquake back in 1906, the LA Times notes, and that one caused about 3,000 deaths from both the shaking and the fires that followed (which LA's former earthquake czar Lucy Jones has said we should be worried about).


What would an earthquake that big even look like? How would it move and where could we expect the shaking to be felt? For that, there's a video from the SCEC that shows where the movement would occur and how far away it could be felt in the event of a 'quake that starts near San Luis Obispo and moves south along the fault.

Via YEC Geo
Rescooped by Dr. Stefan Gruenwald from Systems Theory!

Scientists Create a 5-atom Quantum Computer That Could Make Today's Encryption Obsolete


MIT scientists have developed a 5-atom quantum computer, a step toward machines that could render traditional encryption obsolete. The creation of this five-atom quantum computer comes in response to a challenge posed in 1994 by Professor Peter Shor of MIT. Professor Shor developed a quantum algorithm that’s able to calculate a large number’s prime factors more efficiently than traditional computers, with 15 being the smallest figure to meaningfully demonstrate the algorithm.


The new system returned the correct factors with a confidence upwards of 99 percent. Professor Isaac Chuang of MIT said: “We show that Shor’s algorithm, the most complex quantum algorithm known to date, is realizable in a way where, yes, all you have to do is go in the lab, apply more technology, and you should be able to make a bigger quantum computer.”


Of course, this may be a little easier said than done. “It might still cost an enormous amount of money to build—you won’t be building a quantum computer and putting it on your desktop anytime soon—but now it’s much more an engineering effort, and not a basic physics question,” Chuang added.


Yet Chuang and his team are hopeful for the future of quantum computing, saying that they “foresee it being straightforwardly scalable, once the apparatus can trap more atoms and more laser beams can control the pulses…We see no physical reason why that is not going to be in the cards.”
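The number-theoretic skeleton of Shor's algorithm can be followed entirely classically for a number as small as 15; the period-finding step in the middle is the only part the quantum hardware accelerates. A sketch:

```python
from math import gcd

def shor_classical(N, a):
    """Classically trace the core of Shor's algorithm: find the period r of
    a^x mod N, then derive the factors of N from a^(r/2)."""
    # Period finding: smallest r > 0 with a^r = 1 (mod N).
    # This is the step a quantum computer performs exponentially faster.
    r = 1
    while pow(a, r, N) != 1:
        r += 1
    if r % 2:
        return None  # odd period: the algorithm retries with another a
    x = pow(a, r // 2, N)
    f1, f2 = gcd(x - 1, N), gcd(x + 1, N)
    if f1 in (1, N):
        return None  # trivial factors: retry with another a
    return tuple(sorted((f1, f2)))

print(shor_classical(15, 7))  # base a=7 has period 4 mod 15 -> factors (3, 5)
```

For N = 15 and a = 7 the period is 4, so 7^2 = 4 (mod 15) and gcd(3, 15), gcd(5, 15) yield the factors 3 and 5, which is exactly what the 5-atom processor reproduced.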


Via Ben van Lier
Scooped by Dr. Stefan Gruenwald!

How EVE Online's Project Discovery is remapping human biology


EVE Online isn't just a game about internet spaceships and sci-fi politics. Since March, developer CCP Games has been running Project Discovery – an initiative to help improve scientific understanding of the human body at the tiniest levels. Run in conjunction with the Human Protein Atlas and Massively Multiplayer Online Science, the project taps into EVE Online's greatest resource – its player base – to help categorise millions of proteins.


"We show them an image, and they can change the colour of it, putting green or red dyes on it to help them analyse it a little bit better," Linzi Campbell, game designer on Project Discovery, tells WIRED. "Then we also show them examples – cytoplasm is their favourite one! We show them what each of the different images should look like, and just get them to pick a few that they identify within the image. The identifications are scrambled each time, so it's not as simple as going 'ok, every time I just pick the one on the right' – they have to really think about it."


The analysis project is worked into EVE Online as a minigame, and works within the context of the game's lore. "We have this NPC organisation called the Drifters – they're like a mysterious entity in New Eden [EVE's interplanetary setting]," Campbell explains. "The players don't know an awful lot about the Drifters at the minute, so we disguised it within the universe as Drifter DNA that they were analysing. I think it just fit perfectly. We branded this as [research being done by] the Sisters of Eve, and they're analysing this Drifter DNA." 


The response has been tremendous. "We've had an amazing number of classifications, way over our greatest expectations," says Emma Lundberg, associate professor at the Human Protein Atlas. "Right now, after six weeks, we've had almost eight million classifications, and the players spent 16.2 million minutes playing the minigame. When we did the math, that translated – in Swedish measures – to 163 working years. It's crazy."


"We had a little guess, internally. We said if we get 40,000+ classifications a day, we're happy. If we get 100,000 per day, then we're amazed," Lundberg adds. "But when it peaked in the beginning, we had 900,000 classifications in one day. Now it's stabilised, but we're still getting around 200,000 a day, so everyone is mind-blown. We never expected it."

Scooped by Dr. Stefan Gruenwald!

Physics: Let's unite to build a quantum Internet


One of the greatest challenges for implementing a globally distributed quantum computer or a quantum internet is entangling nodes across the network. Qubits can then be teleported between any pair and processed by local quantum computers.
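Teleporting a qubit over a pre-shared entangled pair can be simulated exactly with statevectors. The sketch below enumerates all four measurement outcomes rather than sampling them, to show that the classical corrections recover the input state in every case:

```python
import numpy as np

# Minimal statevector sketch of one quantum-network primitive: teleporting
# a qubit over a shared Bell pair. Qubit order is |q0 q1 q2>, where q0 is
# the state to send and q1/q2 form the entangled pair linking two nodes.

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

def teleport(psi, m0, m1):
    """Teleport state psi, given measurement outcomes m0, m1 (each 0 or 1)."""
    bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # (|00>+|11>)/sqrt(2)
    state = np.kron(psi, bell)                  # 8-amplitude register
    state = np.kron(CNOT, I) @ state            # CNOT: q0 controls q1
    state = np.kron(H, np.kron(I, I)) @ state   # Hadamard on q0
    # Project onto the measured values of q0, q1 and keep q2's amplitudes.
    base = m0 * 4 + m1 * 2
    q2 = state[base:base + 2]
    q2 = q2 / np.linalg.norm(q2)
    # Corrections sent to the receiving node over an ordinary channel.
    if m1:
        q2 = X @ q2
    if m0:
        q2 = Z @ q2
    return q2

psi = np.array([0.6, 0.8j])                 # arbitrary normalized qubit
for m0 in (0, 1):
    for m1 in (0, 1):
        assert np.allclose(teleport(psi, m0, m1), psi)  # exact recovery
```

Note what the protocol buys a network: only two classical bits cross the link per teleported qubit, but the entangled pair must have been distributed (and faithfully stored) beforehand, which is why the memories discussed below matter.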


Ideally, nodes should be entangled either in pairs or by creating a large, multi-entangled 'cluster state' that is broadcast to all nodes. Cluster states that link thousands of nodes have already been created in the laboratory. The challenges are to demonstrate how they might be deployed over long distances, as well as how to store quantum states at the nodes and update them constantly using quantum codes.


Quantum networks require memories to store quantum information, ideally for hours — shielding it from unwanted interactions with the environment. Such memories are needed for quantum computing at nodes and also for the faithful, long-distance distribution of entanglement through quantum repeaters.


Quantum memories need to convert electromagnetic radiation into physical changes in matter with near-perfect read–write fidelity and at high capacity. 'Spin ensembles' represent one type of quantum memory. Ultracold atomic gases consisting of about one million atoms of rubidium can convert a single photon into a collective atomic excitation known as a spin wave. Storage times are approaching the 100 milliseconds required to transmit an optical signal across the world.
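The ~100 millisecond figure matches a simple light-travel-time estimate for optical fibre spanning half the globe; the fibre speed of roughly two-thirds of c is a standard approximation for silica fibre:

```python
# Sanity check on the ~100 ms storage requirement: the time for light to
# cross half the Earth through optical fibre (~2/3 of c in silica).

c = 299_792_458                   # speed of light in vacuum, m/s
fibre_speed = (2 / 3) * c         # ~2.0e8 m/s in silica fibre
half_circumference = 20_015_000   # metres: half of Earth's ~40,030 km girth

transit_time_ms = half_circumference / fibre_speed * 1000
print(f"{transit_time_ms:.0f} ms")  # ~100 ms
```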


Solid-state quantum memories are even more appealing. Crystalline-solid spin ensembles — created by inserting lattice defects known as nitrogen-vacancy centres into diamonds, or by doping rare-earth crystals — can remain coherent for hours at cryogenic temperatures.


Superconducting qubits, which are defined by physical quantities such as the charge of a capacitor or the flux of an inductor, interact within a quantum processor by releasing and absorbing microwave photons. For the successful integration of solid-state quantum memory, reversible storage and retrieval of quantum information must be made possible. This will require an efficient interface between the microwave photons and the atomic spins of a solid-state quantum memory that is attached to the processor. If successful, this hybrid technology would become the most promising architecture to be scaled up into a large, distributed quantum computer.

Scooped by Dr. Stefan Gruenwald!

The biggest Big Data project on Earth

The largest volume of data ever gathered and processed will pass through the UK, for scientists and SMBs to slice, dice, and turn into innovations and insights. When Big Data becomes super-massive data.


Eventually there will be two SKA telescopes. The first, consisting of 130,000 2m dipole low-frequency antennae, is being built in the Shire of Murchison, a remote region about 800km north of Perth, Australia – an area the size of the Netherlands, but with a population of less than 100 people. Construction kicks off in 2018.


By Phase 2, said Diamond, the SKA will consist of half-a-million low and mid-frequency antennae, with arrays spread right across southern Africa as well as Australia, stretching all the way from South Africa to Ghana and Kenya – a multibillion-euro project on an engineering scale similar to the Large Hadron Collider. Which brings us to that supermassive data challenge for what, ultimately, will be an ICT-driven science facility. Diamond says: "The antennae will generate enormous volumes of data: even by the mid-2020s, Phase 1 of the project will be looking at 5,000 petabytes – five exabytes – a day of raw data. This will go to huge banks of digital signal processors, which we’re in the process of designing, and then into high-performance computers, and into an archive for scientists worldwide to access."


Our archive growth rate will be somewhere between 300 and 500 petabytes a year – science-quality data coming out of the supercomputer.
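Converting those headline figures into sustained rates makes the scale concrete; the 400 PB/year value below is simply the mid-point of the quoted archive range:

```python
# Turn the quoted SKA figures into sustained rates.
PB = 10 ** 15  # one petabyte in bytes

raw_per_day = 5_000 * PB               # Phase 1 raw data: 5,000 PB = 5 EB per day
raw_per_second = raw_per_day / 86_400  # sustained ingest rate

archive_per_year = 400 * PB            # mid-point of the 300-500 PB/year range
raw_per_year = raw_per_day * 365

print(f"raw ingest ~ {raw_per_second / 10**12:.0f} TB/s")  # ~58 TB/s
print(f"raw-to-archive reduction ~ {raw_per_year / archive_per_year:.0f}:1")
```

In other words, the signal processors and supercomputers must throw away (or reduce) more than 99.9 percent of the raw stream before anything reaches the archive.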


Using the most common element in the universe, neutral hydrogen, as a tracer, the SKA will be able to follow the trail all the way back to the cosmic dawn, a few hundred million years after the Big Bang. But over billions of years (a beam of light travelling at 671 million miles an hour would take 46.5 billion years to reach the edge of the observable universe) the wavelength of those ancient hydrogen signatures becomes stretched via the Doppler effect, until it falls into the same range as the radiation emitted by mobile phones, aircraft, FM radio, and digital TV. This is why the SKA arrays are being built in remote, sparsely populated regions, says Diamond:

"The aim is to get away from people. It’s not because we’re antisocial – although some of my colleagues probably are a little! – but we need to get away from radio interference, phones, microwaves, and so on, which are like shining a torch in the business end of an optical telescope."



Rescooped by Dr. Stefan Gruenwald from Popular Science!

Online collaboration: Scientists and the social network

Giant academic social networks have taken off to a degree that no one expected even a few years ago. A Nature survey explores why.

Via Neelima Sinha
Scooped by Dr. Stefan Gruenwald!

Let's build a quantum computer!
Understanding the architecture of a quantum processor

Andreas Dewes explains why quantum computing is interesting, how it works and what you actually need to build a working quantum computer. He uses the superconducting two-qubit quantum processor which he built during his PhD thesis as an example to explain its basic building blocks. He shows how this processor can be used to achieve so-called quantum speed-up for a search algorithm that can be run on it. Finally, he gives a short overview of the current state of superconducting quantum computing and Google's recently announced effort to build a working quantum computer in cooperation with one of the leading research groups in this field.
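The search speed-up such a processor demonstrates is Grover's quadratic one: classical unstructured search needs on the order of N oracle queries, while Grover's algorithm needs about (pi/4)*sqrt(N). A quick query-count comparison:

```python
import math

# Query counts behind the "quantum speed-up" for unstructured search:
# classical search averages ~N/2 oracle queries; Grover's algorithm
# needs about (pi/4) * sqrt(N).

def classical_queries(n_items):
    return n_items / 2

def grover_queries(n_items):
    return math.floor(math.pi / 4 * math.sqrt(n_items))

for n in (4, 1_000_000):
    print(n, classical_queries(n), grover_queries(n))
# For N=4 (a two-qubit search space) Grover needs a single oracle query,
# which is the regime small superconducting processors can demonstrate.
```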


Google recently announced that it is partnering up with John Martinis - one of the leading researchers on superconducting quantum computing - to build a working quantum processor. This announcement has sparked a lot of renewed interest in a topic that was mainly of academic interest before. So, if Google thinks it's worth the hassle to build quantum computers then there surely must be something about them after all?

Scooped by Dr. Stefan Gruenwald!

Physicists demonstrate a quantum Fredkin gate for the first time


Researchers from Griffith University and the University of Queensland have overcome one of the key challenges to quantum computing by simplifying a complex quantum logic operation.


"The allure of quantum computers is the unparalleled processing power that they provide compared to current technology," said Dr Raj Patel from Griffith's Centre for Quantum Dynamics.

"Much like our everyday computer, the brains of a quantum computer consist of chains of logic gates, although quantum logic gates harness quantum phenomena."


The main stumbling block to actually creating a quantum computer has been in minimising the number of resources needed to efficiently implement processing circuits. "Similar to building a huge wall out of lots of small bricks, large quantum circuits require very many logic gates to function. However, if larger bricks are used the same wall could be built with far fewer bricks," said Dr Patel. "We demonstrate in our experiment how one can build larger quantum circuits in a more direct way without using small logic gates."


At present, even small and medium scale quantum computer circuits cannot be produced because of the requirement to integrate so many of these gates into the circuits. One example is the Fredkin (controlled-SWAP) gate. This is a gate where two qubits are swapped depending on the value of the third. Usually the Fredkin gate requires implementing a circuit of five logic operations. The research team used the quantum entanglement of photons—particles of light—to implement the controlled-SWAP operation directly.
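The controlled-SWAP operation itself is small enough to write down in full as an 8x8 matrix and check against its truth table:

```python
import numpy as np

# The Fredkin (controlled-SWAP) gate as an 8x8 matrix on basis states
# |c q1 q2>: when the control c is 1, qubits q1 and q2 are swapped.

F = np.eye(8)
F[[5, 6]] = F[[6, 5]]  # swap |101> <-> |110>; all other states pass through

def apply(gate, c, q1, q2):
    """Apply the gate to a computational basis state; return the result bits."""
    state = np.zeros(8)
    state[c * 4 + q1 * 2 + q2] = 1.0
    out = int(np.argmax(gate @ state))
    return (out >> 2) & 1, (out >> 1) & 1, out & 1

assert apply(F, 0, 1, 0) == (0, 1, 0)  # control 0: qubits untouched
assert apply(F, 1, 1, 0) == (1, 0, 1)  # control 1: q1 and q2 swapped
```

On basis states this is classical permutation logic; the experiment's achievement is performing it coherently, so it also acts correctly on superpositions and entangled inputs, as in the swap-test comparison of qubit strings mentioned below.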


"There are quantum computing algorithms, such as Shor's algorithm for finding prime numbers, that require the controlled-SWAP operation.


The quantum Fredkin gate can also be used to perform a direct comparison of two sets of qubits (quantum bits) to determine whether they are the same or not. This is not only useful in computing but is an essential feature of some secure quantum communication protocols where the goal is to verify that two strings, or digital signatures, are the same," said Professor Tim Ralph from the University of Queensland.


Professor Geoff Pryde, from Griffith's Centre for Quantum Dynamics, is the project's chief investigator. "What is exciting about our scheme is that it is not limited to just controlling whether qubits are swapped, but can be applied to a variety of different operations opening up ways to control larger circuits efficiently," said Professor Pryde.

Rescooped by Dr. Stefan Gruenwald from Virtual Neurorehabilitation!

Robotic exoskeleton maps sense-deficits in young stroke patients


Researchers at the University of Calgary are using robotics technology to try to come up with more effective treatments for children who have had strokes.


The robotic device measures a patient's position sense — what doctors call proprioception — the unconscious perception of where the body is while in motion or at rest.


"Someone whose position sense has been affected might have difficulty knowing where their hand or arm is in space, adding to their difficulty in using their affected, weaker limb," said one of the study's senior researchers, Dr. Kirton of the Cumming School of Medicine's departments of pediatrics and clinical neurosciences.


"We can try to make a hand stronger but, if your brain doesn't know where the hand is, this may not translate into meaningful function in daily life."


PhD candidate Andrea Kuczynski is doing ongoing research using the KINARM (Kinesiological Instrument for Normal and Altered Reaching Movements) robotic device.


During the test, the children sat in the KINARM machine with their arms supported by its exoskeleton, which measured movement as they played video games and did other tasks. All the children also had MRIs, which gave researchers a detailed picture of their brain structures.

Via Daniel Perez-Marcos
Scooped by Dr. Stefan Gruenwald!

Biology software Cello promises easier way to program living cells


Synthetic biologists have created software that automates the design of DNA circuits for living cells. The aim is to help people who are not skilled biologists to quickly design working biological systems, says synthetic biologist Christopher Voigt at the Massachusetts Institute of Technology in Cambridge, who led the work. “This is the first example where we’ve literally created a programming language for cells,” he says.


In the new software — called Cello — a user first specifies the kind of cell they are using and what they want it to do: for example, sense metabolic conditions in the gut and produce a drug in response. They type in commands to explain how these inputs and outputs should be logically connected, using a computing language called Verilog that electrical engineers have long relied on to design silicon circuits. Finally, Cello translates this information to design a DNA sequence that, when put into a cell, will execute the demands.
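The logic layer a Cello user specifies is ordinary Boolean logic; compiling it into DNA is Cello's job. A purely hypothetical example of such a rule, modelled in Python rather than Verilog (the signal names are invented, and the NOR decomposition is shown because repressor-based genetic gates naturally implement NOR):

```python
# Hypothetical gut-sensor logic of the kind a Cello user might specify:
# produce the drug only when BOTH metabolic signals are present.
# AND is built from NOR gates, the workhorse of repressor-based circuits:
# AND(a, b) = NOR(NOR(a, a), NOR(b, b)).

def nor(a, b):
    return not (a or b)

def drug_circuit(low_oxygen, inflammation):
    """Illustrative AND gate expressed entirely in NOR logic."""
    return nor(nor(low_oxygen, low_oxygen), nor(inflammation, inflammation))

# Truth table the compiled DNA circuit would have to reproduce in the cell.
for a in (False, True):
    for b in (False, True):
        print(a, b, drug_circuit(a, b))
```

Cello's contribution is everything below this level: choosing promoters and repressors whose response curves compose correctly, so the living circuit matches the specified truth table.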


Voigt says his team is writing user interfaces that would allow biologists to write a single program and be returned different DNA sequences for different organisms. Anyone can access Cello through a Web-based interface, or by downloading its open-source code from the online repository GitHub.


“This paper solves the problem of the automated design, construction and testing of logic circuits in living cells,” says bioengineer Herbert Sauro at the University of Washington in Seattle, who was not involved in the study. The work is published in Science.

Scooped by Dr. Stefan Gruenwald!

The momentous advance in artificial intelligence demands a new set of ethics


Let us all raise a glass to AlphaGo and mark another big moment in the advance of artificial intelligence (AI), and then perhaps start to worry. AlphaGo, Google DeepMind’s Go-playing AI, just bested the best Go-playing human currently alive, the renowned Lee Sedol. This was not supposed to happen. At least, not for a while. An artificial intelligence capable of beating the best humans at the game was predicted to be 10 years away.


But as we drink to its early arrival, we should also begin trying to understand what the surprise means for the future – with regard, chiefly, to the ethics and governance implications that stretch far beyond a game.


As AlphaGo and AIs like it become more sophisticated – commonly outperforming us at tasks once thought to be uniquely human – will we feel pressured to relinquish control to the machines?


The number of possible moves in a game of Go is so massive that, in order to win against a player of Lee’s calibre, AlphaGo was designed to adopt an intuitive, human-like style of gameplay. Relying exclusively on more traditional brute-force programming methods was not an option. Designers at DeepMind made AlphaGo more human-like than traditional AI by using a relatively recent development – deep learning.


Deep learning uses large data sets, “machine learning” algorithms and deep neural networks – artificial networks of “nodes” that are meant to mimic neurons – to teach the AI how to perform a particular set of tasks. Rather than programming complex Go rules and strategies into AlphaGo, DeepMind designers taught AlphaGo to play the game by feeding it data based on typical Go moves. Then, AlphaGo played against itself, tirelessly learning from its own mistakes and improving its gameplay over time. The results speak for themselves.
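The same learn-from-examples loop can be shown in miniature: a tiny network of nodes learning XOR from data alone, with hand-written backpropagation. This is an illustration of the principle only, orders of magnitude smaller than AlphaGo's networks:

```python
import numpy as np

# Toy illustration of the deep-learning idea: instead of hand-coded rules,
# a small network of "nodes" learns the XOR function purely from examples.

rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # hidden layer weights
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # output layer weights
sigmoid = lambda z: 1 / (1 + np.exp(-z))

losses = []
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)           # hidden activations
    p = sigmoid(h @ W2 + b2)           # predicted probability of "1"
    losses.append(float(np.mean((p - y) ** 2)))
    # Backpropagate the squared error by hand.
    dz2 = 2 * (p - y) / len(X) * p * (1 - p)
    dW2 = h.T @ dz2; db2 = dz2.sum(0)
    dz1 = dz2 @ W2.T * (1 - h ** 2)
    dW1 = X.T @ dz1; db1 = dz1.sum(0)
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 0.5 * grad            # gradient-descent step

print(losses[0], losses[-1])           # the error falls as the network learns
```

The analogy to AlphaGo is the training signal, not the scale: no one programs the rule "output 1 when inputs differ"; the weights discover it from mistakes, just as AlphaGo's networks refined their play through self-play.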


Possessing a more intuitive approach to problem-solving allows artificial intelligence to succeed in highly complex environments. For example, actions with high levels of unpredictability – talking, driving, serving as a soldier – which were previously unmanageable for AI are now considered technically solvable, thanks in large part to deep learning.

Leonardo Wild's curator insight, March 27, 6:20 PM
The subject matter of one of my so-far unpublished novels, the third book in the Unemotion series (Yo Artificial, in Spanish). It's starting to happen and we think Climate Change is big.
Scooped by Dr. Stefan Gruenwald!

Face-tracking software lets you make anyone say anything in real time

You know how they say, "Show me pictures or video, or it didn't happen"? Well, the days when you could trust what you see on video in real time are officially coming to an end thanks to a new kind of face tracking.


A team from Stanford, the Max Planck Institute for Informatics and the University of Erlangen-Nuremberg has produced a video demonstrating how its software, called Face2Face, in combination with a common webcam, can make any person on video appear to say anything a source actor wants them to say.


In addition to perfectly capturing the real-time talking motions of the actor and placing them seamlessly on the video subject, the software also accounts for real-time facial expressions, including distinct movements such as eyebrow raises.


To show off the system, the team used YouTube videos of former U.S. President George W. Bush, Russian President Vladimir Putin and Republican presidential candidate Donald Trump. In each case, the facial masking is flawless, effectively turning the video subject into the actor's puppet.


It might be fun to mix this up with something like "Say it with Trump," but for now the software is still in the research phase. "Unfortunately, the software is currently not publicly available — it's just a research project," team member Matthias Niessner told Mashable. "However, we are thinking about commercializing it given that we are getting so many requests." We knew this kind of stuff was possible in the special effects editing room, but the ability to do it in real time — without those nagging "uncanny valley" artifacts — could change how we interpret video documentation forever.

Matt Archer's curator insight, March 24, 4:24 PM

What possible reason is there for this technology outside of supporting terrorism..?  Crazy.

Scooped by Dr. Stefan Gruenwald!

Is Fog Computing The Next Big Thing In Internet of Things?


One of the reasons why IoT has gained momentum in the recent past is the rise of cloud services. Though the concept of M2M existed for over a decade, organizations never tapped into the rich insights derived from the datasets generated by sensors and devices. Existing infrastructure was just not ready to deal with the massive scale demanded by the connected devices architecture. That’s where cloud becomes an invaluable resource for enterprises.


With abundant storage and ample computing power, cloud became an affordable extension to the enterprise data center. The adoption of cloud resulted in increased usage of Big Data platforms and analytics. Organizations are channelizing every bit of data generated from a variety of sources and devices to the cloud where it is stored, processed, and analyzed for deriving valuable insights. The combination of cloud and Big Data is the key enabler of Internet of Things. IoT is all set to become the killer use case for distributed computing and analytics.


Cloud service providers such as Amazon, Google, IBM, Microsoft, Salesforce, and Oracle are offering managed IoT platforms that deliver the entire IoT stack as a service. Customers can on-board devices, ingest data, define data processing pipelines that analyze streams in real-time, and derive insights from the sensor data. Cloud-based IoT platforms are examples of verticalized PaaS offerings, which are designed for a specific use case.


While cloud is a perfect match for the Internet of Things, not every IoT scenario can take advantage of it. Industrial IoT solutions demand low-latency ingestion and immediate processing of data. Organizations cannot afford the delay caused by the roundtrip between the devices layer and cloud-based IoT platforms. The solution demands instant processing of data streams with quick turnaround. For example, it may be too late before the IoT cloud shuts down an LPG refilling machine after detecting an unusual combination of pressure and temperature thresholds. Instead, the anomaly should be detected locally within milliseconds, followed by an immediate action triggered by a rule. The other scenario that demands local processing is healthcare. Given the sensitivity of data, healthcare companies don't want to stream critical data points generated by life-saving systems. That data needs to be processed locally not only for faster turnaround but also for anonymizing personally identifiable patient data.


The demand for distributing the IoT workloads between the local data center and cloud has resulted in an architectural pattern called Fog computing. Large enterprises dealing with industrial automation will have to deploy infrastructure within the data center that’s specifically designed for IoT. This infrastructure is a cluster of compute, storage, and networking resources delivering sufficient horsepower to deal with the IoT data locally. The cluster that lives on the edge is called the Fog layer. Fog computing mimics cloud capabilities within the edge location, while still taking advantage of the cloud for heavy lifting. Fog computing is to IoT what hybrid cloud is to enterprise IT. Both the architectures deliver best of both worlds.
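A minimal sketch of that split, using made-up thresholds for the LPG example: the safety rule runs locally in well under a millisecond, with no cloud round trip on the critical path, and only summaries would be forwarded to the cloud for heavier analysis:

```python
import time

# Illustrative fog-layer rule for the LPG scenario. The thresholds and
# sensor values are invented; the point is that the decision is local.

PRESSURE_MAX = 18.0  # bar (hypothetical safety limit)
TEMP_MAX = 55.0      # Celsius (hypothetical safety limit)

def edge_rule(pressure, temperature):
    """Runs on the local (fog) cluster, next to the refilling machine."""
    if pressure > PRESSURE_MAX and temperature > TEMP_MAX:
        return "SHUT_DOWN"  # immediate local action, no cloud round trip
    return "OK"

start = time.perf_counter()
decision = edge_rule(pressure=19.2, temperature=57.1)
elapsed_ms = (time.perf_counter() - start) * 1000

print(decision, f"decided in {elapsed_ms:.3f} ms")
```

A cloud round trip for the same decision would add tens to hundreds of milliseconds of network latency; the fog layer removes that from the safety-critical path entirely.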


Cisco is one of the early movers in the Fog computing market. The company is credited with coining the term even before IoT became a buzzword. Cisco positioned Fog as the layer to reduce the latency in hybrid cloud scenarios. With enterprises embracing converged infrastructure in data centers and cloud for distributed computing, Cisco had vested interest in pushing Fog to stay relevant in the data center. After almost five years of evangelizing Fog computing with little success, Cisco finally found a legitimate use case in the form of IoT.

Scooped by Dr. Stefan Gruenwald!

Teaching assistant wasn't human and nobody guessed it

Jill Watson is a virtual teaching assistant. She was one of nine teaching assistants in an artificial intelligence online course. And none of the students guessed she wasn't a human.


College of Computing Professor Ashok Goel teaches Knowledge Based Artificial Intelligence (KBAI) every semester. It's a core requirement of Georgia Tech's online Master of Science in Computer Science program. And every time he offers it, Goel estimates, his 300 or so students post roughly 10,000 messages in the online forums -- far too many inquiries for him and his eight teaching assistants (TA) to handle. That's why Goel added a ninth TA this semester. Her name is Jill Watson, and she's unlike any other TA in the world. In fact, she's not even a "she." Jill is a computer -- a virtual TA -- implemented on IBM's Watson platform.


"The world is full of online classes, and they're plagued with low retention rates," Goel said. "One of the main reasons many students drop out is because they don't receive enough teaching support. We created Jill as a way to provide faster answers and feedback."


Goel and his team of Georgia Tech graduate students started to build her last year. They contacted Piazza, the course's online discussion forum, to track down all the questions that had ever been asked in KBAI since the class was launched in fall 2014 (about 40,000 postings in all). Then they started to feed Jill the questions and answers.


"One of the secrets of online classes is that the number of questions increases if you have more students, but the number of different questions doesn't really go up," Goel said. "Students tend to ask the same questions over and over again."


That's an ideal situation for the Watson platform, which specializes in answering questions with distinct, clear solutions. The team wrote code that allows Jill to field routine questions that are asked every semester. For example, students consistently ask where they can find particular assignments and readings.


Jill wasn't very good for the first few weeks after she started in January, often giving odd and irrelevant answers. Her responses were posted in a forum that wasn't visible to students.

"Initially her answers weren't good enough because she would get stuck on keywords," said Lalith Polepeddi, one of the graduate students who co-developed the virtual TA. "For example, a student asked about organizing a meet-up to go over video lessons with others, and Jill gave an answer referencing a textbook that could supplement the video lessons -- same keywords -- but different context. So we learned from mistakes like this one, and gradually made Jill smarter."


After some tinkering by the research team, Jill found her groove and soon was answering questions with 97 percent certainty. When she did, the human TAs would upload her responses to the students. By the end of March, Jill didn't need any assistance: She wrote the class directly if she was 97 percent positive her answer was correct.
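The escalation policy described can be sketched as a simple confidence gate. This is an illustrative reconstruction, not Georgia Tech's code, and the candidate answers and scores are hypothetical:

```python
# Illustrative sketch of the reported policy: answer automatically only
# when confidence clears 97 percent, otherwise leave it to a human TA.

CONFIDENCE_THRESHOLD = 0.97

def route(question, candidates):
    """candidates: list of (answer, confidence) pairs for this question."""
    answer, confidence = max(candidates, key=lambda c: c[1])
    if confidence >= CONFIDENCE_THRESHOLD:
        return ("post", answer)      # Jill replies to the class directly
    return ("escalate", None)        # a human TA takes over

assert route("Where is assignment 2?",
             [("See the Assignments tab.", 0.99)]) == \
       ("post", "See the Assignments tab.")
assert route("Can we meet up to discuss lesson 5?",
             [("See the textbook, ch. 3.", 0.55)]) == ("escalate", None)
```

The high threshold is what made the deception hold: a system that only speaks when it is nearly certain produces few of the odd answers that would have given it away.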


The students, who were studying artificial intelligence, were unknowingly interacting with it. Goel didn't inform them about Jill's true identity until April 26. The student response was uniformly positive. One admitted her mind was blown. Another asked if Jill could "come out and play." Since then some students have organized a KBAI alumni forum to learn about new developments with Jill after the class ends, and another group of students has launched an open source project to replicate her.

Scooped by Dr. Stefan Gruenwald!

Autonomous quantum error correction method greatly increases qubit coherence times


It might be said that the most difficult part of building a quantum computer is not figuring out how to make it compute, but rather finding a way to deal with all of the errors that it inevitably makes. In order to flip the qubits back to their correct states, physicists have been developing an assortment of quantum error correction techniques. Most of them work by repeatedly making measurements on the system to detect errors and then correct the errors before they can proliferate. These approaches typically have a very large overhead, where a large portion of the computing power goes to correcting errors.
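The overhead of measurement-based correction can be seen in its simplest classical analogue, the three-bit repetition code (a toy model, not any particular quantum code): one logical bit is stored in three physical bits, and a majority vote suppresses the error rate from p to roughly 3p².

```python
import random

random.seed(1)


def encode(bit):
    """Triplicate one logical bit into three physical bits: 3x overhead."""
    return [bit, bit, bit]


def noisy(bits, p):
    """Flip each physical bit independently with probability p."""
    return [b ^ (random.random() < p) for b in bits]


def correct(bits):
    """Majority vote recovers the logical bit unless >= 2 bits flipped."""
    return int(sum(bits) >= 2)


def logical_error_rate(p, trials=100_000):
    errors = 0
    for _ in range(trials):
        if correct(noisy(encode(0), p)) != 0:
            errors += 1
    return errors / trials

p = 0.05
print(f"physical error rate: {p}")
print(f"logical error rate:  {logical_error_rate(p):.4f}")  # roughly 3*p**2
```

Even this minimal scheme needs three bits plus repeated checks per logical bit, which is the overhead problem the passive approach below tries to avoid.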


In a new paper published in Physical Review Letters, Eliot Kapit, an assistant professor of physics at Tulane University in New Orleans, has proposed a different approach to quantum error correction. His method takes advantage of a recently discovered unexpected benefit of quantum noise: when carefully tuned, quantum noise can actually protect qubits against unwanted noise. Rather than actively measuring the system, the new method passively and autonomously suppresses and corrects errors, using relatively simple devices and relatively little computing power.


"The most interesting thing about my work is that it shows just how simple and small a fully error corrected quantum circuit can be, which is why I call the device the 'Very Small Logical Qubit,'" Kapit said. "Also, the error correction is fully passive—unwanted error states are quickly repaired by engineered dissipation, without the need for an external computer to watch the circuit and make decisions. While this paper is a theoretical blueprint, it can be built with current technology and doesn't require any new insights to make it a reality."


The new passive error correction circuit consists of just two primary qubits, in contrast to the 10 or more qubits required in most active approaches. The two qubits are coupled to each other, and each one is also coupled to a "lossy" object, such as a resonator, that experiences photon loss.


"In the absence of any errors, there are a pair of oscillating photon configurations that are the 'good' logical states of the device, and they oscillate at a fixed frequency based on the circuit parameters," Kapit explained. "However, like all qubits, the qubits in the circuit are not perfect and will slowly leak photons into the environment. When a photon randomly escapes from the circuit, the oscillation is broken, at which point a second, passive error correction circuit kicks in and quickly inserts two photons, one which restores the lost photon and reconstructs the oscillating logical state, and the other is dumped to a lossy circuit element and quickly leaks back out of the system. The combination of careful tuning of the resonant frequencies of the circuit and adding photons two at a time to correct losses ensures that the passive error correction circuit can operate continuously but won't do anything to the two good qubits unless their oscillation has been broken by a photon loss."
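A crude Monte Carlo shows why fast passive repair helps. This is a toy rate model with invented rates, not Kapit's actual circuit dynamics: photons leak at a slow rate gamma, each loss is repaired after an exponential time with fast rate Gamma, and a logical failure requires a second loss to strike before the repair finishes.

```python
import math
import random

random.seed(2)


def simulated_failure_prob(gamma, Gamma, t_total, trials=2000):
    """Fraction of runs in which a second loss beats the passive repair."""
    failures = 0
    for _ in range(trials):
        t = 0.0
        while True:
            t += random.expovariate(gamma)      # wait for a photon loss
            if t > t_total:
                break                           # survived the whole run
            repair = random.expovariate(Gamma)  # passive correction time
            next_loss = random.expovariate(gamma)
            if next_loss < repair:              # second loss beat the repair
                failures += 1
                break
    return failures / trials

gamma, Gamma, T = 0.01, 1.0, 100.0
bare = 1 - math.exp(-gamma * T)  # an unprotected qubit fails at the first loss
protected = simulated_failure_prob(gamma, Gamma, T)
print(f"unprotected failure probability: {bare:.2f}")
print(f"protected failure probability:   {protected:.3f}")
```

With Gamma one hundred times gamma, the failure probability drops by roughly the ratio gamma/Gamma, which is the intuition behind engineered dissipation.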

Scooped by Dr. Stefan Gruenwald!

Researchers create a first frequency comb of time-bin entangled qubits

An international team of researchers has built a chip that generates multiple frequencies from a robust quantum system that produces time-bin entangled photons. In contrast to other quantum state realizations, entangled photons don't need bulky equipment to keep them in their quantum state, and they can transmit quantum information across long distances. The new device creates entangled photons that span the traditional telecommunications spectrum, making it appealing for multi-channel quantum communication and more powerful quantum computers.


"The advantages of our chip are that it's compact and cheap. It's also unique that it operates on multiple channels," said Michael Kues, Institut National de la Recherche Scientifique (INRS), University of Quebec, Canada. The researchers will present their results at the Conference on Lasers and Electro-Optics (CLEO), which is held June 5-10 in San Jose, California.


The basis of quantum communications and computing lies in qubits, the quantum equivalent of classical bits. Instead of representing a one or a zero, qubits can exhibit an unusual property called superposition to represent both numbers simultaneously.


In order to take full advantage of superposition to perform difficult calculations or send information securely, another weird quantum mechanical property called entanglement enters the picture. Entanglement was famously called "spooky action at a distance" by Albert Einstein. It links particles so that measurements on one instantaneously affect the other.
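Superposition and entanglement can be made concrete with a few lines of state-vector arithmetic. This is a minimal numerical sketch, unrelated to the experiment's hardware: a single qubit in an equal superposition, then two qubits in the Bell state (|00> + |11>)/sqrt(2), whose measurement outcomes are perfectly correlated.

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Superposition: one qubit that yields 0 or 1 with equal probability.
plus = (ket0 + ket1) / np.sqrt(2)
print("P(0), P(1) for |+>:", np.round(plus**2, 2))

# Entanglement: the Bell state over the two-qubit basis |00>,|01>,|10>,|11>.
bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
probs = bell**2
print("P(00), P(01), P(10), P(11):", np.round(probs, 2))
# Only 00 and 11 ever occur: measuring one qubit fixes the other.
```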


Kues and his colleagues used photons to realize their qubits and entangled them by sending two short laser pulses through an interferometer, a device that directs light beams along different paths and then recombines them, to generate double pulses.


To generate multiple frequencies, Kues and his colleagues sent the pulses through a tiny ring, called a microring resonator. The resonator generates photon pairs on a series of discrete frequencies using spontaneous four-wave mixing, thus creating a frequency comb.
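The comb arithmetic itself is simple. In spontaneous four-wave mixing, two pump photons convert into a signal/idler pair on resonator lines placed symmetrically about the pump, so 2*f_pump = f_signal + f_idler holds for every channel pair. The pump frequency and line spacing below are illustrative values, not the paper's.

```python
f_pump = 193.1e12  # Hz, a typical telecom-band pump (assumed value)
fsr = 200e9        # Hz, assumed free spectral range of the microring

channels = []
for n in range(1, 4):
    f_signal = f_pump + n * fsr
    f_idler = f_pump - n * fsr
    # Energy conservation pairs each signal line with its mirror idler line.
    assert abs((f_signal + f_idler) - 2 * f_pump) < 1e-3
    channels.append((n, f_signal, f_idler))

for n, s, i in channels:
    print(f"pair {n}: signal {s/1e12:.3f} THz, idler {i/1e12:.3f} THz")
```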


The interferometer the team used has one long arm and one short arm, and when a single photon comes out of the system, it is in a superposition of time states, as if it traveled through both the long arm and the short arm simultaneously. Time-bin entanglement is a particularly robust form of photon entanglement. Photons can also have their polarization entangled, but waveguides and other types of optical equipment may alter polarization states.


Other research groups have generated time-bin entangled photons, but Kues and his colleagues are the first to create photons with multiple frequencies using the same chip. This feature can enable multiplexed and multi-channel quantum communications and increased quantum computation information capacity. Kues notes that the chip could improve quantum key distribution, a process that lets two parties share a secret key to encrypt messages with theoretically unbreakable security. It could also serve as a component of a future quantum computer.

Scooped by Dr. Stefan Gruenwald!

Europe plans giant billion-euro quantum technologies project


The European Commission has quietly announced plans to launch a €1-billion (US$1.13 billion) project to boost a raft of quantum technologies — from secure communication networks to ultra-precise gravity sensors and clocks. 


The initiative, to launch in 2018, will be similar in size, timescale and ambition to two existing European flagships, the decade-long Graphene Flagship and the Human Brain Project, although the exact format has yet to be decided, Nathalie Vandystadt, a commission spokesperson, told Nature. Funding will come from a mixture of sources, including the commission, as well as other European and national funders, she added.


The commission is likely to have a “substantial role” in funding the flagship, says Tommaso Calarco, who leads the Integrated Quantum Science and Technology centre at the Universities of Ulm and Stuttgart in Germany. He co-authored a blueprint behind the initiative, which was published in March, called the Quantum Manifesto. Countries around the world are investing in these technologies, says Calarco. Without such an initiative, Europe risks becoming a second-tier player, he says. “The time is really now or never.”

Scooped by Dr. Stefan Gruenwald!

Approaching electronic DNA circuits: Making precise graphene pattern with DNA


DNA’s unique structure is ideal for carrying genetic information, but scientists have recently found ways to exploit this versatile molecule for other purposes: By controlling DNA sequences, they can manipulate the molecule to form many different nanoscale shapes.


Chemical and molecular engineers at MIT and Harvard University have now expanded this approach by using folded DNA to control the nanostructure of inorganic materials. After building DNA nanostructures of various shapes, they used the molecules as templates to create nanoscale patterns on sheets of graphene. This could be an important step toward large-scale production of electronic chips made of graphene, a one-atom-thick sheet of carbon with unique electronic properties.

“This gives us a chemical tool to program shapes and patterns at the nanometer scale, forming electronic circuits, for example,” says Michael Strano, a professor of chemical engineering at MIT and a senior author of a paper describing the technique in the April 9 issue of Nature Communications.


Peng Yin, an assistant professor of systems biology at Harvard Medical School and a member of Harvard’s Wyss Institute for Biologically Inspired Engineering, is also a senior author of the paper, and MIT postdoc Zhong Jin is the lead author. Other authors are Harvard postdocs Wei Sun and Yonggang Ke, MIT graduate students Chih-Jen Shih and Geraldine Paulus, and MIT postdocs Qing Hua Wang and Bin Mu.


Most of these DNA nanostructures are made using a novel approach developed in Yin’s lab. Complex DNA nanostructures with precisely prescribed shapes are constructed using short synthetic DNA strands called single-stranded tiles. Each of these tiles acts like an interlocking toy brick and binds with four designated neighbors. Using these single-stranded tiles, Yin’s lab has created more than 100 distinct nanoscale shapes, including the full alphabet of capital English letters and many emoticons. These structures are designed using computer software and can be assembled in a simple reaction. Alternatively, such structures can be constructed using an approach called DNA origami, in which many short strands of DNA fold a long strand into a desired shape.
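The interlocking-brick picture can be made concrete with a toy model (all domain sequences below are invented): each tile exposes four short binding domains, and two tiles interlock only where the facing domains are reverse complements of each other.

```python
COMP = str.maketrans("ACGT", "TGCA")


def complementary(a, b):
    """True if domain b is the Watson-Crick reverse complement of domain a."""
    return b == a.translate(COMP)[::-1]


def valid_layout(tiles):
    """Check that every adjacent pair of tiles binds on its shared edge."""
    for (r, c), (n, e, s, w) in tiles.items():
        right = tiles.get((r, c + 1))
        if right and not complementary(e, right[3]):   # east vs. west
            return False
        below = tiles.get((r + 1, c))
        if below and not complementary(s, below[0]):   # south vs. north
            return False
    return True

# A 2x2 target shape: each tile's edges are (north, east, south, west).
tiles = {
    (0, 0): ("AAAA", "ACGT", "GGGG", "TTTT"),
    (0, 1): ("AAAA", "CCCC", "GGGG", "ACGT"),
    (1, 0): ("CCCC", "ACGT", "AAAA", "TTTT"),
    (1, 1): ("CCCC", "CCCC", "AAAA", "ACGT"),
}
print("2x2 layout self-assembles:", valid_layout(tiles))
```

Designing a shape then amounts to choosing domain sequences so that only the intended neighbours match, which is what Yin's design software automates.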


However, DNA tends to degrade when exposed to sunlight or oxygen, and can react with other molecules, so it is not ideal as a long-term building material. “We’d like to exploit the properties of more stable nanomaterials for structural applications or electronics,” Strano says.


Instead, he and his colleagues transferred the precise structural information encoded in DNA to sturdier graphene. The chemical process involved is fairly straightforward, Strano says: First, the DNA is anchored onto a graphene surface using a molecule called aminopyrene, which is similar in structure to graphene. The DNA is then coated with small clusters of silver along the surface, which allows a subsequent layer of gold to be deposited on top of the silver.


Once the molecule is coated in gold, the stable metallized DNA can be used as a mask for a process called plasma lithography. Oxygen plasma, a very reactive “gas flow” of ionized molecules, is used to wear away any unprotected graphene, leaving behind a graphene structure identical to the original DNA shape. The metallized DNA is then washed away with sodium cyanide.

Scooped by Dr. Stefan Gruenwald!

Quantum computing closer as RMIT drives towards first quantum data bus


RMIT University researchers have trialled a quantum processor capable of routing quantum information between different locations in a critical breakthrough for quantum computing. The work opens a pathway towards the "quantum data bus", a vital component of future quantum technologies.


The research team from the Quantum Photonics Laboratory at RMIT in Melbourne, Australia, the Institute for Photonics and Nanotechnologies of the CNR in Italy and the South University of Science and Technology of China, have demonstrated for the first time the perfect state transfer of an entangled quantum bit (qubit) on an integrated photonic device.


Quantum Photonics Laboratory Director Dr Alberto Peruzzo said after more than a decade of global research in the specialised area, the RMIT results were highly anticipated. "The perfect state transfer has emerged as a promising technique for data routing in large-scale quantum computers," Peruzzo said. "The last 10 years has seen a wealth of theoretical proposals but until now it has never been experimentally realized. "Our device uses highly optimised quantum tunnelling to relocate qubits between distant sites. It's a breakthrough that has the potential to open up quantum computing in the near future."
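One well-known theoretical recipe for perfect state transfer, due to Christandl and colleagues, engineers the couplings of a hopping chain as J_n proportional to sqrt(n(N-n)). Whether or not it matches the RMIT device in detail, a few lines of linear algebra show how an excitation can relocate from one end of a chain to the other with unit fidelity.

```python
import numpy as np

N = 8
H = np.zeros((N, N))
for n in range(1, N):
    # Engineered coupling between neighbouring sites n and n+1.
    J = 0.5 * np.sqrt(n * (N - n))
    H[n - 1, n] = H[n, n - 1] = J

# Exact time evolution U = exp(-i H t) at the transfer time t = pi,
# built from the eigendecomposition of the real symmetric Hamiltonian.
evals, evecs = np.linalg.eigh(H)
U = evecs @ np.diag(np.exp(-1j * evals * np.pi)) @ evecs.conj().T

psi0 = np.zeros(N)
psi0[0] = 1.0                          # excitation starts at site 1
fidelity = abs((U @ psi0)[N - 1])**2   # probability it reaches site N
print(f"transfer fidelity at t = pi: {fidelity:.6f}")
```

With uniform couplings instead of this tailored profile, the excitation disperses along the chain and never refocuses perfectly, which is why the coupling engineering matters.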


The difference between standard computing and quantum computing is like the difference between solving a problem over an eternity and solving it in a short time. "Quantum computers promise to solve vital tasks that are currently unmanageable on today's standard computers and the need to delve deeper in this area has motivated a worldwide scientific and engineering effort to develop quantum technologies," Peruzzo said.


"It could make the critical difference for discovering new drugs, developing a perfectly secure quantum Internet and even improving facial recognition." Peruzzo said a key requirement for any information technology, along with processors and memories, is the ability to relocate data between locations.


Full scale quantum computers will contain millions, if not billions, of quantum bits (qubits) all interconnected, to achieve computational power undreamed of today. While today's microprocessors use data buses that route single bits of information, transferring quantum information is a far greater challenge due to the intrinsic fragility of quantum states.

"Great progress has been made in the past decade, increasing the power and complexity of quantum processors," Peruzzo said.


Robert Chapman, an RMIT PhD student working on the experiment, said the protocol they developed could be implemented in large scale quantum computing architectures, where interconnection between qubits will be essential.

Scooped by Dr. Stefan Gruenwald!

Meraculous: Full Genome Alignment With Supercomputers in Mere Minutes

A team of scientists from Berkeley Lab, JGI and UC Berkeley, simplified and sped up genome assembly, reducing a months-long process to mere minutes. This was primarily achieved by “parallelizing” the code to harness the processing power of supercomputers, such as NERSC’s Edison system.


Genomes are like the biological owner’s manual for all living things. Cells read DNA instantaneously, getting instructions necessary for an organism to grow, function and reproduce. But for humans, deciphering this “book of life” is significantly more difficult.


Nowadays, researchers typically rely on next-generation sequencers to translate the unique sequences of DNA bases (there are only four) into letters: A, G, C and T. While DNA strands can be billions of bases long, these machines produce very short reads, about 50 to 300 characters at a time. To extract meaning from these letters, scientists need to reconstruct portions of the genome—a process akin to rebuilding the sentences and paragraphs of a book from snippets of text.

But this process can quickly become complicated and time-consuming, especially because some genomes are enormous. For example, while the human genome contains about 3 billion bases, the wheat genome contains nearly 17 billion bases and the pine genome contains about 23 billion bases. Sometimes the sequencers will also introduce errors into the dataset, which need to be filtered out. And most of the time, the genomes need to be assembled de novo, or from scratch. Think of it like putting together a ten billion-piece jigsaw puzzle without a complete picture to reference.
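To see what rebuilding text from snippets looks like in miniature, here is a greedy toy assembler, far simpler than Meraculous's actual k-mer/de Bruijn approach: it repeatedly merges the pair of reads with the longest exact suffix/prefix overlap.

```python
def overlap(a, b, min_len=3):
    """Length of the longest suffix of a that is a prefix of b (>= min_len)."""
    for k in range(min(len(a), len(b)), min_len - 1, -1):
        if a[-k:] == b[:k]:
            return k
    return 0


def greedy_assemble(reads):
    """Merge reads pairwise on their best overlap until none remain."""
    reads = list(reads)
    while len(reads) > 1:
        best = max(((overlap(a, b), a, b)
                    for a in reads for b in reads if a != b),
                   key=lambda t: t[0])
        k, a, b = best
        if k == 0:
            break                       # no merges left
        reads.remove(a)
        reads.remove(b)
        reads.append(a + b[k:])         # join the two reads on the overlap
    return reads[0]

reads = ["AGCTTAGC", "TAGCTAGG", "CTAGGCAT"]  # short "sequencer" reads
print(greedy_assemble(reads))
```

Real assemblers must also cope with sequencing errors, repeats and billions of reads, which is exactly why the problem is so computationally demanding.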


By applying some novel algorithms, computational techniques and the innovative programming language Unified Parallel C (UPC) to the cutting-edge de novo genome assembly tool Meraculous, a team of scientists from the Lawrence Berkeley National Laboratory (Berkeley Lab)’s Computational Research Division (CRD), Joint Genome Institute (JGI) and UC Berkeley, simplified and sped up genome assembly, reducing a months-long process to mere minutes. This was primarily achieved by “parallelizing” the code to harness the processing power of supercomputers, such as the National Energy Research Scientific Computing Center’s (NERSC’s) Edison system. Put simply, parallelizing code means splitting up tasks once executed one-by-one and modifying or rewriting the code to run on the many nodes (processor clusters) of a supercomputer all at once.
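The parallelization idea can be sketched in miniature: partition the reads, count k-mers independently per partition (the "map"), then merge the partial counts (the "reduce"). This is the same shape of computation that Meraculous distributes over thousands of cores via UPC's global address space; the sketch below runs the "parallel" steps sequentially for clarity.

```python
from collections import Counter


def kmers(read, k):
    """All overlapping substrings of length k in one read."""
    return [read[i:i + k] for i in range(len(read) - k + 1)]


def count_chunk(chunk, k):
    """The work one processor would do on its local slice of the reads."""
    counts = Counter()
    for read in chunk:
        counts.update(kmers(read, k))
    return counts


def parallel_kmer_counts(reads, k, n_chunks=4):
    chunks = [reads[i::n_chunks] for i in range(n_chunks)]
    partials = [count_chunk(c, k) for c in chunks]  # would run concurrently
    total = Counter()
    for p in partials:                              # the reduce step
        total.update(p)
    return total

reads = ["AGCTAGCT", "GCTAGCTA", "CTAGCTAG"]
print(parallel_kmer_counts(reads, k=3).most_common(3))
```

Because the chunk counts are independent, the map step scales across nodes; the hard part in practice is distributing the merged hash table itself, which is where UPC's one-sided communication comes in.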


“Using the parallelized version of Meraculous, we can now assemble the entire human genome in about eight minutes using 15,360 computer processor cores. With this tool, we estimate that the output from the world’s biomedical sequencing capacity could be assembled using just a portion of NERSC’s Edison supercomputer,” says Evangelos Georganas, a UC Berkeley graduate student who led the effort to parallelize Meraculous. He is also the lead author of a paper published and presented at the SC Conference in November 2014.  


“This work has dramatically improved the speed of genome assembly,” says Leonid Oliker, a computer scientist in CRD. “The new parallel algorithms enable assembly calculations to be performed rapidly, with near-linear scaling over thousands of cores. Now genomics researchers can assemble large genomes like wheat and pine in minutes instead of months using several hundred nodes on NERSC’s Edison.”

Rescooped by Dr. Stefan Gruenwald from Fragments of Science!

Laser technique promises super-fast and super-secure quantum cryptography

A new method of implementing an 'unbreakable' quantum cryptographic system is able to transmit information at rates more than ten times faster than previous attempts.


Researchers have developed a new method to overcome one of the main issues in implementing a quantum cryptography system, raising the prospect of a useable 'unbreakable' method for sending sensitive information hidden inside particles of light. By 'seeding' one laser beam inside another, the researchers, from the University of Cambridge and Toshiba Research Europe, have demonstrated that it is possible to distribute encryption keys at rates between two and six orders of magnitude higher than earlier attempts at a real-world quantum cryptography system. The results are reported in the journal Nature Photonics.


Encryption is a vital part of modern life, enabling sensitive information to be shared securely. In conventional cryptography, the sender and receiver of a particular piece of information decide the encryption code, or key, up front, so that only those with the key can decrypt the information. But as computers get faster and more powerful, encryption codes get easier to break.


Quantum cryptography promises 'unbreakable' security by hiding information in particles of light, or photons, emitted from lasers. In this form of cryptography, quantum mechanics is used to randomly generate a key. The sender, who is normally designated as Alice, sends the key via photons polarised in different directions. The receiver, normally designated as Bob, uses photon detectors to measure the direction in which each photon is polarised, and the detectors translate the photons into bits, which, assuming Bob has used the correct photon detectors in the correct order, will give him the key.


The strength of quantum cryptography is that if an attacker tries to intercept Alice and Bob's message, the key itself changes, due to the properties of quantum mechanics. Since it was first proposed in the 1980s, quantum cryptography has promised the possibility of unbreakable security. "In theory, the attacker could have all of the power possible under the laws of physics, but they still wouldn't be able to crack the code," said the paper's first author Lucian Comandar, a PhD student at Cambridge's Department of Engineering and Toshiba's Cambridge Research Laboratory.
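The Alice-and-Bob exchange described above (the BB84 protocol) and the way an eavesdropper betrays herself can both be simulated in a few lines. This is a toy model with ideal photons and detectors, not the seeded-laser system in the paper: an intercept-resend attacker who guesses bases at random disturbs the photons and shows up as roughly 25 percent errors in the sifted key.

```python
import random

random.seed(7)


def bb84(n, eavesdrop=False):
    """Return (sifted key length, error rate in the sifted key)."""
    alice_bits  = [random.randint(0, 1) for _ in range(n)]
    alice_bases = [random.randint(0, 1) for _ in range(n)]  # 0: +, 1: x
    bob_bases   = [random.randint(0, 1) for _ in range(n)]

    bob_bits = []
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        if eavesdrop:
            e_basis = random.randint(0, 1)  # Eve measures in a random basis
            if e_basis != a_basis:          # wrong basis randomizes her result
                bit = random.randint(0, 1)
            a_basis = e_basis               # photon is re-sent in Eve's basis
        if b_basis == a_basis:
            bob_bits.append(bit)            # matching basis: faithful readout
        else:
            bob_bits.append(random.randint(0, 1))

    # Sifting: keep only positions where Alice's and Bob's bases agree.
    sifted = [(a, b) for a, b, ab, bb
              in zip(alice_bits, bob_bits, alice_bases, bob_bases) if ab == bb]
    errors = sum(a != b for a, b in sifted)
    return len(sifted), errors / len(sifted)

print("no eavesdropper:  ", bb84(4000))
print("intercept-resend: ", bb84(4000, eavesdrop=True))
```

Alice and Bob detect the attack by publicly comparing a sample of the sifted key: a near-zero error rate means the channel was clean, while an error rate near one in four means someone was listening.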

Via Mariaschnee
Rescooped by Dr. Stefan Gruenwald from Bioinformatics, Comparative Genomics and Molecular Evolution!

Web resource: 1000 Fungal Genomes Project (2016)


Sequencing unsampled fungal diversity.  Efforts to sequence 1000+ fungal genomes. Also see the Google+ site for more discussion opportunities.


This project is in collaboration with the work of the JGI and you can find links on this site to the nomination page for submitting candidate species to the project.

Via Kamoun Lab @ TSL, Arjen ten Have
Scooped by Dr. Stefan Gruenwald!

Discover stories within data using SandDance, a new Microsoft Research project


Data can be daunting. But within those numbers and spreadsheets is a wealth of information. There are also stories that the data can tell, if you’re able to see them. SandDance, a new Microsoft Garage project from Microsoft Research, helps you visually explore data sets to find stories and extract insights. It uses a free Web and touch-based interface to help users dynamically navigate through complex data they upload into the tool.


While data science experts will find that SandDance is a powerful tool, its ease of use gives people who aren’t experts in data science or programming the ability to analyze information – and present it – in a way that is accessible to a wider audience.


“We had this notion that a lot of visualization summarized data, and that summary is great, but sometimes you need the individual elements of your data set too,” says Steven Drucker, a principal researcher who’s focused on information visualization and data collections.


“We don’t want to lose sight of the trees because of the forest, but we also want to see the forest and the overall shape of the data. With this, you’ll see information about individuals and how they’re related to each other. Most tools show one thing or the other. With SandDance, you can look at data from many different angles.”

Scooped by Dr. Stefan Gruenwald!

These new quantum dot crystals could replace silicon in super-fast, next-gen computers


Solid, crystalline structures of incredibly tiny particles known as quantum dots have been developed by engineers in the US, and they're so close to perfect, they could be a serious contender for a silicon alternative in the super-fast computers of the future.


Just as single-crystal silicon wafers revolutionised computing technology more than 60 years ago (your phone, laptop, PC, and iPad wouldn’t exist without one), quantum dot solids could change everything about how we transmit and process information in the decades to come.


But despite the incredible potential of quantum dot crystals in computing technology, researchers have been struggling for years to organise each individual dot into a perfectly structured solid - something that’s crucial if you want to install it in a processor and run an electric charge through it.


The problem? Past efforts to build something out of quantum dots - which are made up of a mere 5,000 atoms each - have failed, because researchers couldn’t figure out how to 'glue' them together without using another type of material that messes with their performance.


"Previously, they were just thrown together, and you hoped for the best," lead researcher Tobias Hanrath from Cornell University told The Christian Science Monitor. "It was like throwing a couple thousand batteries into a bathtub and hoping you get charge flowing from one end to the other."


Instead of pursuing different chemicals and materials that could work as the 'glue' but hinder the quantum dot’s electrical properties, Hanrath and his team have figured out how to ditch the glue and stick the quantum dots to each other, Lego-style.

"If you take several quantum dots, all perfectly the same size, and you throw them together, they’ll automatically align into a bigger crystal," Hanrath says.


To achieve this, the researchers first made nanocrystals from lead and selenium, and built these into crystalline fragments. These fragments were then used to form two-dimensional, square-shaped 'superstructures' - tiny building blocks that attach to each other without the help of other atoms. 


Publishing the results in Nature Materials, the team claims that the electrical properties of these superstructures are potentially superior to all other existing semiconductor nanocrystals, and they could be used in new types of devices for super-efficient energy absorption and light emission.


The structures aren’t entirely perfect though, which is a key limitation of using quantum dots as your building blocks. While every silicon atom is exactly the same size, each quantum dot can vary by about 5 percent, and even when we’re talking about something that’s a few thousand atoms small, that 5 percent size variability is all it takes to prevent perfection.


Hanrath says that’s a good and a bad thing - good because they managed to hit the limits of what can be done with quantum dot solids, but bad, because they’ve hit the limits of what can be done with quantum dot solids.


"It's the equivalent of saying, 'Now we've made a really large single-crystal wafer of silicon, and you can do good things with it,'" he says in a press release. "That's the good part, but the potentially bad part of it is, we now have a better understanding that if you wanted to improve on our results, those challenges are going to be really, really difficult.”
