Amazing Science
Amazing science facts - 3D_printing • aging • AI • anthropology • art • astronomy • bigdata • bioinformatics • biology • biotech • chemistry • computers • cosmology • education • environment • evolution • future • genetics • genomics • geosciences • green_energy • language • map • material_science • math • med • medicine • microscopy • nanotech • neuroscience • paleontology • photography • photonics • physics • postings • robotics • science • technology • video
Rescooped by Dr. Stefan Gruenwald from Popular Science

Online collaboration: Scientists and the social network

Giant academic social networks have taken off to a degree that no one expected even a few years ago. A Nature survey explores why.

Via Neelima Sinha
Rescooped by Dr. Stefan Gruenwald from Tracking the Future

Computer science: The learning machines


Using massive amounts of data to recognize photos and speech, deep-learning computers are taking a big step towards true artificial intelligence. Three years ago, researchers at the secretive Google X lab in Mountain View, California, extracted some 10 million still images from YouTube videos and fed them into Google Brain — a network of 1,000 computers programmed to soak up the world much as a human toddler does. After three days of looking for recurring patterns, Google Brain decided, all on its own, that there were certain repeating categories it could identify: human faces, human bodies and … cats [1].

 

Google Brain's discovery that the Internet is full of cat videos provoked a flurry of jokes from journalists. But it was also a landmark in the resurgence of deep learning: a three-decade-old technique in which massive amounts of data and processing power help computers to crack messy problems that humans solve almost intuitively, from recognizing faces to understanding language.

 

Deep learning itself is a revival of an even older idea for computing: neural networks. These systems, loosely inspired by the densely interconnected neurons of the brain, mimic human learning by changing the strength of simulated neural connections on the basis of experience. Google Brain, with about 1 million simulated neurons and 1 billion simulated connections, was ten times larger than any deep neural network before it. Project founder Andrew Ng, now director of the Artificial Intelligence Laboratory at Stanford University in California, has gone on to make deep-learning systems ten times larger again.
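To make the idea of "changing the strength of simulated neural connections on the basis of experience" concrete, here is a minimal sketch of a single simulated neuron (a perceptron) adjusting its weights from labeled examples. This is a toy illustration of the learning rule only, not Google Brain's code, and the task (learning the logical AND function) is our own choice.

```python
# A toy illustration of "learning by changing connection strengths":
# a single simulated neuron nudges its weights after each example
# until its output matches experience. Not Google Brain's actual code.

def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn weights for one simulated neuron from labeled examples."""
    w = [0.0, 0.0]                      # strengths of two input connections
    b = 0.0                             # bias (offset of the firing threshold)
    for _ in range(epochs):
        for x, target in examples:
            out = 1.0 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0.0
            err = target - out          # experience: how wrong were we?
            w[0] += lr * err * x[0]     # strengthen/weaken each connection
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# Toy "experience": the logical AND function as four labeled examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train_perceptron(data)
print(weights, bias)  # weights end up such that only input (1, 1) fires the neuron
```

Deep networks stack many layers of such units and tune millions of weights at once, but the underlying principle is the same adjust-from-error loop.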

 

Such advances make for exciting times in artificial intelligence (AI) — the often-frustrating attempt to get computers to think like humans. In the past few years, companies such as Google, Apple and IBM have been aggressively snapping up start-up companies and researchers with deep-learning expertise. For everyday consumers, the results include software better able to sort through photos, understand spoken commands and translate text from foreign languages. For scientists and industry, deep-learning computers can search for potential drug candidates, map real neural networks in the brain or predict the functions of proteins.


Via Szabolcs Kósa
R Schumacher & Associates LLC's curator insight, January 15, 2014 1:43 PM

Monikers such as "deep learning" may be new, but artificial intelligence has always been the Holy Grail of computer science. The applications are many, and the path is becoming less of an uphill climb.


Rescooped by Dr. Stefan Gruenwald from Tracking the Future

Aliens, computers and synthetic biology

Our capacity to partner with biology to make useful things is limited by the tools we can use to specify, design, prototype, test, and analyze natural or engineered biological systems. However, biology has typically been engaged as a "technology of last resort," in attempts to solve problems that other, more mature technologies cannot. This lecture examines recent progress on virus genome redesign and hidden DNA messages from outer space, on building living data-storage, logic, and communication systems, and on how simple but old and nearly forgotten engineering ideas are helping to make biology easier to engineer.


Via Szabolcs Kósa
Scooped by Dr. Stefan Gruenwald

Citizen science enters a new era

From China to the Congo, a new wave of volunteer projects is enlisting amateur participants to actively conduct research that benefits their communities.
Scooped by Dr. Stefan Gruenwald

Now Online: The Royal Society's 350-Year-Long Scientific Archive


The Royal Society has uploaded every article more than 70 years old, and the entire collection is searchable online. Along with Newton’s first research paper, the Philosophical Transactions of the Royal Society contain roughly 69,000 articles, including original research by Robert Boyle, William Herschel, Joseph Lister, Michael Faraday and others; Benjamin Franklin’s famous kite-and-lightning experiment; bizarre accounts of students struck by lightning; and ruminations on what Moon citizens would glimpse as they looked at Earth, among many other tales.

Rescooped by Dr. Stefan Gruenwald from Tracking the Future

How do you build a large-scale quantum computer?


How do you build a universal quantum computer? It turns out this question was addressed by theoretical physicists about 15 years ago. The answer was laid out in a research paper and has become known as the DiVincenzo criteria. The prescription is pretty clear at a glance; yet in practice the physical implementation of a full-scale universal quantum computer remains an extraordinary challenge.


To glimpse the difficulty of this task, consider the guts of a would-be quantum computer. The computational heart is composed of multiple quantum bits, or qubits, that can each store 0 and 1 at the same time. The qubits can become “entangled,” or correlated in ways that are impossible in conventional devices. A quantum computing device must create and maintain these quantum connections in order to have a speed and storage advantage over any conventional computer. That’s the upside. The difficulty arises because harnessing entanglement for computation only works when the qubits are almost completely isolated from the outside world. Isolation and control become much more difficult as more and more qubits are added to the computer. Basically, as quantum systems are made bigger, they generally lose their quantum-ness.
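The two properties described above (a qubit storing 0 and 1 at once, and entanglement between qubits) can be shown with a plain state-vector simulation. The sketch below uses only NumPy and models two ideal, noise-free qubits; it is a textbook illustration, not tied to any particular hardware.

```python
# Illustrating superposition and entanglement with a bare state-vector
# simulation of two ideal qubits (NumPy only; no quantum hardware assumed).
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate: puts a qubit into 0-and-1 superposition
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                # controlled-NOT gate: correlates two qubits

state = np.array([1, 0, 0, 0], dtype=complex)  # two qubits, both |0>
state = np.kron(H, np.eye(2)) @ state          # first qubit now stores 0 and 1 at once
state = CNOT @ state                           # the qubits become "entangled"

# Result: (|00> + |11>)/sqrt(2) -- amplitude 0.707 on |00> and on |11>,
# so measuring one qubit instantly fixes the other.
print(np.round(state, 3))
```

The fragility the article describes comes from the fact that any stray interaction with the environment effectively measures the state and destroys exactly these correlations.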


In pursuit of a quantum computer, scientists have gained amazing control over various quantum systems. One leading platform in this broad field of research is trapped atomic ions, where nearly 20 qubits have been juxtaposed in a single quantum register. However, scaling this or any other type of qubit to much larger numbers while still contained in a single register will become increasingly difficult, as the connections will become too numerous to be reliable.


Physicists led by ion-trapper Christopher Monroe at the JQI have now proposed a modular quantum computer architecture that promises scalability to much larger numbers of qubits. This research is described in the journal Physical Review A (reference below), a topical journal of the American Physical Society. The components of this architecture have individually been tested and are available, making it a promising approach. In the paper, the authors present expected performance and scaling calculations, demonstrating that their architecture is not only viable, but in some ways, preferable when compared to related schemes.

Individual qubit modules are at the computational center of this design, each one consisting of a small crystal of perhaps 10-100 trapped ions confined with electromagnetic fields. Qubits are stored in each atomic ion’s internal energy levels. Logical gates can be performed locally within a single module, and two or more ions can be entangled using the collective properties of the ions in a module.


One or more qubits from the ion trap modules are then networked through a second layer of optical fiber photonic interconnects. This higher-level layer hybridizes photonic and ion-trap technology, where the quantum state of the ion qubits is linked to that of the photons that the ions themselves emit. Photonics is a natural choice as an information bus as it is proven technology and already used for conventional information flow. In this design, the fibers are directed to a reconfigurable switch, so that any set of modules could be connected.


The switch system, which incorporates special micro-electromechanical systems (MEMS) mirrors to direct light into different fiber ports, would allow for entanglement between arbitrary modules and on-demand distribution of quantum information.
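As a rough way to picture the reconfigurable switch, the toy model below treats each ion-trap module as a node and the mirror settings as a pairing that decides which two modules' fiber ports are currently connected. The class and method names are our own invention for illustration, not from the paper.

```python
# A toy model (our own construction, not the paper's) of a reconfigurable
# photonic switch: MEMS mirror settings reduce to a pairing that says which
# two modules' emitted photons currently meet and can generate entanglement.
class OpticalSwitch:
    """Connects arbitrary pairs of ion-trap modules on demand."""

    def __init__(self, n_modules):
        self.n_modules = n_modules
        self.routes = {}  # module index -> module index it is paired with

    def connect(self, a, b):
        # Reconfigure mirrors: tear down any existing route touching a or b,
        # then pair their fiber ports so the photons can interfere.
        for m in (a, b):
            old = self.routes.pop(m, None)
            if old is not None:
                self.routes.pop(old, None)
        self.routes[a], self.routes[b] = b, a

    def entangled_pairs(self):
        return {tuple(sorted(p)) for p in self.routes.items()}

switch = OpticalSwitch(n_modules=8)
switch.connect(0, 5)   # entangle a qubit in module 0 with one in module 5
switch.connect(2, 7)
switch.connect(0, 3)   # re-route: module 0 now pairs with module 3 instead
print(switch.entangled_pairs())   # {(0, 3), (2, 7)}
```

The point of the design is visible even in this caricature: adding a module adds one fiber port, not a new wire to every other qubit, which is what keeps the architecture scalable.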


Via Szabolcs Kósa
Andreas Pappas's curator insight, March 28, 2014 4:40 AM

This article shows how scientists can increase the scale of quantum machines while still making them behave quantum mechanically, by reading the qubits with lasers instead of using conventional wiring.

Rescooped by Dr. Stefan Gruenwald from Tracking the Future

Brainlike Computers Are Learning From Experience


Computers have entered the age when they are able to learn from their own mistakes, a development that is about to turn the digital world on its head.


The first commercial version of the new kind of computer chip is scheduled to be released in 2014. Not only can it automate tasks that now require painstaking programming — for example, moving a robot’s arm smoothly and efficiently — but it can also sidestep and even tolerate errors, potentially making the term “computer crash” obsolete.


The new computing approach, already in use by some large technology companies, is based on the biological nervous system, specifically on how neurons react to stimuli and connect with other neurons to interpret information. It allows computers to absorb new information while carrying out a task, and adjust what they do based on the changing signals.


In coming years, the approach will make possible a new generation of artificial intelligence systems that will perform some functions that humans do with ease: see, speak, listen, navigate, manipulate and control. That could have enormous consequences for tasks like facial and speech recognition, navigation and planning, which are still in elementary stages and rely heavily on human programming.


Designers say the computing style can clear the way for robots that can safely walk and drive in the physical world, though a thinking or conscious computer, a staple of science fiction, is still far off on the digital horizon.


“We’re moving from engineering computing systems to something that has many of the characteristics of biological computing,” said Larry Smarr, an astrophysicist who directs the California Institute for Telecommunications and Information Technology, one of many research centers devoted to developing these new kinds of computer circuits.


Until now, the design of computers was dictated by ideas originated by the mathematician John von Neumann about 65 years ago. Microprocessors perform operations at lightning speed, following instructions programmed using long strings of 1s and 0s. They generally store that information separately in what is known, colloquially, as memory, either in the processor itself, in adjacent storage chips or in higher-capacity magnetic disk drives.


The data — for instance, temperatures for a climate model or letters for word processing — are shuttled in and out of the processor’s short-term memory while the computer carries out the programmed action. The result is then moved to its main memory.
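The shuttling pattern described here can be caricatured in a few lines: a tiny interpreter that loads values from "main memory" into processor registers, operates on them, and stores the result back. This is a schematic sketch of the von Neumann style with an invented instruction format, not a model of any real processor.

```python
# A schematic sketch of the von Neumann pattern: data is shuttled from
# memory into the processor's short-term storage (registers), operated on
# one instruction at a time, and the result is moved back to main memory.
memory = {"temp_a": 21.5, "temp_b": 23.1, "mean": None}   # "main memory"

def run(program, memory):
    registers = {}                            # processor's short-term storage
    for op, dst, *srcs in program:            # fetch the next instruction
        if op == "LOAD":
            registers[dst] = memory[srcs[0]]              # memory -> processor
        elif op == "ADD":
            registers[dst] = registers[srcs[0]] + registers[srcs[1]]
        elif op == "DIV":
            registers[dst] = registers[srcs[0]] / srcs[1]
        elif op == "STORE":
            memory[dst] = registers[srcs[0]]              # result -> main memory

program = [                      # average two temperatures, e.g. for a climate model
    ("LOAD", "r0", "temp_a"),
    ("LOAD", "r1", "temp_b"),
    ("ADD",  "r2", "r0", "r1"),
    ("DIV",  "r2", "r2", 2),
    ("STORE", "mean", "r2"),
]
run(program, memory)
print(memory["mean"])   # 22.3
```

Neuromorphic designs abandon exactly this separation: in them, the "memory" (connection weights) and the "computation" (signal propagation) live in the same physical components.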


The new processors consist of electronic components that can be connected by wires that mimic biological synapses. Because they are based on large groups of neuron-like elements, they are known as neuromorphic processors, a term credited to the California Institute of Technology physicist Carver Mead, who pioneered the concept in the late 1980s.


They are not “programmed.” Rather, the connections between the circuits are “weighted” according to correlations in data that the processor has already “learned.” Those weights are then altered as data flows into the chip, causing them to change their values and to “spike.” That generates a signal that travels to other components and, in reaction, changes the neural network, in essence programming the next actions much the same way that information alters human thoughts and actions.
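One simple, generic way to picture "weighted connections that spike and adapt" is a leaky integrate-and-fire neuron whose input weights are strengthened whenever they contribute to a spike. The sketch below is our own toy model (the threshold, leak, and learning-rate values are arbitrary), not the behavior of any vendor's chip.

```python
# A loose sketch of the "weighted connections + spikes" behavior described
# above: a generic leaky integrate-and-fire neuron, not any vendor's chip.
def simulate(inputs, weights, threshold=1.0, leak=0.9, lr=0.05):
    """Feed spike trains through weighted synapses; fire and adapt."""
    potential = 0.0
    spikes = []
    for t, frame in enumerate(inputs):          # frame: which inputs spiked now
        potential *= leak                       # membrane charge slowly decays
        potential += sum(w for w, active in zip(weights, frame) if active)
        if potential >= threshold:              # enough correlated input...
            spikes.append(t)                    # ...makes the neuron "spike"
            potential = 0.0
            # learning: strengthen the synapses that contributed to the spike
            weights = [w + lr if active else w
                       for w, active in zip(weights, frame)]
    return spikes, weights

inputs = [(1, 0), (1, 1), (0, 1), (1, 1), (1, 1)]   # two input channels over time
spikes, weights = simulate(inputs, weights=[0.4, 0.3])
print(spikes, [round(w, 2) for w in weights])        # [1, 3] [0.5, 0.4]
```

Because the weights change as data streams through, the "program" is never written down anywhere; it is whatever the network's connections have become.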


“Instead of bringing data to computation as we do today, we can now bring computation to data,” said Dharmendra Modha, an I.B.M. computer scientist who leads the company’s cognitive computing research effort. “Sensors become the computer, and it opens up a new way to use computer chips that can be everywhere.”


Via Szabolcs Kósa
VendorFit's curator insight, December 31, 2013 3:27 PM

Artificial intelligence is the holy grail of technological achievement: creating an entity that can learn from its own mistakes and can, independently of programmer intervention, develop new routines and programs. The New York Times claims that the first ever "learning" computer chip is to be released in 2014, an innovation that has profound consequences for the tech market. When these devices become cheaper, they should allow for robotics and device manufacture that incorporates more detailed sensory input and can parse real objects, like faces, from background noise.

Laura E. Mirian, PhD's curator insight, January 10, 2014 1:16 PM

The Singularity is not far away

Scooped by Dr. Stefan Gruenwald

World Science Festival - Webcasts


The World Science Festival is a production of the Science Festival Foundation, a 501(c)(3) non-profit organization headquartered in New York City. The Foundation’s mission is to cultivate a general public informed by science, inspired by its wonder, convinced of its value, and prepared to engage with its implications for the future.

 

The World Science Festival’s signature event is an annual celebration and exploration of science that launched in 2008. Hailed as a “new cultural institution” by the New York Times, the Festival has featured such luminaries as Stephen Hawking, E.O. Wilson, Sir Paul Nurse, Harold Varmus, Daniel Dennett, Eric Lander, Steven Chu, Richard Leakey, Sylvia Earle, Yo-Yo Ma, Oliver Sacks, Mary-Claire King, Chuck Close, Philip Glass, Charlie Kaufman, Glenn Close, Anna Deavere Smith, Bobby McFerrin, Maggie Gyllenhaal, Liev Schreiber, John Lithgow, Bill T. Jones, Charlie Rose, John Hockenberry, Elizabeth Vargas and Walter Isaacson.

Rescooped by Dr. Stefan Gruenwald from Singularity Scoops

2011 year in review

2011 was an amazing year of accelerating developments in science and technology. These articles offer a good summary. - Ed.

Via Frederic Emam-Zade Gerardino