Amazing Science
Amazing science facts - 3D_printing • aging • AI • anthropology • art • astronomy • bigdata • bioinformatics • biology • biotech • chemistry • computers • cosmology • education • environment • evolution • future • genetics • genomics • geosciences • green_energy • history • language • map • material_science • math • med • medicine • microscopy • nanotech • neuroscience • paleontology • photography • photonics • physics • postings • robotics • science • technology • video
Scooped by Dr. Stefan Gruenwald

Facebook's new AI software can recognize you in photos even if you're not looking


Thanks to the latest advances in computer vision, we now have machines that can pick you out of a line-up. But what if your face is hidden from view? An experimental algorithm out of Facebook's artificial intelligence lab can recognise people in photographs even when it can't see their faces. Instead it looks for other unique characteristics like your hairdo, clothing, body shape and pose.


Modern face-recognition algorithms are so good they've already found their way into social networks, shops and even churches. Yann LeCun, head of artificial intelligence at Facebook, wanted to see if they could be adapted to recognise people in situations where someone's face isn't clear, something humans can already do quite well.


"There are a lot of cues we use. People have characteristic aspects, even if you look at them from the back," LeCun says. "For example, you can recognize Mark Zuckerberg very easily, because he always wears a gray T-shirt."


The research team pulled almost 40,000 public photos from Flickr - some showing people with their full face clearly visible, and others in which they were turned away - and ran them through a sophisticated neural network.


The final algorithm was able to recognise individual people's identities with 83 per cent accuracy. It was presented earlier this month at the Computer Vision and Pattern Recognition conference in Boston, Massachusetts. An algorithm like this could one day help power photo apps like Facebook's Moments, released last week.
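To give a concrete sense of what "training a neural network on labelled photos" involves, here is a minimal sketch in Python. It is not Facebook's system or architecture; the folder layout, image size and tiny network are illustrative assumptions only.

```python
# Minimal sketch only -- NOT Facebook's algorithm. Assumes photos are stored as
# person_photos/train/<person_name>/*.jpg (an invented layout for illustration).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((128, 64)),   # tall crops: whole body, not just the face
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("person_photos/train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = nn.Sequential(              # tiny CNN over body-shape/clothing/pose cues
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 32 * 16, len(train_set.classes)),   # one output per person
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```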


Moments scours through a phone's photos, sorting them into separate events like a friend's wedding or a trip to the beach and tagging whoever it recognises as a Facebook friend. LeCun also imagines such a tool would be useful for the privacy-conscious - alerting someone whenever a photo of themselves, however obscured, pops up on the internet.


The flipside is also true: the ability to identify someone even when they are not looking at the camera raises some serious privacy implications. Last week, talks over rules governing facial recognition collapsed after privacy advocates and industry groups could not agree.


"If, even when you hide your face, you can be successfully linked to your identify, that will certainly concern people," says Ralph Gross at Carnegie Mellon University in Pittsburgh, Pennsylvania, who says the algorithm is impressive. "Now is a time when it's important to discuss these questions."

Scooped by Dr. Stefan Gruenwald

Network model for tracking Twitter memes sheds light on information spreading in the brain


An international team of researchers from Indiana University and Switzerland is using data mapping methods created to track the spread of information on social networks to trace its dissemination across a surprisingly different system: the human brain.


The research team from the IU Bloomington College of Arts and Sciences' Department of Psychological and Brain Sciences and the IU Bloomington School of Informatics and Computing found that applying social network models to the brain reveals specific connections and nodes that may be responsible for higher forms of cognition. The results are reported in the journal Neuron.


"This study suggests that answers about where in the brain higher cognition occurs may lie in the way that these areas are embedded in the network," said IU Distinguished Professor Olaf Sporns, who is senior author on the study. "You can't see this just by looking at a static network. You need to look at dynamic patterns.


"Each thought or action involves multiple signals, cascading through the brain, turning on other nodes as they spread. Where these cascades come together, that's where integration of multiple signals can occur. We think that this sort of integration is a hallmark of higher cognition."


Other lead researchers on the paper are Yong-Yeol Ahn and Alessandro Flammini, both of the IU Bloomington School of Informatics and Computing. An expert on complex networks, Ahn had previously used data from Twitter to track information spreading through social networks, including constructing analyses that predict which memes will go viral.


To conduct the brain study, the team performed diffusion spectrum imaging on the brains of 40 research volunteers at University Hospital Lausanne in Switzerland. The team then created a composite map of regions and long-range connections in the brain and applied a dynamic model for information spreading based in part upon Ahn's model for tracking viral memes.
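The Neuron paper's actual model is more sophisticated, but the basic idea of running a meme-style cascade over a connectome can be sketched in a few lines of Python. The random small-world graph below is a stand-in for the real 40-subject composite map, and the spreading probability is an arbitrary illustrative value.

```python
# Toy cascade on a stand-in "connectome" -- not the authors' model.
import random
import networkx as nx

random.seed(0)
G = nx.watts_strogatz_graph(200, k=8, p=0.1)   # 200 "regions" with small-world wiring

def cascade(G, seed, p_spread=0.3, steps=10):
    """Nodes reached by a signal that starts at `seed` and spreads probabilistically."""
    active, frontier = {seed}, {seed}
    for _ in range(steps):
        new = set()
        for node in frontier:
            for nbr in G.neighbors(node):
                if nbr not in active and random.random() < p_spread:
                    new.add(nbr)
        active |= new
        frontier = new
    return active

# Regions reached by many independently started cascades are candidate
# "integration" sites where multiple signals converge.
counts = {n: 0 for n in G}
for seed in random.sample(list(G.nodes), 20):
    for n in cascade(G, seed):
        counts[n] += 1
print("most frequently co-activated regions:",
      sorted(counts, key=counts.get, reverse=True)[:5])
```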


"Like information in social networks, information in the brain is traveling along connections that form complex networks," said Sporns, who is a co-founder of the emerging field of connectomics, which aims to produce comprehensive network maps of the neural elements in the brain and their interconnections. "It was not too far a stretch for us to think of the brain this way."

Scooped by Dr. Stefan Gruenwald

Futuristic Components on Silicon Chips Fabricated Successfully


A team of IBM researchers in Zurich, Switzerland with support from colleagues in Yorktown Heights, New York has developed a relatively simple, robust and versatile process for growing crystals made from compound semiconductor materials that will allow them to be integrated onto silicon wafers -- an important step toward making future computer chips that will allow integrated circuits to continue shrinking in size and cost even as they increase in performance.

Appearing this week on the cover of the journal Applied Physics Letters, from AIP Publishing, the work may allow an extension of Moore's Law, the famous observation by Gordon Moore that the number of transistors on an integrated circuit doubles about every two years. In recent years some in the industry have speculated that our ability to keep pace with Moore's Law may eventually be exhausted unless new technologies come along to extend its life.

"The whole semiconductor industry wants to keep Moore’s Law going. We need better performing transistors as we continue down-scaling, and transistors based on silicon won’t give us improvements anymore," said Heinz Schmid, a researcher with IBM Research GmbH at Zurich Research Laboratory in Switzerland and the lead author on the paper.

For consumers, extending Moore's Law will mean continuing the trend of new computer devices having increasing speed and bandwidth at reduced power consumption and cost. The new technique may also impact photonics on silicon, with active photonic components integrated seamlessly with electronics for greater functionality.

How the Work was Done: The IBM team fabricated single crystal nanostructures, such as nanowires, nanostructures containing constrictions, and cross junctions, as well as 3-D stacked nanowires, made with so-called III–V materials. Made from alloys of elements such as indium, gallium and arsenic, III-V semiconductors are seen as a possible future material for computer chips, but only if they can be successfully integrated onto silicon. So far efforts at integration have not been very successful.

The new crystals were grown using an approach called template-assisted selective epitaxy (TASE) using metal organic chemical vapor deposition, which basically starts from a small area and evolves into a much larger, defect-free crystal. This approach allowed them to lithographically define oxide templates and fill them via epitaxy, in the end making nanowires, cross junctions, nanostructures containing constrictions and 3-D stacked nanowires using the already established scaled processes of Si technology.

"What sets this work apart from other methods is that the compound semiconductor does not contain detrimental defects, and that the process is fully compatible with current chip fabrication technology," said Schmid. "Importantly the method is also economically viable."  

He added that more development will be required to achieve the same control over performance in III-V devices as currently exists for silicon. But the new method is the key to actually integrating the stacked materials on the silicon platform, Schmid said.

Scooped by Dr. Stefan Gruenwald

Brillo as an underlying operating system for the "Internet of Things"


The Project Brillo announcement was one of the highlights making news at Google's I/O conference last week. Brillo is fundamentally Google's answer to an operating system for the Internet of Things, designed to run on and connect various low-power IoT devices. If Android was Google's answer for a mobile operating system, Brillo is a mini, or lightweight, Android OS - and part of The Register's headline on the announcement story was "Google puts Android on a diet".

Brillo was developed to connect IoT objects from "washing machine to a rubbish bin and linking in with existing Google technologies," according to The Guardian.


As The Guardian also pointed out, they are not just talking about your kitchen where the fridge is telling the phone that it's low on milk; the Brillo vision goes beyond home systems to farms or to city systems where a trashbin could tell the council when it is full and needs collecting. "Bins, toasters, roads and lights will be able to talk to each other for automatic, more efficient control and monitoring."


Brillo is derived from Android. Commented Peter Bright, technology editor, Ars Technica: "Brillo is smaller and slimmer than Android, providing a kernel, hardware abstraction, connectivity, and security infrastructure." The Next Web similarly explained Brillo as "a stripped down version of Android that can run on minimal system requirements."


The Brillo debut is accompanied by another key component, Weave. This is the communications layer, and it allows the cloud, mobile, and Brillo to speak to one another. AnandTech described Weave as "an API framework meant to standardize communications between all these devices."

Scooped by Dr. Stefan Gruenwald

Beyond Moore's law: Even after Moore’s law ends, chip costs could still halve every few years


There is a popular misconception about Moore's law (the observation that the number of transistors on a chip doubles every two years) which has led many to conclude that the 50-year-old prognostication is due to end shortly. This doubling of processing power, for the same cost, has continued apace since Gordon Moore, one of Intel's founders, observed the phenomenon in 1965. At the time, a few hundred transistors could be crammed on a sliver of silicon. Today’s chips can carry billions.


Whether Moore’s law is coming to an end is moot. As far as physical barriers to further shrinkage are concerned, there is no question that, having been made smaller and smaller over the decades, crucial features within transistors are approaching the size of atoms. Indeed, quantum and thermodynamic effects that occur at such microscopic dimensions have loomed large for several years.


Until now, integrated circuits have used a two-dimensional (planar) structure, with a metal gate mounted across a flat, conductive channel of silicon. The gate controls the current flowing from a source electrode at one end of the channel to a drain electrode at the other end. A small voltage applied to the gate lets current flow through the transistor. When there is no voltage on the gate, the transistor is switched off. These two binary states (on and off) are the ones and zeros that define the language of digital devices.


However, when transistors are shrunk beyond a certain point, electrons flowing from the source can tunnel their way through the insulator protecting the gate, instead of flowing direct to the drain. This leakage current wastes power, raises the temperature and, if excessive, can cause the device to fail. Leakage becomes a serious problem when insulating barriers within transistors approach thicknesses of 3 nanometres (nm) or so. Below that, leakage increases exponentially, rendering the device pretty near useless.
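The scale of the problem can be illustrated with a toy exponential model. The decay length below is an assumed round number chosen for illustration, not a measured device parameter.

```python
# Toy model with an assumed decay length -- for illustration only, not device data.
import math

DECAY_LENGTH_NM = 0.3   # assumed characteristic tunnelling length

def relative_leakage(thickness_nm, reference_nm=3.0):
    """Leakage relative to a 3 nm barrier under a simple exponential model."""
    return math.exp((reference_nm - thickness_nm) / DECAY_LENGTH_NM)

for t in (3.0, 2.0, 1.0):
    print(f"{t:.1f} nm barrier -> roughly {relative_leakage(t):,.0f}x the 3 nm leakage")
```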


Intel, which sets the pace for the semiconductor industry, started preparing for the leakage problem several “nodes” (changes in feature size) ago. At the time, it was still making 32nm chips. The solution adopted was to turn a transistor’s flat conducting channel into a vertical fence (or fin) that stood proud of the substrate. Instead of just one small contact patch, this gave the gate straddling the fence three contact areas (a large one on either side of the fence and a smaller one across the top). With more control over the current flowing through the channel, leakage is reduced substantially. Intel reckons “Tri-Gate” processors switch 37% faster and use 50% less juice than conventional ones.


Having introduced the Tri-Gate transistor design (now known generically as FinFET) with its 22nm node, Intel is using the same three-dimensional architecture in its current 14nm chips, and expects to do likewise with its 10nm ones, due out later this year and in mainstream production by the middle of 2016. Beyond that, Intel says it has some ideas about how to make 7nm devices, but has yet to reveal details. The company’s road map shows question marks next to future 7nm and 5nm nodes, and peters out shortly thereafter.


At a recent event celebrating the 50th anniversary of Moore’s law, Intel’s 86-year-old chairman emeritus said his law would eventually collapse, but that “good engineering” might keep it afloat for another five to ten years. Mr Moore was presumably referring to further refinements in Tri-Gate architecture. No doubt he was also alluding to advanced fabrication processes, such as “extreme ultra-violet lithography” and “multiple patterning”, which seemingly achieve the impossible by being able to print transistor features smaller than the optical resolution of the printing system itself.

Scooped by Dr. Stefan Gruenwald

Chinese Search Company Baidu Built a Giant Artificial-Intelligence Supercomputer


Chinese search giant Baidu says it has invented a powerful supercomputer that brings new muscle to an artificial-intelligence technique giving software more power to understand speech, images, and written language.


The new computer, called Minwa and located in Beijing, has 72 powerful processors and 144 graphics processors, known as GPUs. Late Monday, Baidu released a paper claiming that the computer had been used to train machine-learning software that set a new record for recognizing images, beating a previous mark set by Google.


“Our company is now leading the race in computer intelligence,” said Ren Wu, a Baidu scientist working on the project, speaking at the Embedded Vision Summit on Tuesday. Minwa’s computational power would probably put it among the 300 most powerful computers in the world if it weren’t specialized for deep learning, said Wu. “I think this is the fastest supercomputer dedicated to deep learning,” he said. “We have great power in our hands—much greater than our competitors.”


Computing power matters in the world of deep learning, which has produced breakthroughs in speech, image, and face recognition and improved the image-search and speech-recognition services offered by Google and Baidu.


The technique is a souped-up version of an approach first established decades ago, in which data is processed by a network of artificial neurons that manage information in ways loosely inspired by biological brains. Deep learning involves using larger neural networks than before, arranged in hierarchical layers, and training them with significantly larger collections of data, such as photos, text documents, or recorded speech.
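Stripped of the scale, the core mechanics are simple enough to sketch in a few dozen lines of Python: a small network of artificial neurons, arranged in layers and trained by gradient descent on toy data. Everything below is illustrative and has nothing to do with Baidu's actual code or Minwa.

```python
# Toy two-layer neural network trained by gradient descent -- illustration only.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((256, 20))                        # toy input data
y = (X[:, :5].sum(axis=1) > 0).astype(float)[:, None]     # toy binary labels

W1 = rng.standard_normal((20, 32)) * 0.1; b1 = np.zeros(32)
W2 = rng.standard_normal((32, 1)) * 0.1;  b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(2000):
    h = np.tanh(X @ W1 + b1)            # hidden layer: learned intermediate features
    p = sigmoid(h @ W2 + b2)            # output layer: probability of class 1
    grad_out = (p - y) / len(X)         # gradient of the cross-entropy loss
    W2 -= lr * h.T @ grad_out;  b2 -= lr * grad_out.sum(axis=0)
    grad_h = (grad_out @ W2.T) * (1 - h**2)
    W1 -= lr * X.T @ grad_h;    b1 -= lr * grad_h.sum(axis=0)

print("training accuracy:", ((p > 0.5) == y).mean())
```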


So far, bigger data sets and networks appear to always be better for this technology, said Wu. That’s one way it differs from previous machine-learning techniques, which had begun to produce diminishing returns with larger data sets. “Once you scaled your data beyond a certain point, you couldn’t see any improvement,” said Wu. “With deep learning, it just keeps going up.” Baidu says that Minwa makes it practical to create an artificial neural network with hundreds of billions of connections—hundreds of times more than any network built before.


A paper released Monday is intended to provide a taste of what Minwa’s extra oomph can do. It describes how the supercomputer was used to train a neural network that set a new record on a standard benchmark for image-recognition software. The ImageNet Classification Challenge, as it is called, involves training software on a collection of 1.5 million labeled images in 1,000 different categories, and then asking that software to use what it learned to label 100,000 images it has not seen before.


Software is compared on the basis of how often its top five guesses for a given image miss the correct answer. The system trained on Baidu’s new computer was wrong only 4.58 percent of the time. The previous best was 4.82 percent, reported by Google in March. One month before that, Microsoft had reported achieving 4.94 percent, becoming the first to better average human performance of 5.1 percent.
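The "top five guesses" metric itself is easy to pin down in code. The sketch below scores a random stand-in classifier, not any real system, purely to show how top-5 error is computed.

```python
# Computing top-5 error: an image counts as wrong only if the correct label is
# absent from the model's five highest-scoring classes. Scores here are random.
import numpy as np

rng = np.random.default_rng(1)
num_images, num_classes = 10_000, 1_000
true_labels = rng.integers(0, num_classes, size=num_images)
scores = rng.standard_normal((num_images, num_classes))   # fake classifier scores

top5 = np.argsort(-scores, axis=1)[:, :5]                 # five best guesses per image
misses = ~(top5 == true_labels[:, None]).any(axis=1)
print(f"top-5 error: {misses.mean():.2%}")                # ~99.5% for random guessing
```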

LEONARDO WILD's curator insight, May 15, 2015 11:57 AM

Question: What IS intelligence?

I guess we're still mistaken about this elusive term so many use on a daily basis—either to degrade or upgrade your status as a human being–without really knowing what it is. Now we're going to have "stupid" 'puters vs. "intelligent" ones. Ah, yet the question remains: Psychopaths, those "snakes in suits" in high places, they are intelligent, aren't they? Yes, of course! Otherwise they wouldn't have been able to get where they are (high places). Empathy is clearly not part of the equation.

Scooped by Dr. Stefan Gruenwald

New algorithm for 3D structures from 2D images will speed up protein structure discovery 100,000 fold


One of the great challenges in molecular biology is to determine the three-dimensional structure of large biomolecules such as proteins. But this is a famously difficult and time-consuming task. The standard technique is x-ray crystallography, which involves analyzing the x-ray diffraction pattern from a crystal of the molecule under investigation. That works well for molecules that form crystals easily.


But many proteins, perhaps most, do not form crystals easily. And even when they do, they often take on unnatural configurations that do not resemble their natural shape. So finding another reliable way of determining the 3-D structure of large biomolecules would be a huge breakthrough. Today, Marcus Brubaker and a couple of pals at the University of Toronto in Canada say they have found a way to dramatically improve a 3-D imaging technique that has never quite matched the utility of x-ray crystallography.


The new technique is based on an imaging process called electron cryomicroscopy. This begins with a purified solution of the target molecule that is frozen into a thin film just a single molecule thick. This film is then photographed using a process known as transmission electron microscopy—it is bombarded with electrons and those that pass through are recorded. Essentially, this produces two-dimensional “shadowgrams” of the molecules in the film. Researchers then pick out each shadowgram and use them to work out the three-dimensional structure of the target molecule.


This process is hard for a number of reasons. First, there is a huge amount of noise in each image so even the two-dimensional shadow is hard to make out. Second, there is no way of knowing the orientation of the molecule when the shadow was taken so determining the 3-D shape is a huge undertaking.


The standard approach to solving this problem is little more than guesswork. Dream up a potential 3-D structure for the molecule and then rotate it to see if it can generate all of the shadowgrams in the dataset. If not, change the structure, test it, and so on.
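In spirit, that guess-and-check loop looks something like the following Python sketch: a toy volume, a crude orientation search, and a score for how well the candidate's projections explain the noisy images. This is an illustration of the idea, not the Toronto group's algorithm.

```python
# Toy "guess, rotate, compare" projection matching -- illustration only.
import numpy as np
from scipy.ndimage import rotate

rng = np.random.default_rng(0)
true_volume = np.zeros((32, 32, 32))
true_volume[10:22, 12:20, 8:24] = 1.0                 # toy "molecule"

def project(volume, angle_deg):
    """Rotate the volume about one axis and integrate along the beam axis."""
    rotated = rotate(volume, angle_deg, axes=(0, 2), reshape=False, order=1)
    return rotated.sum(axis=0)                        # 2-D "shadowgram"

# Noisy experimental images taken at unknown orientations.
hidden_angles = rng.uniform(0, 180, size=20)
images = [project(true_volume, a) + rng.normal(0, 2.0, (32, 32)) for a in hidden_angles]

def score(candidate, images, trial_angles=np.arange(0, 180, 10)):
    """Sum, over images, of the best squared-error match over trial orientations."""
    templates = [project(candidate, a) for a in trial_angles]
    return sum(min(((img - t) ** 2).sum() for t in templates) for img in images)

print("true volume score :", score(true_volume, images))            # lower is better
print("empty volume score:", score(np.zeros_like(true_volume), images))
```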


Obviously, this is a time-consuming process. The current state-of-the-art algorithm running on 300 cores takes two weeks to find the 3-D structure of a single molecule from a dataset of 200,000 images.

Scooped by Dr. Stefan Gruenwald

1 Billion Total Websites - Internet Live Stats

How many websites are there on the Web? Number of websites by year and growth from 1991 to 2015. Historical count and popular websites starting from the first website until today. Charts, real time counter, and interesting info.


After reaching 1 billion websites in September of 2014 - a milestone confirmed by NetCraft in its October 2014 Web Server Survey, and one that Internet Live Stats was the first to announce, as attested by this tweet from the inventor of the World Wide Web himself, Tim Berners-Lee - the number of websites in the world has subsequently declined, reverting back to a level below 1 billion. This is due to the monthly fluctuations in the count of inactive websites. We do expect, however, to exceed 1 billion websites again sometime in 2015 and to stabilize the count above this historic milestone in 2016.


Curious facts
  • The first-ever website (info.cern.ch) was published on August 6, 1991 by British physicist Tim Berners-Lee while at CERN, in Switzerland. [2] On April 30, 1993 CERN made World Wide Web ("W3" for short) technology available on a royalty-free basis to the public domain, allowing the Web to flourish.[3]
  • The World Wide Web was invented in March of 1989 by Tim Berners-Lee (see the original proposal). He also introduced the first web server, the first browser and editor (the “WorldWideWeb.app”), the Hypertext Transfer Protocol (HTTP) and, in October 1990, the first version of the "HyperText Markup Language" (HTML).[4]
  • In 2013 alone, the web has grown by more than one third: from about 630 million websites at the start of the year to over 850 million by December 2013 (of which 180 million were active).
  • Over 50% of websites today are hosted on either Apache or nginx (54% of the total as of February 2015, according to NetCraft), both open source web servers.[5] After getting very close and even briefly taking the lead in July of 2014 in terms of server market share, Microsoft has recently fallen back behind Apache. As of February 2015, 39% of servers are hosted on Apache and 29% on Microsoft. However, if the overall trend continues, in a few years from now Microsoft could become the leading web server developer for the first time in history.
Scooped by Dr. Stefan Gruenwald

Probabilistic programming does in 50 lines of code what used to take thousands

Most recent advances in artificial intelligence—such as mobile apps that convert speech to text—are the result of machine learning, in which computers are turned loose on huge data sets to look for patterns.


To make machine-learning applications easier to build, computer scientists have begun developing so-called probabilistic programming languages, which let researchers mix and match machine-learning techniques that have worked well in other contexts. In 2013, the U.S. Defense Advanced Research Projects Agency, an incubator of cutting-edge technology, launched a four-year program to fund probabilistic-programming research.


At the Computer Vision and Pattern Recognition conference in June, MIT researchers will demonstrate that on some standard computer-vision tasks, short programs—less than 50 lines long—written in a probabilistic programming language are competitive with conventional systems with thousands of lines of code. "This is the first time that we're introducing probabilistic programming in the vision area," says Tejas Kulkarni, an MIT graduate student in brain and cognitive sciences and first author on the new paper. "The whole hope is to write very flexible models, both generative and discriminative models, as short probabilistic code, and then not do anything else. General-purpose inference schemes solve the problems."


By the standards of conventional computer programs, those "models" can seem absurdly vague. One of the tasks that the researchers investigate, for instance, is constructing a 3-D model of a human face from 2-D images. Their program describes the principal features of the face as being two symmetrically distributed objects (eyes) with two more centrally positioned objects beneath them (the nose and mouth). It requires a little work to translate that description into the syntax of the probabilistic programming language, but at that point, the model is complete. Feed the program enough examples of 2-D images and their corresponding 3-D models, and it will figure out the rest for itself.
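The flavor of such a program can be conveyed with a toy example in plain Python rather than MIT's actual language: a short generative "renderer" draws a face-like image from a few random variables, and a generic inference loop recovers those variables from an observed image. All sizes, priors and parameters below are invented for illustration.

```python
# Toy probabilistic program in the "inverse graphics" style -- not MIT's system.
import numpy as np

rng = np.random.default_rng(0)
SIZE = 32

def render(eye_y, eye_dx, mouth_y):
    """Generative model: two symmetric 'eyes' and a 'mouth' below them."""
    img = np.zeros((SIZE, SIZE))
    cx = SIZE // 2
    for x in (cx - eye_dx, cx + eye_dx):
        img[eye_y - 1:eye_y + 2, x - 1:x + 2] = 1.0
    img[mouth_y - 1:mouth_y + 2, cx - 4:cx + 5] = 1.0
    return img

def prior():
    """Sample face parameters from a vague prior."""
    return rng.integers(6, 14), rng.integers(4, 10), rng.integers(18, 28)

# "Observed" photograph: rendered from hidden parameters plus pixel noise.
observed = render(9, 7, 22) + rng.normal(0, 0.3, (SIZE, SIZE))

# Generic inference: keep the sampled program trace that best explains the data.
best, best_ll = None, -np.inf
for _ in range(5000):
    params = prior()
    ll = -((render(*params) - observed) ** 2).sum()   # Gaussian log-likelihood, up to scale
    if ll > best_ll:
        best, best_ll = params, ll

print("recovered (eye_y, eye_dx, mouth_y):", best)    # close to the hidden (9, 7, 22)
```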


"When you think about probabilistic programs, you think very intuitively when you're modeling," Kulkarni says. "You don't think mathematically. It's a very different style of modeling." The new work, Kulkarni says, revives an idea known as inverse graphics, which dates from the infancy of artificial-intelligence research. Even though their computers were painfully slow by today's standards, the artificial intelligence pioneers saw that graphics programs would soon be able to synthesize realistic images by calculating the way in which light reflected off of virtual objects. This is, essentially, how Pixar makes movies.

Scooped by Dr. Stefan Gruenwald

Could analog computing accelerate highly complex scientific computer simulations?


DARPA announced today, March 19, a Request for Information (RFI) on methods for using analog approaches to speed up computation of the complex mathematics that characterize scientific computing. “The standard digital computer cluster equipped with multiple central processing units (CPUs), each programmed to tackle a particular piece of a problem, is just not designed to solve the kinds of equations at the core of large-scale simulations, such as those describing complex fluid dynamics and plasmas,” said Vincent Tang, program manager in DARPA’s Defense Sciences Office.


These critical equations, known as partial differential equations, describe fundamental physical principles like motion, diffusion, and equilibrium, he notes. But they involve continuous rates of change over a large range of physical parameters relating to the problems of interest, so they don’t lend themselves to being broken up and solved in discrete pieces by individual CPUs. Examples of such problems include predicting the spread of an epidemic, understanding the potential impacts of climate change, or modeling the acoustical signature of a newly designed ship hull.
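For a sense of what a digital machine has to do with such an equation, here is the one-dimensional diffusion equation solved by brute-force time-stepping, a standard textbook scheme shown only for illustration. An analog co-processor would instead let a continuous physical quantity evolve directly.

```python
# Textbook finite-difference sketch of u_t = D * u_xx -- illustration only.
import numpy as np

D, L, N = 1.0, 1.0, 100          # diffusivity, domain length, number of grid points
dx = L / N
dt = 0.4 * dx**2 / D             # small time step, chosen for numerical stability
u = np.zeros(N)
u[N // 2] = 1.0 / dx             # a spike of "heat" in the middle of the domain

for _ in range(500):             # march forward in many discrete steps
    u[1:-1] += D * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])

print("heat remaining on the grid:", round(float((u * dx).sum()), 3))   # close to 1
print("peak value after diffusing:", round(float(u.max()), 3))
```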


What if there were a processor specially designed for such equations? What might it look like? Analog computers solve equations by manipulating continuously changing values instead of discrete digital measurements, and have been around for more than a century. In the 1930s, for example, Vannevar Bush—who a decade later would help initiate and administer the Manhattan Project—created an analog “differential analyzer” that computed complex integrations through the use of a novel wheel-and-disc mechanism.


Their potential to excel at dynamical problems too challenging for today’s digital processors may now be bolstered by other recent breakthroughs, including advances in microelectromechanical systems, optical engineering, microfluidics, metamaterials and even approaches to using DNA as a computational platform. So it’s conceivable, Tang said, that novel computational substrates could exceed the performance of modern CPUs for certain specialized problems, if they can be scaled and integrated into modern computer architectures.


DARPA’s RFI is called Analog and Continuous-variable Co-processors for Efficient Scientific Simulation (ACCESS), available here: http://go.usa.gov/3CV43. The RFI seeks new processing paradigms that have the potential to overcome current barriers in computing performance. “In general, we’re interested in information on all approaches, analog, digital, or hybrid ones, that have the potential to revolutionize how we perform scientific simulations,” Tang said.

Scooped by Dr. Stefan Gruenwald

Researchers Create A Simulated Mouse Brain in a Virtual Mouse Body


Scientist Marc-Oliver Gewaltig and his team at the Human Brain Project (HBP) built a model mouse brain and a model mouse body, integrating them both into a single simulation and providing a simplified but comprehensive model of how the body and the brain interact with each other. "Replicating sensory input and motor output is one of the best ways to go towards a detailed brain model analogous to the real thing," explains Gewaltig.


As computing technology improves, their goal is to build the tools and the infrastructure that will allow researchers to perform virtual experiments on mice and other virtual organisms. This virtual neurorobotics platform is just one of the collaborative interfaces being developed by the HBP. A first version of the software will be released to collaborators in April. The HBP scientists used biological data about the mouse brain collected by the Allen Brain Institute in Seattle and the Biomedical Informatics Research Network in San Diego. These data contain detailed information about the positions of the mouse brain's 75 million neurons and the connections between different regions of the brain. They integrated this information with complementary data on the shapes, sizes and connectivity of specific types of neurons collected by the Blue Brain Project in Geneva.


A simplified version of the virtual mouse brain (just 200,000 neurons) was then mapped to different parts of the mouse body, including the mouse's spinal cord, whiskers, eyes and skin. For instance, touching the mouse's whiskers activated the corresponding parts of the mouse sensory cortex. And they expect the models to improve as more data comes in and gets incorporated. For Gewaltig, building a virtual organism is an exercise in data integration. By bringing together multiple sources of data of varying detail into a single virtual model and testing this against reality, data integration provides a way of evaluating – and fostering – our own understanding of the brain. In this way, he hopes to provide a big picture of the brain by bringing together separated data sets from around the world. Gewaltig compares the exercise to the 15th century European data integration projects in geography, when scientists had to patch together known smaller scale maps. These first attempts were not to scale and were incomplete, but the resulting globes helped guide further explorations and the development of better tools for mapping the Earth, until reaching today's precision.
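The sensory-to-cortex mapping can be caricatured in a few lines of Python: a "whisker touch" drives a handful of leaky integrate-and-fire neurons standing in for a patch of sensory cortex. This toy is not the HBP platform or its NEST model, and every constant in it is an assumption.

```python
# Toy leaky integrate-and-fire "cortex" driven by a simulated whisker touch.
import numpy as np

rng = np.random.default_rng(0)
N, T, dt = 50, 200, 1.0                  # neurons, time steps, ms per step
tau, v_thresh, v_reset = 20.0, 1.0, 0.0  # membrane time constant, threshold, reset

v = np.zeros(N)                          # membrane potentials
whisker_weights = rng.random(N) * 2.0    # how strongly the whisker drives each neuron
spikes = np.zeros((T, N), dtype=bool)

for t in range(T):
    touch = 1.0 if 50 <= t < 100 else 0.0                      # whisker touched at 50-100 ms
    drive = whisker_weights * touch + rng.normal(0, 0.02, N)   # sensory input + noise
    v += (drive - v) * dt / tau                                # leaky integration
    fired = v >= v_thresh
    spikes[t] = fired
    v[fired] = v_reset                                         # reset after a spike

print("spikes during the touch :", int(spikes[50:100].sum()))
print("spikes outside the touch:", int(spikes[:50].sum() + spikes[100:].sum()))
```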


Read more: https://www.humanbrainproject.eu
Human Brain Project: http://www.humanbrainproject.eu
NEST simulator software : http://nest-simulator.org/
Largest neuronal network simulation using NEST : http://bit.ly/173mZ5j

Open Source Data Sets:
Allen Institute for Brain Science: http://www.brain-map.org
Bioinformatics Research Network (BIRN): http://www.birncommunity.org

The Behaim Globe : 
Germanisches National Museum, http://www.gnm.de/
Department of Geodesy and Geoinformation, TU Wien, http://www.geo.tuwien.ac.at

Scooped by Dr. Stefan Gruenwald

Dr. Google joins Mayo Clinic

The deal to produce clinical summaries under the Mayo Clinic name for Google searches symbolizes the medical priesthood's acceptance that information technology has reshaped the doctor-patient relationship. More disruptions are already on the way.


If information is power, digitized information is distributed power. While “patient-centered care” has been directed by professionals towards patients, collaborative health – what some call “participatory medicine” or “person-centric care” – shifts the perspective from the patient outwards.


Collaboration means sharing. At places like Mayo and Houston’s MD Anderson Cancer Center, the doctor’s detailed notes, long seen only by other clinicians, are available through a mobile app for patients to see when they choose and share how they wish. mHealth makes the process mundane, while the content makes it an utterly radical act.


About 5 million patients nationwide currently have electronic access to open notes. Boston’s Beth Israel Deaconess Medical Center and a few other institutions are planning to allow patients to make additions and corrections to what they call “OurNotes.” Not surprisingly, many doctors remain mortified by this medical sacrilege.


Even more threatening is an imminent deluge of patient-generated health data churned out by a growing list of products from major consumer companies. Sensors are being incorporated into wearables, watches, smartphones and (in a Ford prototype) even a “car that cares” with biometric sensors in the seat and steering wheel. Sitting in your car suddenly becomes telemedicine.


To be sure, traditional information channels remain. For example, a doctor-prescribed, Food and Drug Administration-approved app uses sensors and personalized analytics to prevent severe asthma attacks. Increasingly common, though, is digitized data that doesn’t need a doctor at all. For example, a Microsoft fitness band not only provides constant heart rate monitoring, according to a New York Times review, but is part of a health “platform” employing algorithms to deliver “actionable information” and contextual analysis. By comparison, “Dr. Google” belongs in a Norman Rockwell painting.

Scooped by Dr. Stefan Gruenwald

Brain makes decisions with same method used to break WW2 Enigma code


When making simple decisions, neurons in the brain apply the same statistical trick used by Alan Turing to help break Germany’s Enigma code during World War II, according to a new study in animals by researchers at Columbia University’s Mortimer B. Zuckerman Mind Brain Behavior Institute and Department of Neuroscience. Results of the study were published Feb. 5 in Neuron.


As depicted in the film “The Imitation Game,” Alan Turing and his team of codebreakers devised the statistical technique to help them decipher German military messages encrypted with the Enigma machine. The technique today is called Wald’s sequential probability ratio test, after Columbia professor Abraham Wald, who independently developed the test to determine if batches of munitions should be shipped to the front or if they contained too many duds.


Finding pairs of messages encrypted with the same Enigma settings was critical to unlocking the code. Turing’s statistical test, in essence, decided as efficiently as possible if any two messages were a pair.


The test evaluated corresponding pairs of letters from the two messages, aligned one above the other (in the film, codebreakers are often pictured doing this in the background, sliding messages around on grids). Although the letters themselves were gibberish, Turing realized that Enigma would preserve the matching probabilities of the original messages, as some letters are more common than others.

The codebreakers assigned values to aligned pairs of letters in the two messages. Unmatched pairs were given a negative value, matched pairs a positive value.
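The accumulate-until-a-threshold logic of Wald's test is short enough to sketch directly. In the Python below, the match probabilities and decision thresholds are illustrative assumptions, not Turing's actual scoring tables: two texts derived from the same source push the running score up past one threshold, while unrelated texts push it down past the other.

```python
# Sequential probability ratio test on aligned letter pairs -- illustrative values.
import math
import random

P_MATCH_SAME = 0.6      # assumed chance two aligned letters agree if the texts are related
P_MATCH_DIFF = 1 / 26   # chance they agree for unrelated random texts

def sequential_test(text_a, text_b, upper=5.0, lower=-5.0):
    """Accumulate evidence letter by letter until one threshold is crossed."""
    score = 0.0
    for i, (a, b) in enumerate(zip(text_a, text_b), start=1):
        if a == b:
            score += math.log(P_MATCH_SAME / P_MATCH_DIFF)              # matched pair: positive value
        else:
            score += math.log((1 - P_MATCH_SAME) / (1 - P_MATCH_DIFF))  # unmatched pair: negative value
        if score >= upper:
            return "same source", i
        if score <= lower:
            return "different sources", i
    return "undecided", len(text_a)

random.seed(0)
alphabet = "abcdefghijklmnopqrstuvwxyz"
secret = "".join(random.choice(alphabet) for _ in range(200))
noisy_copy = "".join(c if random.random() < 0.6 else random.choice(alphabet) for c in secret)
unrelated = "".join(random.choice(alphabet) for _ in range(200))

print(sequential_test(secret, noisy_copy))   # decides "same source" after a handful of letters
print(sequential_test(secret, unrelated))    # decides "different sources" almost as quickly
```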

Rescooped by Dr. Stefan Gruenwald from Erba Volant - Applied Plant Science

With Google’s Support, Plant Biologists Build First Online Database Of All The World’s Plant Species

Four leading botanical gardens from around the world want to make it easier for researchers to identify plants in the field.


When plant biologists and field researchers come across a species they’ve never seen before, they turn to thick encyclopedia-like volumes called monographs with titles such as Flora Brasiliensis that characterize each species in a region in great detail. But not every species has been well-described in this literature. Thomas estimates that only 10 percent of species in the American tropics have been properly characterized. And the reference materials that do exist sometimes don’t match, or are inaccessible to anyone who doesn’t have access through a university.


Four of the world’s leading botanical gardens would like to change that. Since 2012, they have been working toward building a free online database called World Flora Online of the world’s plant species – all 350,000 of them – so that scientists can more easily identify plants and share information about them. Thomas calls it “the WebMD” for plant biology. With a fresh new round of funding this spring including a $1.2 million grant from the Alfred P. Sloan Foundation and a $600,000 commitment from Google accompanied by a pledge to provide cloud storage for the project, the consortium has expanded to include 35 affiliates from around the world.


“Plants are hugely, hugely important for us,” says Doron Weber, vice president at the Sloan Foundation. “Plant research is very promising -- it's necessary for food, for medicines, for various materials. It's also the basis of healthy ecosystems and habitats. You can be completely bottom line about this.”


Via Meristemi
Ra's curator insight, June 23, 2015 5:14 PM

This is amazing. A huge project with implications for a range of industry. Not wikipedia but wouldn't you like to be part of it!

Scooped by Dr. Stefan Gruenwald

Simulating 1 second of human brain activity takes 82,944 processors


The brain is a deviously complex biological computing device that even the fastest supercomputers in the world fail to emulate. Researchers at the Okinawa Institute of Technology Graduate University in Japan and Forschungszentrum Jülich in Germany have managed to simulate a single second of human brain activity in a very, very powerful computer. This feat of computational might was made possible by the open source simulation software known as NEST. Of course, some serious computing power was needed as well. Luckily, the team had access to the fourth fastest supercomputer in the world — the K computer at the Riken research institute in Kobe, Japan.


Using the NEST software framework, the team led by Markus Diesmann and Abigail Morrison succeeded in creating an artificial neural network of 1.73 billion nerve cells connected by 10.4 trillion synapses. While impressive, this is only a fraction of the neurons every human brain contains. Scientists believe we all carry 80-100 billion nerve cells, or about as many stars as there are in the Milky Way.


Knowing this, it shouldn’t come as a surprise that the researchers were not able to simulate the brain’s activity in real time. It took 40 minutes with the combined muscle of 82,944 processors in K computer to get just 1 second of biological brain processing time. While running, the simulation ate up about 1PB of system memory as each synapse was modeled individually.
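A quick back-of-the-envelope calculation puts those numbers in perspective. The bytes-per-synapse figure is derived here from the quoted totals, not taken from the researchers.

```python
# Back-of-the-envelope numbers implied by the figures above.
neurons  = 1.73e9        # simulated nerve cells
synapses = 10.4e12       # simulated connections
wall_time_s = 40 * 60    # 40 minutes of computing
bio_time_s  = 1.0        # 1 second of simulated brain activity
memory_bytes = 1e15      # roughly 1 PB of system memory

print("slowdown versus real time: about %.0fx" % (wall_time_s / bio_time_s))      # ~2400x
print("synapses per neuron      : about %.0f" % (synapses / neurons))             # ~6000
print("memory per synapse       : about %.0f bytes" % (memory_bytes / synapses))  # ~100 bytes
```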


Computing power will continue to ramp up while transistors scale down, which could make true neural simulations possible in real time with supercomputers. Eventually scientists without access to one of the speediest machines in the world will be able to use cluster computing to accomplish similar feats. Maybe one day a single home computer will be capable of the same thing.


Perhaps all we need for artificial intelligence is a simulation of the brain at least as complex as ours. That raises the question, if you build a brain, does it have a mind? For that matter, what happens if you make a simulated brain MORE complex than the human brain? It may not be something we want to know.

Scooped by Dr. Stefan Gruenwald

Researchers create first neural-network chip built with memristors only


A team of researchers working at the University of California (and one from Stony Brook University) has for the first time created a neural-network chip that was built using just memristors. In their paper published in the journal Nature, the team describes how they built their chip and what capabilities it has.


Memristors may sound like something from a sci-fi movie, but they actually exist—they are electronic analog memory devices that are modeled on human neurons and synapses. Human consciousness, some believe, is in reality nothing more than an advanced form of memory retention and processing, and it is analog, as opposed to computers, which of course are digital. The idea for memristors was first dreamed up by University of California professor Leon Chua back in 1971, but it was not until 2008 that a team working at Hewlett-Packard first built one. Since then, a lot of research has gone into studying the technology, but until now, no one had ever built a neural-network chip based exclusively on them.


Up till now, most neural networks have been software based, Google, Facebook and IBM, for example, are all working on computer systems running such learning networks, mostly meant to pick faces out of a crowd, or return an answer based on a human phrased question. While the gains in such technology have been obvious, the limiting factor is the hardware—as neural networks grow in size and complexity, they begin to tax the abilities of even the fastest computers. The next step, most in the field believe, is to replace transistors with memristors—each on its own is able to learn, in ways similar to the way neurons in the brain learn when presented with something new. Putting them on a chip would of course reduce the overhead needed to run such a network.


The new chip, the team reports, was created using transistor-free metal-oxide memristor crossbars and represents a basic neural network able to perform just one task—to learn and recognize patterns in very simple 3 × 3-pixel black and white images. The experimental chip, they add, is an important step towards the creation of larger neural networks that tap the real power of memristors. It also makes possible the idea of building computers in lock-step with advances in research looking into discovering just how exactly our neurons work at their most basic level.
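In software terms, the task the chip performs is roughly a single-layer perceptron over nine inputs. The Python sketch below mimics that computation digitally; the patterns, learning rate and training rule are illustrative, and the real chip does the equivalent work in analog, with memristor conductances playing the role of the weights.

```python
# Digital mimic of a single-layer classifier over 3x3 images -- illustration only.
import numpy as np

# Three idealised 3x3 patterns, flattened to 9 inputs.
patterns = {
    "z": [1,1,1, 0,1,0, 1,1,1],
    "v": [1,0,1, 1,0,1, 0,1,0],
    "n": [1,0,1, 1,1,1, 1,0,1],
}
X = np.array(list(patterns.values()), dtype=float)
labels = np.arange(len(patterns))

rng = np.random.default_rng(0)
W = rng.normal(0, 0.1, (9, 3))        # each weight plays the role of one memristor conductance
b = np.zeros(3)

for epoch in range(100):              # simple perceptron-style updates
    for x, y in zip(X, labels):
        pred = (x @ W + b).argmax()
        if pred != y:                 # nudge weights toward the correct class
            W[:, y] += 0.1 * x;  b[y] += 0.1
            W[:, pred] -= 0.1 * x;  b[pred] -= 0.1

# Classify a noisy version of the first pattern.
noisy = X[0] + rng.normal(0, 0.2, 9)
print("predicted class:", (noisy @ W + b).argmax())   # expected: 0, the "z" pattern
```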


More information: Training and operation of an integrated neuromorphic network based on metal-oxide memristors, Nature 521, 61–64 (07 May 2015) doi:10.1038/nature14441


Scooped by Dr. Stefan Gruenwald

Biodegradable computer chips made from wood


Portable electronics -- typically made of non-renewable, non-biodegradable and potentially toxic materials -- are discarded at an alarming rate in consumers' pursuit of the next best electronic gadget.


In an effort to alleviate the environmental burden of electronic devices, a team of University of Wisconsin-Madison researchers has collaborated with researchers in the Madison-based U.S. Department of Agriculture Forest Products Laboratory (FPL) to develop a surprising solution: a semiconductor chip made almost entirely of wood.


The research team, led by UW-Madison electrical and computer engineering professor Zhenqiang "Jack" Ma, described the new device in a paper published today (May 26, 2015) by the journal Nature Communications. The paper demonstrates the feasibility of replacing the substrate, or support layer, of a computer chip, with cellulose nanofibril (CNF), a flexible, biodegradable material made from wood.


"The majority of material in a chip is support. We only use less than a couple of micrometers for everything else," Ma says. "Now the chips are so safe you can put them in the forest and fungus will degrade it. They become as safe as fertilizer." Zhiyong Cai, project leader for an engineering composite science research group at FPL, has been developing sustainable nanomaterials since 2009.


"If you take a big tree and cut it down to the individual fiber, the most common product is paper. The dimension of the fiber is in the micron stage," Cai says. "But what if we could break it down further to the nano scale? At that scale you can make this material, very strong and transparent CNF paper."


Working with Shaoqin "Sarah" Gong, a UW-Madison professor of biomedical engineering, Cai's group addressed two key barriers to using wood-derived materials in an electronics setting: surface smoothness and thermal expansion.


"You don't want it to expand or shrink too much. Wood is a natural hydroscopic material and could attract moisture from the air and expand," Cai says. "With an epoxy coating on the surface of the CNF, we solved both the surface smoothness and the moisture barrier."

Gong and her students also have been studying bio-based polymers for more than a decade. CNF offers many benefits over current chip substrates, she says.


"The advantage of CNF over other polymers is that it's a bio-based material and most other polymers are petroleum-based polymers. Bio-based materials are sustainable, bio-compatible and biodegradable," Gong says. "And, compared to other polymers, CNF actually has a relatively low thermal expansion coefficient."

Scooped by Dr. Stefan Gruenwald

Google Tests First Error Correction in Quantum Computing


Quantum computers won’t ever outperform today’s classical computers unless they can correct for errors that disrupt the fragile quantum states of their qubits. A team at Google has taken the next huge step toward making quantum computing practical by demonstrating the first system capable of correcting such errors.


Google’s breakthrough originated with the hiring of a quantum computing research group from the University of California, Santa Barbara in the autumn of 2014. The UCSB researchers had previously built a system of superconducting quantum circuits that performed with enough accuracy to make error correction a possibility. That earlier achievement paved the way for the researchers—many now employed at Google—to build a system that can correct the errors that naturally arise during quantum computing operations. Their work is detailed in the 4 March 2015 issue of the journal Nature.


“This is the first time natural errors arising from the qubit environment were corrected,” said Rami Barends, a quantum electronics engineer at Google. “It’s the first device that can correct its own errors.”


Quantum computers have the potential to perform many simultaneous calculations by relying upon quantum bits, or qubits, that can represent information as both 1 and 0 at the same time. That gives quantum computing a big edge over today’s classical computers that rely on bits that can only represent either 1 or 0.


But a huge challenge in building practical quantum computers involves preserving the fragile quantum states of qubits long enough to run calculations. The solution that Google and UCSB have demonstrated is a quantum error-correction code that uses simple classical processing to correct the errors that arise during quantum computing operations.


Such codes can’t directly detect errors in qubits without disrupting the fragile quantum states. But they get around that problem by relying on entanglement, a physics phenomenon that enables a single qubit to share its information with many other qubits through a quantum connection. The codes exploit entanglement with an architecture that includes “measurement” qubits entangled with neighboring “data” qubits.


The Google and UCSB team has been developing a specific quantum error-correction code called “surface code.” They eventually hope to build a 2-D surface code architecture based on a checkerboard arrangement of qubits, so that “white squares” would represent the data qubits that perform operations and “black squares” would represent measurement qubits that can detect errors in neighboring qubits.


For now, the researchers have been testing the surface code in a simplified “repetition code” architecture that involves a linear, 1-D array of qubits. Their unprecedented demonstration of error correction used a repetition code architecture that included nine qubits. They tested the repetition code through the equivalent of 90,000 test runs to gather the necessary statistics about its performance.
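The logic of a repetition code is easy to see in a purely classical simulation: store one logical bit redundantly, flip each copy with some probability, and recover the value by majority vote. The toy below ignores everything genuinely quantum (no entanglement, no measurement qubits) and only illustrates why redundancy suppresses the logical error rate; the 5 percent error probability is an arbitrary choice.

```python
# Classical toy of a 9-bit repetition code with majority-vote decoding.
import random

random.seed(0)

def encode(bit, n=9):
    return [bit] * n                             # store one logical bit in nine physical bits

def apply_noise(bits, p_flip=0.05):
    return [b ^ (random.random() < p_flip) for b in bits]

def decode(bits):
    return int(sum(bits) > len(bits) // 2)       # majority vote recovers the logical bit

trials, failures = 100_000, 0
for _ in range(trials):
    logical = random.randint(0, 1)
    if decode(apply_noise(encode(logical))) != logical:
        failures += 1

print("physical error rate : 5.0%")
print("logical error rate  : %.4f%%" % (100 * failures / trials))   # far below 5%
```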


Pablo Vicente Munuera's curator insight, May 23, 2015 4:05 AM

Quantum computers are coming... :D

Scooped by Dr. Stefan Gruenwald

Mystery botnet hijacks broadband routers to offer DDoS-for-hire


A rival hacker group to the infamous Lizard Squad has been discovered quietly using a previously unknown global botnet of compromised broadband routers to carry out DDoS and Man-in-the-Middle (MitM) attacks.


The discovery was made by security firm Incapsula (recently acquired by Imperva), which first noticed attacks against a few dozen of its customers in December 2014; since then, the firm estimates, the botnet has grown to exceed 40,000 IPs across 1,600 ISPs, with at least 60 command and control (C2) nodes.


Almost all of the compromised routers appear to be unidentified ARM-based models from a single US vendor, Ubiquiti, whose products are sold across the world, including in the UK. Incapsula detected traffic from compromised devices in 109 countries, overwhelmingly in Thailand and in router-compromise hotspot Brazil.


The compromise that allowed the Ubiquiti routers to be botted in the first place appears to be connected to one of two vulnerabilities. The first is simply that the devices have been left with their vendor username and password in its default state – perhaps a sign that some of these devices are older – allowing the attackers easy access.


The second and more unexpected flaw is that the routers also allow remote access to HTTP and SSH via default ports, a configuration issue which would be open sesame to attackers. Once a router was compromised, the attackers appear to have injected a number of pieces of malware, mainly the Linux Spike Trojan, aka ‘MrBlack’, used to carry out DDoS attacks. The firm inspected 13,000 malware samples and found evidence of other DDoS tools, including Dorfloo and Mayday.


The C2s for these tools were found to be in several countries, with 73 percent in China and 21 percent in the US. This doesn’t mean the attackers were based there, simply using infrastructure on hosts in those locations.


“Given how easy it is to hijack these devices, we expect to see them being exploited by additional perpetrators. Even as we conducted our research, the Incapsula security team documented numerous new malware types being added—each compounding the threat posed by the existence of these botnet devices,” said the firm’s researchers.

Scooped by Dr. Stefan Gruenwald

The Current State of Machine Intelligence


A few years ago, investors and startups were chasing “big data”. Now we’re seeing a similar explosion of companies calling themselves artificial intelligence, machine learning, or collectively “machine intelligence”. The Bloomberg Beta fund, which is focused on the future of work, has been investing in these approaches.


Computers are learning to think, read, and write. They’re also picking up human sensory function, with the ability to see and hear (arguably to touch, taste, and smell, though those have been of a lesser focus).


Machine intelligence technologies cut across a vast array of problem types (from classification and clustering to natural language processing and computer vision) and methods (from support vector machines to deep belief networks). All of these technologies are reflected on this landscape.


What this landscape doesn’t include, however important, is “big data” technologies. Some have used this term interchangeably with machine learning and artificial intelligence, but I want to focus on the intelligence methods rather than data, storage, and computation pieces of the puzzle for this landscape (though of course data technologies enable machine intelligence).


We’ve seen a few great articles recently outlining why machine intelligence is experiencing a resurgence, documenting the enabling factors of this resurgence. Kevin Kelly, for example, chalks it up to cheap parallel computing, large datasets, and better algorithms.


Machine intelligence is enabling applications we already expect like automated assistants (Siri), adorable robots (Jibo), and identifying people in images (like the highly effective but unfortunately named DeepFace). However, it’s also doing the unexpected: protecting children from sex trafficking, reducing the chemical content in the lettuce we eat, helping us buy shoes online that fit our feet precisely, and destroying 80's classic video games.


Big companies have a disproportionate advantage, especially those that build consumer products. The giants in search (Google, Baidu), social networks (Facebook, LinkedIn, Pinterest), content (Netflix, Yahoo!), mobile (Apple) and e-commerce (Amazon) are in an incredible position. They have massive datasets and constant consumer interactions that enable tight feedback loops for their algorithms (and these factors combine to create powerful network effects) — and they have the most to gain from the low hanging fruit that machine intelligence bears.

Best-in-class personalization and recommendation algorithms have enabled these companies’ success (it’s both impressive and disconcerting that Facebook recommends you add the person you had a crush on in college and Netflix tees up that perfect guilty pleasure sitcom).

Now they are all competing in a new battlefield: the move to mobile. Winning mobile will require lots of machine intelligence: state of the art natural language interfaces (like Apple’s Siri), visual search (like Amazon’s “FireFly”), and dynamic question answering technology that tells you the answer instead of providing a menu of links (all of the search companies are wrestling with this).

Large enterprise companies (IBM and Microsoft) have also made incredible strides in the field, though they don’t have the same human-facing requirements, so they are focusing their attention more on knowledge representation tasks on large industry datasets, like IBM Watson’s application to assist doctors with diagnoses.
more...
John Vollenbroek's curator insight, April 25, 2015 2:53 AM

I like this overview

pbernardon's curator insight, April 26, 2015 2:33 AM

A clear and very interesting infographic and map of artificial intelligence and the resulting uses that organizations will need to adopt.

 

#bigdata 

Scooped by Dr. Stefan Gruenwald
Scoop.it!

Photon afterglow could transmit information without transmitting energy

Photon afterglow could transmit information without transmitting energy | Amazing Science | Scoop.it

Physicists have theoretically shown that it is possible to transmit information from one location to another without transmitting energy. Instead of using real photons, which always carry energy, the technique uses a small, newly predicted quantum afterglow of virtual photons that do not need to carry energy. Although no energy is transmitted, the receiver must provide the energy needed to detect the incoming signal—similar to the way that an individual must pay to receive a collect call.


The physicists, Robert H. Jonsson, Eduardo Martín-Martínez, and Achim Kempf, at the University of Waterloo (Martín-Martínez and Kempf are also with the Perimeter Institute), have published a paper on the concept in a recent issue of Physical Review Letters.

Currently, any information transmission protocol also involves energy transmission. This is because these protocols use real photons to transmit information, and all real photons carry energy, so the information and energy are inherently intertwined.


Most of the time when we talk about electromagnetic fields and photons, we are talking about real photons. The light that reaches our eyes, for example, consists only of real photons, which carry both information and energy. However, all electromagnetic fields contain not only real photons, but also virtual photons, which can be thought of as "imprints on the quantum vacuum." The new discovery shows that, in certain circumstances, virtual photons that do not carry energy can be used to transmit information.


The physicists showed how to achieve this energy-less information transmission by doing two things: "First, we use quantum antennas, i.e., antennas that are in a quantum superposition of states," Kempf told Phys.org. "For example, with current quantum optics technology, atoms can be used as such antennas. Secondly, we use the fact that, when real photons are emitted (and propagate at the speed of light), the photons leave a small afterglow of virtual photons that propagate slower than light. This afterglow does not carry energy (in contrast to real photons), but it does carry information about the event that generated the light. Receivers can 'tap' into that afterglow, spending energy to recover information about light that passed by a long time ago."


The proposed protocol has another somewhat unusual requirement: it can only take place in spacetimes with dimensions in which virtual photons can travel slower than the speed of light. For instance, the afterglow would not occur in our 3+1 dimensional spacetime if spacetime were completely flat. However, our spacetime does have some curvature, and that makes the afterglow possible.


These ideas also have implications for cosmology. In a paper to be published in a future issue of Physical Review Letters, Martín-Martínez and collaborators A. Blasco, L. Garay, and M. Martin-Benito have investigated these implications. "In that work, it is shown that the afterglow of events that happened in the early Universe carries more information than the light that reaches us from those events," Martín-Martínez said. "This is surprising because, up until now, it has been believed that real quanta, such as real photons of light, are the only carriers of information from the early Universe."

more...
No comment yet.
Scooped by Dr. Stefan Gruenwald
Scoop.it!

'Google Maps' for the body: A biomedical revolution down to a single cell

'Google Maps' for the body: A biomedical revolution down to a single cell | Amazing Science | Scoop.it
Scientists are using previously top-secret technology to zoom through the human body down to the level of a single cell. They are also using cutting-edge microtome and MRI technology to examine how movement and weight bearing affect the movement of molecules within joints, exploring the relationship between blood, bone, lymphatics and muscle.


UNSW biomedical engineer Melissa Knothe Tate is using previously top-secret semiconductor technology to zoom through organs of the human body, down to the level of a single cell.


A world-first UNSW collaboration that uses previously top-secret technology to zoom through the human body down to the level of a single cell could be a game-changer for medicine, an international research conference in the United States has been told.


The imaging technology, created by the high-tech German optical and industrial measurement manufacturer Zeiss, was originally developed to scan silicon wafers for defects.


UNSW Professor Melissa Knothe Tate, the Paul Trainor Chair of Biomedical Engineering, is leading the project, which is using semiconductor technology to explore osteoporosis and osteoarthritis.


Using Google algorithms, Professor Knothe Tate -- an engineer and expert in cell biology and regenerative medicine -- is able to zoom in and out from the scale of the whole joint down to the cellular level "just as you would with Google Maps," reducing analyses that once took 25 years to complete to "a matter of weeks."
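The article does not spell out the underlying mechanics, but the standard way to get Google-Maps-style zooming over enormous images is a multi-resolution pyramid of progressively downsampled tiles. The Python sketch below shows only that generic idea; it is an assumption for illustration, not the actual Zeiss/Google pipeline used in this project.

# Sketch of a multi-resolution image pyramid, the generic idea behind
# "Google Maps"-style zooming (assumed illustration; NOT the actual
# Zeiss/Google pipeline described in the article).
import numpy as np

def build_pyramid(image, levels):
    """Return successively 2x-downsampled copies of a 2-D image array."""
    pyramid = [image]
    for _ in range(levels - 1):
        img = pyramid[-1]
        h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2   # crop to even size
        coarse = img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyramid.append(coarse)
    return pyramid

full_res = np.random.rand(1024, 1024)        # stand-in for one imaging tile
for level, img in enumerate(build_pyramid(full_res, 5)):
    print(f"zoom level {level}: {img.shape}")  # 1024^2 down to 64^2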


Her team is also using cutting-edge microtome and MRI technology to examine how movement and weight bearing affect the movement of molecules within joints, exploring the relationship between blood, bone, lymphatics and muscle. "For the first time we have the ability to go from the whole body down to how the cells are getting their nutrition and how this is all connected," said Professor Knothe Tate. "This could open the door to as yet unknown new therapies and preventions."


Professor Knothe Tate is the first to use the system in humans. She has forged a pioneering partnership with the US-based Cleveland Clinic, Brown and Stanford Universities, as well as Zeiss and Google, to help crunch the terabytes of data gathered from human hip studies. Similar research is underway at Harvard University and in Heidelberg, Germany, to map neural pathways and connections in the brains of mice.


The above story is based on materials provided by University of New South Wales.

more...
CineversityTV's curator insight, March 30, 2015 8:53 PM

What happens with the metadata? Does it stay in the public domain, or end up in the greedy hands of the elite?

Courtney Jones's curator insight, April 2, 2015 4:49 AM

New advances in biomedical technology.

Scooped by Dr. Stefan Gruenwald
Scoop.it!

Brain in your pocket: Smartphone replaces thinking, study shows

Brain in your pocket: Smartphone replaces thinking, study shows | Amazing Science | Scoop.it

In the ancient world — circa, say, 2007 — terabytes of information were not available on sleekly designed devices that fit in our pockets. While we now can turn to iPhones and Samsung Galaxys to quickly access facts both essential and trivial — the fastest way to grandmother’s house, how many cups are in a gallon, the name of the actor who played Newman on “Seinfeld” — we once had to keep such tidbits in our heads or, perhaps, in encyclopedia sets.


With the arrival of the smartphone, such dusty tomes are unnecessary. But new research suggests our devices are more than a convenience — they may be changing the way we think. In “The brain in your pocket: Evidence that Smartphones are used to supplant thinking,” forthcoming from the journal Computers in Human Behavior, lead authors Nathaniel Barr and Gordon Pennycook of the psychology department at the University of Waterloo in Ontario said those who think more intuitively and less analytically are more likely to rely on technology.


“That people typically forego effortful analytic thinking in lieu of fast and easy intuition suggests that individuals may allow their Smartphones to do their thinking for them,” the authors wrote.


What’s the difference between intuitive and analytical thinking? In the paper, the authors cite this problem: “A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?”


The brain-teaser evokes an intuitive response: the ball must cost 10 cents, right? That answer, unfortunately, is wrong: if the ball costs 10 cents, then the bat, at a dollar more, costs $1.10, and the pair comes to $1.20, not $1.10. Only analytic thinking gets you to the correct response: the ball costs 5 cents. (Confused? Five cents for the ball plus $1.05 for the bat equals $1.10.)
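Written out, the analytic route is a single line of algebra (with b standing for the price of the ball in dollars):

b + (b + 1.00) = 1.10 \quad\Longrightarrow\quad 2b = 0.10 \quad\Longrightarrow\quad b = 0.05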


It’s just this sort of analytical thinking that avid smartphone users seem to avoid. For the paper, researchers asked subjects how much they used their smartphones, then gave them tests to measure not just their intelligence, but how they processed information.

more...
No comment yet.
Scooped by Dr. Stefan Gruenwald
Scoop.it!

Google collaborates with UCSB to build a quantum device that detects and corrects its own errors

Google collaborates with UCSB to build a quantum device that detects and corrects its own errors | Amazing Science | Scoop.it

Google has launched an effort to build its own quantum computer, a technology with the potential to change computing forever. The company is about to begin designing and building hardware for a quantum computer, a type of machine that can exploit quantum physics to solve problems that would take a conventional computer millions of years. Since 2009, Google has been working with the controversial startup D-Wave Systems, which claims to make “the first commercial quantum computer.” Last year, Google purchased one of D-Wave’s machines in order to test it thoroughly. But independent tests published earlier this year found no evidence that D-Wave’s computer uses quantum physics at all to solve problems more efficiently than a conventional machine.


Now, John Martinis, a professor at University of California, Santa Barbara, has joined Google to establish a new quantum hardware lab near the university. He will try to make his own versions of the kind of chip inside a D-Wave machine. Martinis has spent more than a decade working on a more proven approach to quantum computing, and built some of the largest, most error-free systems of qubits, the basic building blocks that encode information in a quantum computer.


“We would like to rethink the design and make the qubits in a different way,” says Martinis of his effort to improve on D-Wave’s hardware. “We think there’s an opportunity in the way we build our qubits to improve the machine.” Martinis has taken a joint position with Google and UCSB that will allow him to continue his own research at the university.


Quantum computers could be immensely faster than any existing computer at certain problems. That’s because qubits working together can use the quirks of quantum mechanics to quickly discard incorrect paths to a solution and home in on the correct one. However, qubits are tricky to operate because quantum states are so delicate.
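A rough sense of why this matters: simply writing down the joint state of n qubits on a classical machine takes 2^n complex amplitudes, so the bookkeeping alone blows up exponentially. The short Python sketch below does nothing but that arithmetic; it is an editorial illustration, not a simulation of any quantum algorithm or of D-Wave's hardware.

# Back-of-the-envelope: memory needed to store the full state vector of
# n qubits classically (2**n complex amplitudes, 16 bytes each).
for n in (10, 20, 30, 40, 50):
    amplitudes = 2 ** n
    gib = amplitudes * 16 / 2 ** 30
    print(f"{n:2d} qubits -> {amplitudes:,d} amplitudes, about {gib:,.1f} GiB")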


Chris Monroe, a professor who leads a quantum computing lab at the University of Maryland, welcomed the news that one of the leading lights in the field was going to work on the question of whether designs like D-Wave’s can be useful. “I think this is a great development to have legitimate researchers give it a try,” he says.


Since showing off its first machine in 2007, D-Wave has irritated academic researchers by making claims for its computers without providing the evidence its critics say is needed to back them up. However, the company has attracted over $140 million in funding and sold several of its machines (see “The CIA and Jeff Bezos Bet on Quantum Computing”).


There is no question that D-Wave’s machine can perform certain calculations. And research published in 2011 showed that the machine’s chip harbors the right kind of quantum physics needed for quantum computing. But evidence is lacking that it uses that physics in the way needed to unlock the huge speedups promised by a quantum computer. It could be solving problems using only ordinary physics.


Martinis’s previous work has been focused on the conventional approach to quantum computing. He set a new milestone in the field this April, when his lab announced that it could operate five qubits together with relatively low error rates. Larger systems of such qubits could be configured to run just about any kind of algorithm depending on the problem at hand, much like a conventional computer. To be useful, a quantum computer would probably need to be built with tens of thousands of qubits or more.


Martinis was a coauthor on a paper published in Science earlier this year that took the most rigorous independent look at a D-Wave machine yet. It concluded that in the tests run on the computer, there was “no evidence of quantum speedup.” Without that, critics say, D-Wave is nothing more than an overhyped, and rather weird, conventional computer. The company counters that the tests of its machine involved the wrong kind of problems to demonstrate its benefits.


Martinis’s work on D-Wave’s machine led him into talks with Google, and to his new position. Theory and simulation suggest that it might be possible for annealers to deliver quantum speedups, and he considers it an open question. “There’s some really interesting science that people are trying to figure out,” he says.
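As background on what an "annealer" is actually asked to do: D-Wave-style machines target the minimum of an Ising/QUBO energy function over binary variables. The Python sketch below is ordinary classical simulated annealing on a toy Ising energy, included only as a hedged illustration of that problem shape; it is not D-Wave's quantum procedure and implies nothing about quantum speedup.

# Classical simulated annealing on a toy Ising-style energy function.
# Illustrative analogue of the optimization problems annealers target;
# NOT D-Wave's quantum annealing procedure.
import math
import random

random.seed(0)
n = 12
# random +1/-1 couplings between pairs of spins (toy problem, assumed)
J = {(i, j): random.choice([-1.0, 1.0]) for i in range(n) for j in range(i + 1, n)}

def energy(spins):
    """Ising-style energy: sum over pairs of J_ij * s_i * s_j."""
    return sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())

spins = [random.choice([-1, 1]) for _ in range(n)]
best_energy = energy(spins)
temperature = 5.0
for step in range(20000):
    i = random.randrange(n)
    old = energy(spins)
    spins[i] *= -1                                   # propose flipping one spin
    new = energy(spins)
    # keep downhill moves; keep uphill moves only with Boltzmann probability
    if new > old and random.random() >= math.exp((old - new) / temperature):
        spins[i] *= -1                               # reject the move: undo the flip
    else:
        best_energy = min(best_energy, new)
    temperature *= 0.9997                            # gradually "cool" the system
print("lowest energy found:", best_energy)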

more...
Benjamin Chiong's curator insight, March 23, 2015 7:23 PM

Looking at Amdahl's law, it is not only data storage that matters but every component of the computer. As each piece of hardware advances, the rest of the parts need to keep up as well. Quantum computing opens up a world of massive processing power for analyzing big data. This gives us an idea of what the future might look like.

Scooped by Dr. Stefan Gruenwald
Scoop.it!

First general learning system that can learn directly from experience to master a wide range of challenging tasks

First general learning system that can learn directly from experience to master a wide range of challenging tasks | Amazing Science | Scoop.it

The gamer punches in play after endless play of the Atari classic Space Invaders. Through an interminable chain of failures, the gamer adapts its gameplay strategy to reach for the highest score. But this is no human with a joystick in a 1970s basement. Artificial intelligence is learning to play Atari games. The Atari addict is a deep-learning algorithm called DQN.


This algorithm began with no previous information about Space Invaders—or, for that matter, about the other 48 Atari 2600 games it is learning to play and sometimes master after two straight weeks of gameplay. In fact, it wasn't even designed to take on old video games; it is a general-purpose, self-teaching computer program. Yet after watching the Atari screen and fiddling with the controls for two weeks, DQN plays at a level that would humiliate even a professional flesh-and-blood gamer.


Volodymyr Mnih and his team of computer scientists at Google, who have just unveiled DQN in the journal Nature, say their creation is more than just an impressive gamer. Mnih says the general-purpose DQN learning algorithm could be the first rung on a ladder to artificial intelligence.


"This is the first time that anyone has built a single general learning system that can learn directly from experience to master a wide range of challenging tasks," says Demis Hassabis, a member of Google's team. The algorithm runs on little more than a powerful desktop PC with a souped up graphics card. At its core, DQN combines two separate advances in machine learning in a fascinating way. The first advance is a type of positive-reinforcement learning method called Q-learning. This is where DQN, or Deep Q-Network, gets its middle initial. Q-learning means that DQN is constantly trying to make joystick and button-pressing decisions that will get it closer to a property that computer scientists call "Q." In simple terms, Q is what the algorithm approximates to be biggest possible future reward for each decision. For Atari games, that reward is the game score.


Knowing which decisions will lead it to the high scorer's list, though, is no simple task. Keep in mind that DQN starts with zero information about each game it plays. To understand how to maximize your score in a game like Space Invaders, you have to recognize a thousand different facts: how the pixelated aliens move, the fact that shooting them gets you points, when to shoot, what shooting does, the fact that you control the tank, and many more assumptions, most of which a human player understands intuitively. And then, if the algorithm switches to a racing game, a side-scroller, or Pac-Man, it must learn an entirely new set of facts.

That's where the second machine-learning advance comes in: DQN is also built upon a vast artificial neural network, partially inspired by the human brain. Simply put, the neural network is a complex program built to process and sort information from noise. It tells DQN what is and isn't important on the screen.
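To make the "network that maps the screen to Q-values" idea concrete, here is a minimal NumPy forward pass from a flattened frame to one Q estimate per action. The layer sizes are arbitrary assumptions, the weights are untrained, and the real DQN uses convolutional layers plus training tricks such as experience replay, so treat this purely as a shape-of-the-data sketch.

# Minimal sketch of a network mapping screen pixels to Q-values.
# Sizes are arbitrary assumptions; the real DQN uses convolutional layers
# and is trained with the Q-learning update shown earlier.
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_hidden, n_actions = 84 * 84, 256, 18   # Atari-ish dimensions

# randomly initialized weights (an untrained network, for illustration only)
W1, b1 = rng.normal(0, 0.01, (n_hidden, n_pixels)), np.zeros(n_hidden)
W2, b2 = rng.normal(0, 0.01, (n_actions, n_hidden)), np.zeros(n_actions)

def q_values(screen):
    """Forward pass: flattened grayscale screen -> one Q estimate per action."""
    hidden = np.maximum(0.0, W1 @ screen + b1)      # ReLU layer
    return W2 @ hidden + b2

frame = rng.random(n_pixels)                        # stand-in for a game frame
q = q_values(frame)
print("greedy action:", int(np.argmax(q)), "of", n_actions)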


Nature Video of DQN AI

more...
No comment yet.