Cyborgs_Transhumanism
Trends about the next generation
Curated by luiy
Rescooped by luiy from Tracking the Future

Computer science: The learning machines | #deepLearning


Using massive amounts of data to recognize photos and speech, deep-learning computers are taking a big step towards true artificial intelligence.


Via Szabolcs Kósa
luiy's insight:

Deep learning itself is a revival of an even older idea for computing: neural networks. These systems, loosely inspired by the densely interconnected neurons of the brain, mimic human learning by changing the strength of simulated neural connections on the basis of experience. Google Brain, with about 1 million simulated neurons and 1 billion simulated connections, was ten times larger than any deep neural network before it. Project founder Andrew Ng, now director of the Artificial Intelligence Laboratory at Stanford University in California, has gone on to make deep-learning systems ten times larger again.
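As a toy illustration of what "changing the strength of simulated neural connections on the basis of experience" means in practice, here is a minimal sketch in Python (invented for this page, not anything from Google Brain itself): a single simulated neuron whose connection weights are nudged by a simple gradient rule until its outputs match its experience.

```python
# A minimal sketch (illustrative only, not Google Brain's actual code) of the
# core idea quoted above: a simulated neuron "learns" by adjusting the
# strength of its connections (weights) whenever its output disagrees with
# experience.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=3)   # simulated connection strengths
learning_rate = 0.1

def neuron(x):
    """One simulated neuron: weighted sum squashed through a sigmoid."""
    return 1.0 / (1.0 + np.exp(-weights @ x))

# "Experience": inputs paired with the outputs we want the neuron to produce.
examples = [(np.array([1.0, 0.0, 1.0]), 1.0),
            (np.array([0.0, 1.0, 0.0]), 0.0)]

for _ in range(1000):
    for x, target in examples:
        out = neuron(x)
        # Strengthen or weaken each connection in proportion to its
        # contribution to the error (gradient of the squared error).
        weights -= learning_rate * (out - target) * out * (1 - out) * x

print([round(neuron(x), 2) for x, _ in examples])  # outputs approach [1.0, 0.0]
```

A deep-learning system stacks many layers of such neurons and applies the same kind of error-driven weight update across all of them, which is why scaling to millions of neurons and billions of connections matters.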


Such advances make for exciting times in artificial intelligence (AI) — the often-frustrating attempt to get computers to think like humans. In the past few years, companies such as Google, Apple and IBM have been aggressively snapping up start-up companies and researchers with deep-learning expertise. For everyday consumers, the results include software better able to sort through photos, understand spoken commands and translate text from foreign languages. For scientists and industry, deep-learning computers can search for potential drug candidates, map real neural networks in the brain or predict the functions of proteins.

R Schumacher & Associates LLC's curator insight, January 15, 2014 1:43 PM

Monikers such as "deep learning" may be new, but artificial intelligence has always been the Holy Grail of computer science. The applications are many, and the path is becoming less of an uphill climb.

Rescooped by luiy from The future of medicine and health

Jumper Cables for the Mind | #cyborgs

Would you give your brain a jolt if a Harvard scientist said it could make you smarter, more creative and less depressed?


This couldn’t possibly be a good idea. On Friday the 13th of September, in an old brick building on 13th Street in Boston’s Charlestown neighborhood, a pair of electrodes was attached to my forehead, one over my brain’s left prefrontal cortex, the other just above my right eye socket. I was about to undergo transcranial direct-current stimulation, or tDCS, an experimental technique for delivering extremely low-dose electrical stimulation to the brain. Using less than 1 percent of the electrical energy necessary for electroconvulsive therapy, and powered by an ordinary nine-volt battery, tDCS has been shown in hundreds of studies to enhance an astonishing, seemingly implausible variety of intellectual, emotional and movement-related brain functions. Its side effects appear limited to a mild tingling at the site of the electrode, sometimes a slight reddening of the skin, very rarely a headache, and certainly no seizures or memory loss. Still, I felt more than a bit apprehensive as I prepared to find out if a little bit of juice could amp up my cognitive reserves and make me, in a word, smarter.


Via Wildcat2030
luiy's insight:

The first modern experiments with tDCS came in fits and starts. In 1981, Niels Birbaumer, a neuroscientist at the University of Tübingen, Germany, reported that by applying extremely low doses of direct-current electricity — one-third of a milliamp, not enough to power a hearing aid — to the heads of healthy volunteers, he could speed their response on a simple test of reaction time. The Italian neurophysiologist Alberto Priori began his own experiments in 1992, applying just a tiny bit more electricity, about half a milliamp. He found that enough of the electricity crossed through volunteers’ skulls — electrons flowing from the cathodal electrode to the anodal electrode — to cause brain cells near the anodal electrode to become excited. Although he repeated the experiment multiple times to be sure of the results, it took Priori six years to get his findings published in a scientific journal, in 1998. As he told me, “People kept telling me it can’t be true, it’s too easy and simple.”
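For perspective on how small these doses are, here is a quick back-of-the-envelope calculation. It is only a sketch: the nine-volt figure comes from the article's description of the battery, and the calculation assumes the full battery voltage appears across the electrodes, which a real current-regulated stimulator would not guarantee.

```python
# Back-of-the-envelope arithmetic on the doses quoted above (a sketch; the
# battery voltage is the article's figure, the currents are as stated).
BATTERY_VOLTS = 9.0           # "an ordinary nine-volt battery"
birbaumer_amps = (1/3) * 1e-3 # "one-third of a milliamp" (1981)
priori_amps = 0.5e-3          # "about half a milliamp" (1992)

for name, amps in [("Birbaumer 1981", birbaumer_amps),
                   ("Priori 1992", priori_amps)]:
    # Upper bound on delivered power: P = V * I at the full battery voltage.
    watts = BATTERY_VOLTS * amps
    print(f"{name}: {amps * 1e3:.2f} mA -> at most {watts * 1e3:.1f} mW")

# Birbaumer 1981: 0.33 mA -> at most 3.0 mW
# Priori 1992: 0.50 mA -> at most 4.5 mW
```

A few milliwatts, in other words: orders of magnitude below the energy used in electroconvulsive therapy, which is consistent with the article's "less than 1 percent" comparison.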

Rescooped by luiy from Tracking the Future

Hyping Artificial Intelligence, Yet Again | #AI #science


Some advances are genuinely exciting, but whether they will really produce human-level A.I. is unclear.


Via Szabolcs Kósa
luiy's insight:

..... but, examined carefully, the articles seem more enthusiastic than substantive. As I wrote before, the story about Watson was off the mark factually. The deep-learning piece had problems, too. Sunday’s story is confused at best; there is nothing new in teaching computers to learn from their mistakes. Instead, the article seems to be about building computer chips that use “brainlike” algorithms, but the algorithms themselves aren’t new, either. As the author notes in passing, “the new computing approach” is “already in use by some large technology companies.” Mostly, the article seems to be about neuromorphic processors—computer processors that are organized to be somewhat brainlike—though, as the piece points out, they have been around since the nineteen-eighties. In fact, the core idea of Sunday’s article—nets based “on large groups of neuron-like elements … that learn from experience”—goes back over fifty years, to the well-known Perceptron, built by Frank Rosenblatt in 1957. (If you check the archives, the Times billed it as a revolution, with the headline “NEW NAVY DEVICE LEARNS BY DOING.” The New Yorker similarly gushed about the advancement.) The only new thing mentioned is a computer chip, as yet unproven but scheduled to be released this year, along with the claim that it can “potentially [make] the term ‘computer crash’ obsolete.” Steven Pinker wrote me an e-mail after reading the Times story, saying “We’re back in 1985!”—the last time there was huge hype in the mainstream media about neural networks.
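The perceptron rule Rosenblatt demonstrated in 1957 is simple enough to fit in a few lines, which underlines the point that "neuron-like elements that learn from experience" is an old idea. The sketch below is illustrative only: the data and names are invented, and this is the textbook learning rule, not Rosenblatt's Mark I hardware.

```python
# A minimal sketch of the classic 1957 perceptron learning rule -- a single
# "neuron-like element" that learns from experience. Illustrative only; the
# toy data here is invented.
import numpy as np

def train_perceptron(X, y, epochs=20):
    """Classic perceptron rule: nudge weights toward misclassified examples."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for x, target in zip(X, y):        # target is +1 or -1
            if target * (w @ x + b) <= 0:  # misclassified (or on the boundary)
                w += target * x            # strengthen/weaken connections
                b += target
    return w, b

# Toy linearly separable "experience": two classes of 2-D points.
X = np.array([[2.0, 1.0], [1.0, 3.0], [-1.0, -2.0], [-2.0, 1.0]])
y = np.array([1, 1, -1, -1])
w, b = train_perceptron(X, y)
print(np.sign(X @ w + b))  # [ 1.  1. -1. -1.]
```

The limitation, then as now, is that a single such element can only separate classes with a straight line; the modern twist is stacking many of them in deep layers, not the learning-from-experience idea itself.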

Rescooped by luiy from Tracking the Future

A #Multiverse of Exploration: The Future of #Science 2021 | #data


Invisibility cloaks. The search for extraterrestrial intelligence. A Facebook for genes. These were just a few of the startling topics IFTF explored at our recent Technology Horizons Program conference on the "Future of Science." More than a dozen scientists from UC Berkeley, Stanford, UC Santa Cruz, Scripps Research Institute, SETI, and private industry shared their edgiest research driving transformations in science. MythBusters' Adam Savage weighed in on the future of science education.

All of their presentations were signals supporting IFTF's new "Future of Science" forecast, laid out in a new map titled "A Multiverse of Exploration: The Future of Science 2021." The map focuses on six big stories of science that will play out over the next decade: Decrypting the Brain, Hacking Space, Massively Multiplayer Data, Sea the Future, Strange Matter, and Engineered Evolution. Those stories are emerging from a new ecology of science shifting toward openness, collaboration, reuse, and increased citizen engagement in scientific research.

by Institute For The Future


Via Szabolcs Kósa
luiy's insight:

The map focuses on six big stories of science that we think will play out over the next decade:

1. Decrypting the Brain
2. Hacking Space
3. Massively Multiplayer Data
4. Sea the Future
5. Strange Matter
6. Engineered Evolution

See more at: http://www.iftf.org/our-work/people-technology/technology-horizons/the-future-of-science/
