Amazing Science
Amazing science facts - 3D_printing • aging • AI • anthropology • art • astronomy • bigdata • bioinformatics • biology • biotech • chemistry • computers • cosmology • education • environment • evolution • future • genetics • genomics • geosciences • green_energy • history • language • map • material_science • math • med • medicine • microscopy • nanotech • neuroscience • paleontology • photography • photonics • physics • postings • robotics • science • technology • video
Scooped by Dr. Stefan Gruenwald

Google AI invents its own cryptographic algorithm and no one knows how it works


Google Brain has created two artificial intelligences that evolved their own cryptographic algorithm to protect their messages from a third AI, which was trying to evolve its own method to crack the AI-generated crypto. The study was a success: the first two AIs learnt how to communicate securely from scratch.

 

The Google Brain team (which is based out in Mountain View and is separate from Deep Mind in London) started with three fairly vanilla neural networks called Alice, Bob, and Eve. Each neural network was given a very specific goal: Alice had to send a secure message to Bob; Bob had to try and decrypt the message; and Eve had to try and eavesdrop on the message and try to decrypt it. Alice and Bob have one advantage over Eve: they start with a shared secret key (i.e. this is symmetric encryption).

 

Importantly, the AIs were not told how to encrypt stuff, or what crypto techniques to use: they were just given a loss function (a failure condition), and then they got on with it. In Eve's case, the loss function was very simple: the distance, measured in correct and incorrect bits, between Alice's original input plaintext and its guess. For Alice and Bob the loss function was a bit more complex: if Bob's guess (again measured in bits) was too far from the original input plaintext, it counted as a loss; for Alice, if Eve's guesses were better than random guessing, it counted as a loss. And thus a generative adversarial network (GAN) was created.
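To make that setup concrete, here is a minimal Python sketch of the loss structure just described. The function names and the exact penalty form are illustrative assumptions, not Google Brain's implementation, which trains all three networks end to end against these objectives.

```python
import numpy as np

def bit_distance(plaintext, guess):
    # Number of bits the guess gets wrong; 0 means a perfect reconstruction.
    return np.abs(plaintext - guess).sum()

def eve_loss(plaintext, eve_guess):
    # Eve's only objective: reconstruct Alice's plaintext from the ciphertext.
    return bit_distance(plaintext, eve_guess)

def alice_bob_loss(plaintext, bob_guess, eve_guess, n_bits):
    # Bob is penalized for failing to reconstruct the plaintext; Alice is
    # additionally penalized whenever Eve does better than random guessing,
    # i.e. gets fewer than n_bits / 2 bits wrong.
    reconstruction_error = bit_distance(plaintext, bob_guess)
    eve_advantage = max(0.0, n_bits / 2 - bit_distance(plaintext, eve_guess))
    return reconstruction_error + eve_advantage
```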

Scooped by Dr. Stefan Gruenwald

AI milestone: a new system can match humans in speech recognition


A team at Microsoft's Artificial Intelligence and Research group has published a study in which they demonstrate a technology that recognizes spoken words in a conversation as well as a real person does.

 

Last month, the same team achieved a word error rate (WER) of 6.3%. In their new paper this week, they report a WER of just 5.9%, which is equal to that of professional transcriptionists and is the lowest ever recorded against the industry standard Switchboard speech recognition task. “We’ve reached human parity,” said Xuedong Huang, the company’s chief speech scientist. “This is an historic achievement.”
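Word error rate is the yardstick behind these figures. The plain-Python sketch below shows how it is conventionally computed, counting the word substitutions, deletions and insertions needed to turn the system's transcript into the reference; the example sentence is made up.

```python
def word_error_rate(reference, hypothesis):
    # WER = (substitutions + deletions + insertions) / number of reference words,
    # computed with the standard Levenshtein edit distance over words.
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

# A 5.9% WER means roughly 6 word errors for every 100 words of reference transcript.
print(word_error_rate("the cat sat on the mat", "the cat sat on mat"))  # ~0.17
```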

 

“Even five years ago, I wouldn’t have thought we could have achieved this,” said Harry Shum, the group's executive vice president. “I just wouldn’t have thought it would be possible.”

 

Microsoft has been involved in speech recognition and speech synthesis research for many years. The company developed Speech API in 1994 and later introduced speech recognition technology in Office XP and Office 2003, as well as Internet Explorer. However, the word error rates for these applications were much higher back then.

 

In their new paper, the researchers write: "the key to our system's performance is the systematic use of convolutional and LSTM neural networks, combined with a novel spatial smoothing method and lattice-free MMI acoustic training."

 

The team used Microsoft’s own Computational Network Toolkit – an open source, deep learning framework. It can run deep-learning algorithms across multiple computers equipped with specialized GPUs, greatly improving speed and the quality of the research. The team believes their milestone will have broad implications for both consumer and business products, including entertainment devices like the Xbox, accessibility tools such as instant speech-to-text transcription, and personal digital assistants such as Cortana. “This will make Cortana more powerful, making a truly intelligent assistant possible,” Shum said.

 

“The next frontier is to move from recognition to understanding,” said Geoffrey Zweig, who manages the Speech & Dialog research group. Future improvements may also include speech recognition that works well in more real-life settings – places with lots of background noise, for example, such as at a party or while driving on the highway. The technology will also become better at assigning names to individual speakers when multiple people are talking, as well as working with a wide variety of voices, regardless of age, accent or ability.

 

The full study – Achieving Human Parity in Conversational Speech Recognition – is available at: https://arxiv.org/abs/1610.05256

Scooped by Dr. Stefan Gruenwald

Stephen Wolfram: AI & The Future Of Human Civilization


What makes us different from all these things? What makes us different is the particulars of our history, which gives us our notions of purpose and goals. That's a long way of saying when we have the box on the desk that thinks as well as any brain does, the thing it doesn't have, intrinsically, is the goals and purposes that we have. Those are defined by our particulars—our particular biology, our particular psychology, our particular cultural history.

 

The thing we have to think about as we think about the future of these things is the goals. That's what humans contribute, that's what our civilization contributes—execution of those goals; that's what we can increasingly automate. We've been automating it for thousands of years. We will succeed in having very good automation of those goals. I've spent some significant part of my life building technology to essentially go from a human concept of a goal to something that gets done in the world.

 

There are many questions that come from this. For example, we've got these great AIs and they're able to execute goals, how do we tell them what to do?...

 

STEPHEN WOLFRAM, distinguished scientist, inventor, author, and business leader, is Founder & CEO, Wolfram Research; Creator, Mathematica, Wolfram|Alpha & the Wolfram Language; Author, A New Kind of Science. Stephen Wolfram's Edge Bio Page

Rescooped by Dr. Stefan Gruenwald from Learning Technology News

Google’s Neural Network for Language Translation Narrows Gap Between Machines & Humans

Of course, machine translation is still far from perfect. Despite its advances, GNMT can still mistranslate, particularly when it encounters proper names or rare words, which prompt the system to, again, translate individual words instead of looking at them within the context of the whole. Clearly, there is still a gap between human and machine translations, but with GNMT, it is getting smaller.

 

By virtue of how many words, phrases, and grammar rules there are, you can only imagine how difficult it is for a person to deliver a translation from one language to another that accurately conveys the thought and nuance behind even the simplest sentence, let alone how difficult it would be for a machine. This, however, was a challenge that Google wanted to tackle head on, and today, the results of all those years spent working on a machine learning translation system were finally unveiled.

 

Languages are naturally phrase-based, so letting machines learn as many of those phrases and the subtleties behind the language as possible so that they could be applied to the translation was essential. Getting machines to do all that requires a lot of data, and adding complex language rules into the mix requires neural networks. While Google may not be the only company that has been looking into this method for more precise and accurate translations, they managed to be the first to get it done.

 

The Google Neural Machine Translation system (GNMT) uses state-of-the-art training techniques that significantly improve machine translation quality. It relies on long short-term memory recurrent neural networks (LSTM-RNNs), trained on graphics processing units (GPUs) and tensor processing units (TPUs) to crunch data more efficiently. It all adds up to a new system that can lower translation errors by more than 55 to 85 percent.
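As a rough illustration of the architecture family involved, here is a toy LSTM encoder-decoder in PyTorch. It is a sketch only: GNMT itself stacks far more layers and adds an attention mechanism, and every name below is illustrative rather than Google's code.

```python
import torch
import torch.nn as nn

class TinySeq2Seq(nn.Module):
    # Toy LSTM encoder-decoder: encode the source sentence, then let the decoder
    # predict target-language tokens conditioned on the encoder's final state.
    def __init__(self, src_vocab, tgt_vocab, dim=256):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, dim)
        self.encoder = nn.LSTM(dim, dim, batch_first=True)
        self.decoder = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, tgt_vocab)

    def forward(self, src_tokens, tgt_tokens):
        _, state = self.encoder(self.src_emb(src_tokens))            # summarize the source
        dec_out, _ = self.decoder(self.tgt_emb(tgt_tokens), state)   # condition on that summary
        return self.out(dec_out)                                     # per-position vocabulary logits
```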

 


Via Nik Peachey
Nik Peachey's curator insight, October 13, 2016 4:56 AM

A good thing for those in the language learning industry to keep up with.

Scooped by Dr. Stefan Gruenwald

When her best friend died, she used artificial intelligence to keep talking to him


When the engineers had at last finished their work, Eugenia Kuyda opened a console on her laptop and began to type. “Roman,” she wrote. “This is your digital monument.”

 

It had been three months since Roman Mazurenko, Kuyda’s closest friend, had died. Kuyda had spent that time gathering up his old text messages, setting aside the ones that felt too personal, and feeding the rest into a neural network built by developers at her artificial intelligence startup. She had struggled with whether she was doing the right thing by bringing him back this way. At times it had even given her nightmares. But ever since Mazurenko’s death, Kuyda had wanted one more chance to speak with him.

 

A message blinked back on the screen. “You have one of the most interesting puzzles in the world in your hands,” it said. “Solve it.” Kuyda promised herself that she would.

Scooped by Dr. Stefan Gruenwald

Artificial Intelligence: Fighting Cancer with Space Research Software


For the past 15 years, the big data techniques pioneered by NASA’s Jet Propulsion Laboratory in Pasadena, California, have been revolutionizing biomedical research. On Sept. 6, 2016, JPL and the National Cancer Institute (NCI), part of the National Institutes of Health, renewed a research partnership through 2021, extending the development of data science that originated in space exploration and is now supporting new cancer discoveries.

 

The NCI-supported Early Detection Research Network (EDRN) is a consortium of biomedical investigators who share anonymized data on cancer biomarkers, the chemical or genetic signatures related to specific cancers. Their goal is to pool all their research data into a single, searchable network and to translate their collective work into techniques for early diagnosis of cancer or cancer risk.

 

In the time they’ve worked together, JPL and EDRN’s efforts have led to the discovery of six new Food and Drug Administration-approved cancer biomarkers and nine biomarkers approved for use in Clinical Laboratory Improvement Amendments labs. The FDA has approved each of these biomarkers for use in cancer research and diagnosis. These agency-approved biomarkers have been used in more than 1 million patient diagnostic tests worldwide.

 

“After the founding of EDRN in 2000, the network needed expertise to take data from multiple studies on cancer biomarkers and create a single, searchable network of research findings for scientists,” said Sudhir Srivastava, chief of NCI’s Cancer Biomarkers Research Group and head of EDRN. JPL had decades of experience doing similar work for NASA, where spacecraft transmit hundreds of petabytes of data to be coded, stored and distributed to scientists across the globe.

 

Dan Crichton, the head of JPL’s Center for Data Science and Technology, a joint initiative with Caltech in Pasadena, California, helped establish a JPL-based informatics center dedicated to supporting EDRN’s big data efforts. In the renewed partnership, JPL is expanding its data science efforts to research and apply technologies for additional NCI-funded programs. Those programs include EDRN, the Consortium for Molecular and Cellular Characterization of Screen-Detected Lesions, and the Informatics Technology for Cancer Research initiative.

Rescooped by Dr. Stefan Gruenwald from Talks

How computers are learning to be creative

We're on the edge of a new frontier in art and creativity — and it's not human. Blaise Agüera y Arcas, principal scientist at Google, works with deep neural networks for machine perception and distributed learning. In this captivating demo, he shows how neural nets trained to recognize images can be run in reverse, to generate them. The results: spectacular, hallucinatory collages (and poems!) that defy categorization. "Perception and creativity are very intimately connected," Agüera y Arcas says. "Any creature, any being that is able to do perceptual acts is also able to create."

Via Complexity Digest
Rescooped by Dr. Stefan Gruenwald from Nostri Orbis

Will Intelligent Machines Eliminate Us?


Yoshua Bengio leads one of the world’s preëminent research groups developing a powerful AI technique known as deep learning. The startling capabilities that deep learning has given computers in recent years, from human-level voice recognition and image classification to basic conversational skills, have prompted warnings about the progress AI is making toward matching, or perhaps surpassing, human intelligence. Prominent figures such as Stephen Hawking and Elon Musk have even cautioned that artificial intelligence could pose an existential threat to humanity. Musk and others are investing millions of dollars in researching the potential dangers of AI, as well as possible solutions. But the direst statements sound overblown to many of the people who are actually developing the technology. Bengio, a professor of computer science at the University of Montreal, put things in perspective in an interview with MIT Technology Review’s senior editor for AI and robotics, Will Knight.


Via Fernando Gil
Rescooped by Dr. Stefan Gruenwald from Systems Theory

What if intelligent machines could learn from each other?

Artificial intelligence gives technology the ability to learn and adapt. But devices can learn a lot more if they can share what they learn with other smart devices.

 

Take a look around and you’ll see evidence of the widespread adoption of wearable sensors for health and fitness, such as the Fitbit, Garmin or other devices. What many people may not know is that we are also using sensors to monitor the structural integrity of bridges and buildings, as well as tracking the movements of insects and other animals.

 

With the rapid growth of the Internet of Things (IoT), tens of billions of sensor devices are projected to connect in the next decade. These connected sensor devices will automate processes across a broad range of economic sectors, from industrial plants to healthcare management, delivering productivity gains and hopefully quality-of-life improvements.

 

The core of these sensor devices that will be deployed across this broad range of applications is largely the same, featuring a microprocessor, memory and a wired or wireless communication interface to the internet, along with a battery or other energy source.

 

Each application and IoT device will bring its own unique context, such as its location, the conditions of the surrounding environment and the behavior of people in the area. Individual devices will observe and adapt to their unique contexts.

 

So what happens when we introduce artificial intelligence (AI) into the mix? With AI, these devices can evolve their behavior in response to changing contexts. Just like how living beings optimize their behavior to their surroundings, even smaller IoT devices around us can run AI machines that evolve their software over time.


Via Ben van Lier
Scooped by Dr. Stefan Gruenwald

Machine learning researchers team up with botanists on flower-recognition project


It's called the Smart Flower Recognition System, but it might never have happened were it not for a chance encounter last year between Microsoft researchers and botanists at the Institute of Botany, Chinese Academy of Sciences (IBCAS). Yong Rui, assistant managing director of Microsoft Research Asia (MSRA), was explaining image-recognition technology at a seminar—much to the delight of IBCAS botanists, whose own arduous efforts to collect data on regional flower distribution had been yielding poor results. The IBCAS botanists soon realized the potential of MSRA's image-recognition technology. At the same time, Yong Rui knew he had found the perfect vehicle to improve image recognition while addressing a real-world problem that benefits society. It also helped that IBCAS had accumulated a massive public store of 2.6 million images. Since anyone in the world could upload pictures to this flower photo dataset—and no human could possibly supervise the uploads—the MSRA team had to create algorithms to filter out the "bad" pictures. That was the first of many difficult problems facing researcher Jianlong Fu and his team in building a tool capable of discerning tiny anomalies among the many species of flowers.
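The article does not describe MSRA's model, but the usual starting point for this kind of fine-grained recognition is transfer learning. The PyTorch/torchvision sketch below is a generic, purely illustrative setup under that assumption, not the Smart Flower Recognition System itself.

```python
import torch.nn as nn
from torchvision import models

def build_flower_classifier(num_species):
    # Reuse an ImageNet-pretrained backbone and retrain only the final layer to
    # discriminate flower species. (Newer torchvision versions use the weights=
    # argument instead of pretrained=True.)
    model = models.resnet18(pretrained=True)
    for p in model.parameters():
        p.requires_grad = False          # freeze the generic visual features
    model.fc = nn.Linear(model.fc.in_features, num_species)  # new species head
    return model
```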

Scooped by Dr. Stefan Gruenwald

Microsoft, Google and Facebook compete on AI development


No matter where the future goes, Microsoft will have a place in it. The company’s "conversation as a platform" offering, which it unveiled in March, represents a bet that chat-based interfaces will overtake apps as our primary way of using the internet: for finding information, for shopping, and for accessing a range of services. And apps will become smarter thanks to "cognitive APIs," made available by Microsoft, that let them understand faces, emotions, and other information contained in photos and videos.

 

Microsoft argues that it has the best "brain," built on nearly two decades of advancements in machine learning and natural language processing, for delivering a future powered by artificial intelligence. It has a head start in building bots that resonate with users emotionally, thanks to an early experiment in China.

 

And among the giants, Microsoft was first to release a true platform for text-based chat interfaces — a point of pride at a company that was mostly sidelined during the rise of smartphones.

 

After losing the mobile battle, can Microsoft win the AI battle? In January 2016, The Verge described the tech industry's search for the killer bot. In the months that followed, companies big and small have accelerated their development efforts. Facebook opened up a bot development platform of its own, running on its popular Messenger chat app. Google announced a new intelligent assistant running inside Allo, a forthcoming messenger app, and Home, its Amazon Echo competitor. Meanwhile the Echo, whose voice-based inputs have captivated developers, is reportedly in 3 million homes, and has added 1,200 "skills" through its API.

 

Microsoft is proud of its work on AI, and eager to convey the sense that this time around, it's poised to win.


Scooped by Dr. Stefan Gruenwald

One Million Faces Challenge Even the Best Facial Recognition Algorithms


A million-face benchmark test shows that even Google's facial recognition algorithm suffers in accuracy.

 

Facial recognition algorithms that had previously performed with more than 95 percent accuracy on a popular benchmark test involving 13,000 faces saw significant drops in accuracy when faced with the new MegaFace Challenge  involving one million faces. The best performer on one test, Google’s FaceNet algorithm, dropped from near-perfect accuracy on five-figure datasets to 75 percent on the million-face test. Other top algorithms dropped from above 90-percent accuracy on the small datasets to below 60 percent on the MegaFace Challenge. Some algorithms made the proper identification as seldom as 35 percent of the time.

 

“Megaface's key idea is that algorithms should be evaluated at large scale,” says Ira Kemelmacher-Shlizerman, an assistant professor of computer science at the University of Washington in Seattle and the project’s principal investigator. “And we make a number of discoveries that are only possible when evaluating at scale.”

 

The huge drops in accuracy when scanning a million faces matters because facial recognition algorithms inevitably face such challenges in the real world. People increasingly trust facial recognition algorithms to correctly identify them in security verification scenarios for automatically unlocking smartphones or entering workplaces. Law enforcement officials may also rely on facial recognition algorithms to find the correct match to a single photo of a suspect among hundreds of thousands of faces captured on surveillance cameras.

 

The most popular benchmark test until now has been the Labeled Faces in the Wild (LFW) test created in 2007. LFW includes 13,000 images of just 5,000 people. Many facial recognition algorithms have been fine-tuned to the point that they scored near-perfect accuracy when picking through that smaller set of images, which may have created a false sense of confidence about the state of facial recognition.

 

“The big disadvantage is that [the field] is saturated, i.e. there are many, many algorithms that perform above 95 percent on LFW,” Kemelmacher-Shlizerman explains. “This gives the impression that face recognition is solved and working perfectly.”

 

With that in mind, University of Washington researchers raised the bar by creating the MegaFace Challenge using one million Flickr images that are publicly available under a Creative Commons license. The MegaFace dataset includes one million images featuring 690,000 unique faces.

 

The MegaFace Challenge forces facial recognition algorithms to do verification and identification, two separate but related tasks. Verification involves trying to correctly determine whether two faces presented to the facial recognition algorithm belong to the same person. Identification involves trying to find a matching photo of the same person among a million “distractor” faces. Initial results appear in a paper that was presented at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2016) on 30 June.
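In code, the two tasks reduce to comparing face embeddings. The numpy sketch below assumes an upstream network (FaceNet-style) has already turned each face into a vector; the similarity threshold is an arbitrary placeholder, not a value from the study.

```python
import numpy as np

def verify(emb_a, emb_b, threshold=0.7):
    # Verification: do two face embeddings belong to the same person?
    cos = emb_a @ emb_b / (np.linalg.norm(emb_a) * np.linalg.norm(emb_b))
    return cos >= threshold

def identify(probe, gallery):
    # Identification: index of the closest match in a gallery that also contains
    # the "distractor" faces; this is the task that gets harder as the gallery grows.
    gallery = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    probe = probe / np.linalg.norm(probe)
    return int(np.argmax(gallery @ probe))
```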

Rescooped by Dr. Stefan Gruenwald from Systems Theory

Google Has A List Of The A.I. Behaviors That Would Scare It Most


Google is one of the companies at the forefront of robotics and artificial intelligence research, and being in that position means they have the most to worry about. The idea of a robot takeover may still be an abstract, science fictional concept to us, but Google has actually compiled a list of behaviors that would cause them great concern, both for efficiency and safety in the future.


Via Ben van Lier
Scooped by Dr. Stefan Gruenwald

Making computers explain themselves

A new technique for training deep-learning neural networks on natural-language-processing tasks provides rationales for the systems’ decisions. The technique was developed at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).

 

In recent years, the best-performing systems in artificial-intelligence research have come courtesy of neural networks, which look for patterns in training data that yield useful predictions or classifications. A neural net might, for instance, be trained to recognize certain objects in digital images or to infer the topics of texts. But neural nets are black boxes. After training, a network may be very good at classifying data, but even its creators will have no idea why. With visual data, it’s sometimes possible to automate experiments that determine which visual features a neural net is responding to. But text-processing systems tend to be more opaque.

 

At the Association for Computational Linguistics’ Conference on Empirical Methods in Natural Language Processing, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) will present a new way to train neural networks so that they provide not only predictions and classifications but rationales for their decisions.
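One way to picture such a system: a selector network decides which input words to keep as the rationale, and a second network makes its prediction from only those words. The PyTorch sketch below is a loose, illustrative reading of that idea; the MIT model samples hard rationales and adds sparsity and coherence terms, which are omitted here, and all names are invented.

```python
import torch
import torch.nn as nn

class RationaleModel(nn.Module):
    # A selector scores each word; the encoder predicts from the (soft-)masked text,
    # so the kept words double as a human-readable justification.
    def __init__(self, vocab_size, dim=128, n_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.selector = nn.LSTM(dim, dim, batch_first=True)
        self.select_head = nn.Linear(dim, 1)
        self.encoder = nn.LSTM(dim, dim, batch_first=True)
        self.classify = nn.Linear(dim, n_classes)

    def forward(self, tokens):
        x = self.emb(tokens)
        sel_states, _ = self.selector(x)
        keep = torch.sigmoid(self.select_head(sel_states))   # per-word "keep" probability
        _, (h, _) = self.encoder(x * keep)                    # predict from the selected words only
        return self.classify(h[-1]), keep.squeeze(-1)         # (prediction logits, rationale scores)
```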

 

“In real-world applications, sometimes people really want to know why the model makes the predictions it does,” says Tao Lei, an MIT graduate student in electrical engineering and computer science and first author on the new paper. “One major reason that doctors don’t trust machine-learning methods is that there’s no evidence.”

 

“It’s not only the medical domain,” adds Regina Barzilay, the Delta Electronics Professor of Electrical Engineering and Computer Science and Lei’s thesis advisor. “It’s in any domain where the cost of making the wrong prediction is very high. You need to justify why you did it.”

 

“There’s a broader aspect to this work, as well,” says Tommi Jaakkola, an MIT professor of electrical engineering and computer science and the third coauthor on the paper. “You may not want to just verify that the model is making the prediction in the right way; you might also want to exert some influence in terms of the types of predictions that it should make. How does a layperson communicate with a complex model that’s trained with algorithms that they know nothing about? They might be able to tell you about the rationale for a particular prediction. In that sense it opens up a different way of communicating with the model.”

Rescooped by Dr. Stefan Gruenwald from Nostri Orbis

Google's 'DeepMind' AI platform can now learn without human input


The DeepMind artificial intelligence (AI) being developed by Google's parent company, Alphabet, can now intelligently build on what's already inside its memory, the system's programmers have announced. Their new hybrid system – called a differentiable neural computer (DNC) – pairs a neural network with the vast data storage of conventional computers, and the AI is smart enough to navigate and learn from this external data bank.

 

“These models can learn from examples like neural networks, but they can also store complex data like computers,” wrote DeepMind researchers Alexander Graves and Greg Wayne. Much like the brain, the neural network uses an interconnected series of nodes to stimulate specific centers needed to complete a task. In this case, the AI is optimizing the nodes to find the quickest solution to deliver the desired outcome. Over time, it’ll use the acquired data to get more efficient at finding the correct answer.
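The coupling between network and memory is typically done with content-based addressing: the controller emits a lookup key and reads back a similarity-weighted blend of memory rows. The numpy sketch below shows only that read step, under assumed shapes; a real DNC adds write heads, usage tracking and temporal links.

```python
import numpy as np

def content_read(memory, key, sharpness=5.0):
    # memory: (rows, width) external store; key: (width,) lookup vector from the controller.
    mem = memory / (np.linalg.norm(memory, axis=1, keepdims=True) + 1e-8)
    k = key / (np.linalg.norm(key) + 1e-8)
    scores = sharpness * (mem @ k)              # cosine similarity, sharpened
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                    # softmax read weighting over rows
    return weights @ memory                     # blend of the most relevant memories
```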

 

The two examples given by the DeepMind team further clear up the process:

  1. After being told about relationships in a family tree, the DNC was able to figure out additional connections on its own all while optimizing its memory to find the information more quickly in future searches.
  2. The system was given the basics of the London Underground public transportation system and immediately went to work finding additional routes and the complicated relationship between routes on its own.

 

Instead of having to learn every possible outcome to find a solution, DeepMind can derive an answer from prior experience, unearthing the answer from its internal memory rather than from outside conditioning and programming. This process is exactly how DeepMind was able to beat a human champion at ‘Go’ — a game with millions of potential moves and an infinite number of combinations.

 

Depending on the point of view, this could be a serious turn of events for ever-smarter AI that might one day be capable of thinking and learning as humans do. Or, it might be time to start making plans for survival post-Skynet.


Via Fernando Gil
Scooped by Dr. Stefan Gruenwald

BMW's Futuristic Artificial Intelligence Motorcycle Balances on Its Own


The motorcycle of the future is so smart that it could eliminate the need for protective gear, according to automaker BMW.

To mark its 100th birthday, BMW has unveiled a number of concept vehicles that imagine the future of transportation. Possibly its most daring revelation, the so-called Motorrad Vision Next 100 concept motorcycle is so advanced that BMW claims riders wouldn't need a helmet.

 

The Motorrad Vision Next 100 would have a self-balancing system that keeps the bike upright both in motion and when still. BMW touted the motorbike's futuristic features, saying it would allow for riders of all skill levels to "enjoy the sensation of absolute freedom." According to the automaker, the Motorrad wouldn't require protective gear such as helmets and padded suits.

 

Another traditional feature was also missing from the concept: a control panel. Instead, helmetless riders would wear a visor that acts as a smart display. "Information is exchanged between rider and bike largely via the smart visor," BMW said in a statement. "This spans the rider's entire field of view and provides not only wind protection but also relevant information, which it projects straight into the line of sight as and when it is needed." Such information would not be needed all the time because drivers will be able to hand over active control of the vehicle at points; the Motorrad and other Vision Next 100 vehicles would be equipped with self-driving technology, according to BMW.

Rescooped by Dr. Stefan Gruenwald from Systems Theory

The future of brain and machine is intertwined, and it's already here


We might think our technological innovations are driving us towards a cyborg future, but is it the brain doing all the work?

 

Imagine a condition that leaves you fully conscious, but unable to move or communicate, as some victims of severe strokes or other neurological damage experience. This is locked-in syndrome, when the outward connections from the brain to the rest of the world are severed. Technology is beginning to promise ways of remaking these connections, but is it our ingenuity or the brain’s that is making it happen?

 

Ever since an 18th-century biologist called Luigi Galvani made a dead frog twitch we have known that there is a connection between electricity and the operation of the nervous system. We now know that the signals in neurons in the brain are propagated as pulses of electrical potential, whose effects can be detected by electrodes in close proximity. So in principle, we should be able to build an outward neural interface system – that is to say, a device that turns thought into action.

 

In fact, we already have the first outward neural interface system to be tested in humans. It is called BrainGate and consists of an array of micro-electrodes, implanted into the part of the brain concerned with controlling arm movements. Signals from the micro-electrodes are decoded and used to control the movement of a cursor on a screen, or the motion of a robotic arm.
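The decoding step in such systems can be pictured as a regression from neural activity to intended movement. The numpy sketch below fits a simple linear decoder from binned firing rates to 2-D cursor velocity; it is illustrative only (published BrainGate decoders are more sophisticated, e.g. Kalman filters), and all names are assumptions.

```python
import numpy as np

def fit_velocity_decoder(firing_rates, cursor_velocity):
    # firing_rates: (samples, channels) binned spike counts; cursor_velocity: (samples, 2).
    X = np.hstack([firing_rates, np.ones((len(firing_rates), 1))])   # add a bias column
    W, *_ = np.linalg.lstsq(X, cursor_velocity, rcond=None)          # least-squares fit
    return W

def decode_velocity(W, rates):
    # Map one bin of neural activity to an intended (vx, vy) for the cursor.
    return np.append(rates, 1.0) @ W
```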

 

A crucial feature of these systems is the need for some kind of feedback. A patient must be able to see the effect of their willed patterns of thought on the movement of the cursor. What’s remarkable is the ability of the brain to adapt to these artificial systems, learning to control them better.


Via Ben van Lier
Scooped by Dr. Stefan Gruenwald

IBM announces AI-powered decision-making for many industries


IBM recently announced the Watson-based “Project DataWorks,” the first cloud-based data and analytics platform to integrate all types of data and enable AI-powered decision-making.

 

Project DataWorks is designed to make it simple for business leaders and data professionals to collect, organize, govern, and secure data, and become a “cognitive business.”

 

Achieving data insights is increasingly complex, and most of this work is done by highly skilled data professionals who work in silos with disconnected tools and data services that may be difficult to manage, integrate, and govern, says IBM. Businesses must also continually iterate their data models and products — often manually — to benefit from the most relevant, up-to-date insights.

 

IBM says Project DataWorks can help businesses break down these barriers by connecting all data and insights for their users into an integrated, self-service platform.

 

Available on Bluemix, IBM’s Cloud platform, Project DataWorks is designed to help organizations:

  • Automate the deployment of data assets and products using cognitive-based machine learning and Apache Spark;
  • Ingest data faster than any other data platform, from 50 to hundreds of Gbps, and all endpoints: enterprise databases, Internet of Things, weather, and social media;
  • Leverage an open ecosystem of more than 20 partners and technologies, such as Confluent, Continuum Analytics, Galvanize, Alation, NumFOCUS, RStudio, Skymind, and more.

 

More partnerships will be announced soon.

Scooped by Dr. Stefan Gruenwald

AI software diagnoses cancer risk 30 times faster than doctors and with 99% accuracy


The AI program reliably interprets mammograms and translates patient data into diagnostic information 30 times faster than a human doctor, with 99 per cent accuracy.

 

It was developed by researchers at Houston Methodist Research Institute in Texas. “This software intelligently reviews millions of records in a short amount of time, enabling us to determine breast cancer risk more efficiently using a patient's mammogram,” said Stephen T Wong, chair of the Department of Systems Medicine and Bioengineering at the institute. “This has the potential to decrease unnecessary biopsies," he added.

 

The team used the AI software to evaluate mammograms and pathology reports of 500 breast cancer patients.

 

The software scanned patient charts, collected diagnostic features and correlated mammogram findings with breast cancer subtype. Clinicians used the results, such as the expression of tumor proteins, to accurately predict each patient's probability of breast cancer diagnosis. In the US, 12.1 million mammograms are performed annually, but half yield false results, according to the American Cancer Society, resulting in one in two healthy women being told they have cancer.

 

Patients who are told they are at particular risk of breast cancer are often recommended for biopsies – a necessary but invasive procedure that removes tissue or fluid from a suspicious area so cells can be analyzed. However, 20 per cent are performed unnecessarily. The researchers hope their software will help physicians better choose which patients need the procedure and decrease unnecessary breast biopsies. Manual review of 50 charts took two clinicians 50 to 70 hours, whereas the AI software reviewed 500 charts in a few hours, saving the human doctors 500 hours of their time.

Scooped by Dr. Stefan Gruenwald

Exploiting machine learning for cybersecurity


Thanks to technologies that generate, store and analyze huge sets of data, companies are able to perform tasks that previously were impossible. But the added benefit does come with its own setbacks, specifically from a security standpoint.

 

With reams of data being generated and transferred over networks, cybersecurity experts will have a hard time monitoring everything that gets exchanged — potential threats can easily go unnoticed. Hiring more security experts would offer a temporary reprieve, but the cybersecurity industry is already dealing with a widening talent gap, and organizations and firms are hard-pressed to fill vacant security posts.

 

The solution might lie in machine learning, the phenomenon that is transforming an increasing number of industries and has become the buzzword in Silicon Valley. But while more and more jobs are being forfeited to robots and artificial intelligence, is it conceivable to hand machines a responsibility as complicated as cybersecurity? The topic is being hotly debated by security professionals, with strong arguments on both ends of the spectrum. In the meantime, tech firms and security vendors are looking for ways to add this hot technology to their cybersecurity arsenal.
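A common entry point is unsupervised anomaly detection over traffic features, flagging the rare events a human team should triage. The scikit-learn sketch below uses random numbers in place of real network data; the feature set and contamination rate are assumptions made for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-connection features, e.g. [bytes sent, bytes received,
# duration, distinct ports touched]; real pipelines engineer many more.
normal_traffic = np.random.rand(10_000, 4)

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)                 # learn what "normal" traffic looks like

new_events = np.random.rand(100, 4)
flags = detector.predict(new_events)         # -1 marks events worth an analyst's attention
suspicious = new_events[flags == -1]
```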

Scooped by Dr. Stefan Gruenwald

Software supplies snapshot of gene expression across whole brain


A new tool provides speedy analysis of gene expression patterns in individual neurons from postmortem brain tissue. Researchers have used the method to compare the genetic signatures of more than 3,000 neurons from distant brain regions.

 

Scientists typically use a technique called RNA-Seq to measure gene expression in neurons isolated from postmortem brains. However, analyzing the data from this approach is daunting because the analysis must be done one cell at a time.

 

The new method combines RNA-Seq with software that allows researchers to analyze the expression patterns of thousands of neurons at once. The investigators described the automated technique, called single-nucleus RNA sequencing (SNS), in June in Science.

 

The researchers tested the method on postmortem brain tissue from a 51-year-old woman with no known neurological illnesses. They used a laser to dissect 3,227 neurons from six brain areas, including those involved in language, cognition, vision and social behavior. They then performed RNA-Seq on the cells, getting a readout for RNAs produced in each cell.

 

The software identifies genes by matching a short segment of each RNA to a gene on a reference map of the human genome. The researchers then quantified each gene’s expression level.

The process correctly identified the subtypes of 2,253 neurons that ramp up brain activity and 972 neurons that dampen it.

 

Within these two broad classes, the neurons fell into 16 groups based on their location and their origin in the developing brain. For example, neurons from the visual cortex show different patterns of gene expression than do neurons from the temporal cortex, which processes hearing and language.
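Downstream of the sequencing itself, grouping thousands of expression profiles is a fairly standard exercise: normalize the counts, reduce the dimensionality, and cluster. The scikit-learn sketch below illustrates that generic recipe; it is not the SNS pipeline, and the parameter choices are placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def cluster_neurons(counts, n_groups=16):
    # counts: (cells, genes) matrix of per-nucleus read counts.
    norm = np.log1p(counts / counts.sum(axis=1, keepdims=True) * 1e4)   # depth-normalize, log-transform
    reduced = PCA(n_components=min(30, min(norm.shape))).fit_transform(norm)
    return KMeans(n_clusters=n_groups, random_state=0).fit_predict(reduced)
```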

 

The findings expand the list of features that distinguish neurons from other cells in the brain. Researchers could use the method to identify patterns of gene expression in the brains of people with autism.

Scooped by Dr. Stefan Gruenwald

Wall-climbing Spiderbots Weave Weird Life-Size Hammock Web

Working together, wall-climbing robots weave a suspended web strong enough to hold a person.

 

Maria Yablonina's mobile robotic fabrication system for filament structures features two semi-autonomous bots working together to distribute thread. They climb walls using suction and sensing technology, and can construct a hammock-like web strong enough to support a person. Yablonina developed the project as part of her grad program at the University of Stuttgart's Institute for Computational Design.

 

Each bot contains pathfinding software to navigate walls and electromagnets that allow them to pass the bobbin to each other. They're also equipped with a wrapping mechanism so they can wind the cord around an anchor and have it stay in place.

"These robots are enabled with movement systems and a collection of sensors that allow them to travel and interact accurately along typical ground, walls, roofs, and ceilings," Yablonina explained in her project description. She envisions them being part of a "suitcase housing" scenario, where an operator lets the bots loose to construct a large structure onsite.

 

Yablonina was on the university team behind the Elytra Filament Pavilion, a robotically fabricated modular outdoor structure on display at the Victoria and Albert Museum in London. Since receiving her master's degree, she joined the software company Autodesk as an artist in residence. One recent project is a robot cut from a single sheet of material, folded into shape, and enabled by a single actuator.

Rescooped by Dr. Stefan Gruenwald from Conformable Contacts

How the gurus behind Google Earth created Niantic's 'Pokémon Go'


If you spotted dozens of people silently congregating in parks and train stations over the weekend, they were probably just busy trying to catch a Pidgeotto.

 

Niantic's Pokémon Go, the augmented reality mobile game, has become a global phenomenon since it launched Wednesday in Australia before rolling out in the U.S. The game requires players to explore the real world to find Pokémon, collect items at Pokéstops and conquer gyms, and a lot of work has gone into the game's mapping.

 

John Hanke, the CEO and founder of Niantic, is a Google veteran. He was one of the founders of Keyhole, the company Google bought to start Google Earth, and had a hand in Google Maps before forming Niantic. The company spun off from Google's parent company Alphabet in 2015.

 

For Hanke, accurate mapping was integral to Pokémon Go. "A lot of us worked on Google Maps and Google Earth for many, many years, so we want the mapping to be good," he told Mashable. All those Pokémon Go obsessives out there owe some serious thanks to a whole other set of gamers.

 

Ingress, the augmented-reality multiplayer game, was launched in beta by Niantic in 2012. Its users are responsible for helping create the data pool that determines where Pokéstops and gyms appear in Pokémon Go.

 

In the early days of Ingress, Niantic formed a beginning pool of portal locations for the game based on historical markers, as well as a data set of public artwork mined from geo-tagged photos on Google. "We basically defined the kinds of places that we wanted to be part of the game," Hanke said. "Things that were public artwork, that were historical sites, that were buildings with some unique architectural history or characteristic, or unique local businesses."


Via YEC Geo
YEC Geo's curator insight, July 19, 2016 2:44 PM
Why  am I not surprised?
Scooped by Dr. Stefan Gruenwald

New algorithm clusters the millions of peptide mass spectra


A new algorithm clusters the millions of peptide mass spectra in the PRIDE Archive public database, making it easier to detect millions of consistently unidentified spectra across different datasets. Published in Nature Methods, the new tool is an important step towards fully exploiting data produced in discovery proteomics experiments.

 

On average, almost three quarters of spectra measured in discovery proteomics experiments remain unidentified, regardless of the quality of the experiment, as they cannot be interpreted by standard sequence-based search engines. Alternative approaches to improve the rate of identification exist, but are fraught with disadvantages including ambiguous results. In today's study, researchers working on the PRIDE Archive public repository of proteomics data present a large-scale 'spectrum clustering' solution that takes advantage of the growing number of mass spectrometry (MS) datasets to systematically study millions of unidentified spectra.
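At its core, spectrum clustering means turning each spectrum into a comparable vector and grouping the ones that look alike. The Python sketch below is a toy version of that idea (the PRIDE spectra-cluster algorithm is considerably more careful about peak matching and cluster quality); bin widths and thresholds are placeholders.

```python
import numpy as np

def bin_spectrum(peaks, bin_width=1.0, max_mz=2000.0):
    # Turn a list of (m/z, intensity) peaks into a fixed-length, unit-norm vector.
    vec = np.zeros(int(max_mz / bin_width))
    for mz, intensity in peaks:
        if mz < max_mz:
            vec[int(mz / bin_width)] += intensity
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def greedy_cluster(spectra, threshold=0.8):
    # Each spectrum joins the best-matching existing cluster if the cosine
    # similarity clears the threshold; otherwise it starts a new cluster.
    reps, labels = [], []
    for vec in spectra:
        sims = [float(vec @ r) for r in reps]
        if sims and max(sims) >= threshold:
            labels.append(int(np.argmax(sims)))
        else:
            reps.append(vec)
            labels.append(len(reps) - 1)
    return labels
```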

 

"MS experiments produce huge amounts of data, but identifying meaningful sequences that could be assigned to specific biological functions can be troublesome," says Johannes Griss, formerly at EMBL-EBI in the UK and now at the Medical University of Vienna, Austria. "Discovery proteomics is a mature technology, and it's crucial that we are able to exploit the data efficiently."

 

One of the challenges with these technologies is that a large proportion of the data generated can't be interpreted, as they correspond to peptides that have not yet been observed and are not available in databases. Such spectra could correspond to peptide variants derived from individual genetic variation, or to peptides containing post-translational modifications, which are essential for the biological functions of proteins.

 

Scooped by Dr. Stefan Gruenwald

A.I. Downs Expert Human Fighter Pilot In Dogfights


In the military world, fighter pilots have long been described as the best of the best. As Tom Wolfe famously wrote, only those with the "right stuff" can handle the job. Now, it seems, the right stuff may no longer be the sole purview of human pilots.

 

A pilot A.I. developed by a doctoral graduate from the University of Cincinnati has shown that it can not only beat other A.I.s, but also a professional fighter pilot with decades of experience. In a series of flight combat simulations, the A.I. successfully evaded retired U.S. Air Force Colonel Gene "Geno" Lee, and shot him down every time. Lee called it "the most aggressive, responsive, dynamic and credible A.I. I've seen to date."

 

And "Geno" is no slouch. He's a former Air Force Battle Manager and adversary tactics instructor. He's controlled or flown in thousands of air-to-air intercepts as mission commander or pilot. In short, the guy knows what he's doing. Plus he's been fighting A.I. opponents in flight simulators for decades. But he says this one is different. "I was surprised at how aware and reactive it was. It seemed to be aware of my intentions and reacting instantly to my changes in flight and my missile deployment. It knew how to defeat the shot I was taking. It moved instantly between defensive and offensive actions as needed."

 

The A.I., dubbed ALPHA, was developed by Psibernetix, a company founded by University of Cincinnati doctoral graduate Nick Ernest, in collaboration with the Air Force Research Laboratory. According to the developers, ALPHA was specifically designed for research purposes in simulated air-combat missions.

 

The secret to ALPHA's superhuman flying skills is a decision-making system called a genetic fuzzy tree, a subtype of fuzzy logic algorithms. The system approaches complex problems much like a human would, says Ernest, breaking the larger task into smaller subtasks, which include high-level tactics, firing, evasion, and defensiveness. By considering only the most relevant variables, it can make complex decisions with extreme speed. As a result, the A.I. can calculate the best maneuvers in a complex, dynamic environment, over 250 times faster than its human opponent can blink.
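Fuzzy logic replaces hard yes/no thresholds with degrees of membership that rules can combine. The toy Python sketch below shows one such rule in the spirit of the description above; the membership shapes and numbers are invented for illustration, whereas ALPHA's genetic fuzzy tree tunes its sets and rules with a genetic algorithm across many subtasks.

```python
def falling_ramp(x, full, zero):
    # Membership degree: 1.0 at or below `full`, 0.0 at or above `zero`, linear in between.
    if x <= full:
        return 1.0
    if x >= zero:
        return 0.0
    return (zero - x) / (zero - full)

def rising_ramp(x, zero, full):
    if x <= zero:
        return 0.0
    if x >= full:
        return 1.0
    return (x - zero) / (full - zero)

def evasive_urgency(threat_range_km, closing_speed_kts):
    # Toy rule: IF the threat is close AND it is closing fast THEN evasion is urgent.
    close = falling_ramp(threat_range_km, 2.0, 15.0)
    fast = rising_ramp(closing_speed_kts, 200.0, 900.0)
    return min(close, fast)   # AND is the minimum in classic fuzzy logic

print(evasive_urgency(threat_range_km=5.0, closing_speed_kts=800.0))  # ~0.77, i.e. mostly urgent
```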

 

After hour-long combat missions against ALPHA, Lee says, "I go home feeling washed out. I'm tired, drained and mentally exhausted. AI has superhuman reflexes and there is no way to win. This may be artificial intelligence, but it represents a real challenge." 

 

The results of the dogfight simulations are published in the Journal of Defense Management.

Ben Grebe's curator insight, July 20, 2016 5:53 AM

Another example of where technology will lead us