Amazing Science

Scooped by Dr. Stefan Gruenwald

Henn-na Hotel: World’s First Fully Robot-Staffed Hotel Opens in Japan

In Japan there’s now a hotel you can stay in without ever having to deal with another human being.

 

Henn-na Hotel is located within the Huis Ten Bosch amusement park, which recreates the Netherlands with full-size replicas of old Dutch buildings. It’s not immediately clear what the theme park and its new, gimmicky hotel have in common, but the hotel will certainly help bring tourists to the park, which declared bankruptcy in 2003. After much buzz in the media, which wasn’t hard to generate, the hotel finally opened last week, and we got a peek at what this robotically ‘manned’ hotel actually looks like.

Rescooped by Dr. Stefan Gruenwald from Systems Theory

Is 2017 the Chinese year of AI?

The country’s Internet giants are focusing on AI research, and domestic venture capital funding is pouring into the field.

 

The nation’s search giant, Baidu, is leading the charge. Already making serious headway in AI research, it has now announced that former Microsoft executive Qi Lu will take over as its chief operating officer. Lu ran the applications and services division at Microsoft, but, according to The Verge, a large part of his remit was developing strategies for artificial intelligence and chatbots. In a statement, Baidu cites hiring Lu as part of its plan to become a “global leader in AI.”

 

Meanwhile, Baidu’s chief scientist, Andrew Ng, has announced that the company is opening a new augmented reality lab in Beijing. Baidu has already made progress in AR, using computer vision and deep learning to add an extra layer to the real world for millions of people. But the new plans aim to use a 55-strong lab to increase revenue by building AR marketing tools—though it’s thought that the company will also consider health-care and education applications in the future.

 

But Baidu isn’t alone in pushing forward. Late last year, Chinese Internet giant Tencent—the company behind the hugely successful mobile app WeChat, which has 846 million active users—said that it was determined to build a formidable AI lab. It plans to start publishing its work at conferences this year. Smaller players could also get a shot in the arm. According to KPMG, Chinese venture capital investment looks set to pour into AI research in the coming year. Speaking to Fintech Innovation, KPMG partner Egidio Zarrella explained that “the amount being invested in artificial intelligence in Asia is growing by the day.”

 

Similar growth is already underway in China's research community. A study by Japan's National Institute of Science and Technology Policy found China to be a close second to the U.S. in terms of the number of AI studies presented at top academic conferences in 2015. And a U.S. government report says that the number of papers published by Chinese researchers mentioning "deep learning" already exceeds the number published by U.S. researchers.

 

All of which has seen the South China Morning Post label AI and AR as “must haves” in any self-respecting Chinese investment portfolio. No kidding. This year, it seems, many U.S. tech companies might find themselves looking East to identify the competition.


Via Ben van Lier
Scooped by Dr. Stefan Gruenwald

FDA clearance for AI-assisted cardiac imaging system


A San Francisco startup has landed Food and Drug Administration approval for artificial intelligence-assisted cardiac imaging in the cloud.

 

Arterys Inc.’s Cardio DL program applies deep learning, a form of artificial intelligence, to automate tasks that radiologists have been performing manually. It represents the first FDA-cleared, zero-footprint use of cloud computing and deep learning through AI in a clinical setting, the company said.

 

Arterys developed the technology by mining a data set of more than 3,000 cardiac cases. Cardio DL produces editable, automated contours, according to a company statement. It can provide accurate and consistent cardiac measurements in seconds, as opposed to one hour for manual processing.

 

Obtaining an image of a heart through MRI is a complex, time-consuming process that Arterys is working to improve, according to Arterys CEO Fabien Beckers.

 

Radiologists have traditionally used software to segment and draw contours around the ventricle to determine how the heart is functioning, Beckers said. The new, AI-assisted software can provide deep learning-generated contours of the insides and outsides of the heart’s ventricles to speed up the process and improve accuracy.
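
As a concrete illustration of what those contours buy you: once the ventricle is outlined on each MRI slice, standard measurements such as ejection fraction reduce to simple arithmetic over the contour areas. A minimal Python sketch, with all areas and the slice thickness invented for illustration:

    # Estimate ventricular volumes and ejection fraction from per-slice
    # contour areas (Simpson's method: volume = sum of area x slice thickness).
    def volume_ml(slice_areas_mm2, thickness_mm=8.0):
        return sum(a * thickness_mm for a in slice_areas_mm2) / 1000.0  # mm^3 -> mL

    ed_areas = [980, 1450, 1620, 1510, 1100, 620]  # end-diastole (hypothetical)
    es_areas = [380, 610, 700, 640, 420, 210]      # end-systole (hypothetical)

    edv = volume_ml(ed_areas)            # end-diastolic volume
    esv = volume_ml(es_areas)            # end-systolic volume
    ef = 100.0 * (edv - esv) / edv       # ejection fraction, %
    print(f"EDV {edv:.1f} mL, ESV {esv:.1f} mL, EF {ef:.1f}%")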

 

“It’s the new way of doing medical imaging, a cloud medical imaging application that can have AI embedded in it,” he said. “It has the potential to make sure that physicians benefit from the work of thousands of other physicians and can be transforming healthcare in a positive fashion.”

Scooped by Dr. Stefan Gruenwald

Kindred AI: Building machines with human intelligence


Suzanne Gildert is a founder and CTO of Kindred AI – a company pursuing the modest vision of “building machines with human-like intelligence.” Her startup just came out of stealth mode and I am both proud and humbled to say that this is the first-ever long-form interview that Suzanne has done. Kindred AI has raised 15 million dollars from notable investors and currently employs 35 experts in its offices in Toronto, Vancouver and San Francisco. Even better, Suzanne is a long-term Singularity.FM podcast fan, total tech geek, Gothic artist, PhD in experimental physics and former D-Wave quantum computer maker. Right now I honestly can’t think of a more interesting person to have a conversation with.

During our 100-minute discussion with Suzanne Gildert we cover a wide variety of interesting topics such as: why she sees herself as a scientist, engineer, maker and artist; the interplay between science and art; the influence of Gothic art in general and the images of angels and demons in particular; her journey from experimental physics into quantum computers and embodied AI; building tools to answer questions versus intelligent machines that can ask questions; the importance of massively transformative purpose; the convergence of robotics, the ability to move large data across networks and advanced machine learning algorithms; her dream of a world with non-biological intelligences living among us; whether she fears AI or not; the importance of embodying intelligence and providing human-like sensory perception; whether consciousness is a classical Newtonian emergent property or a quantum phenomenon; ethics and robot rights; self-preservation and Asimov’s Laws of Robotics; giving robots goals and values; the magnifying mirror of technology and the importance of asking questions…

 

Video is here

Scooped by Dr. Stefan Gruenwald

DeepMind: The Future of Artificial Intelligence

Artificial Intelligence is set to be a major part of the future of technology. Laboratories all over the world are making algorithms and machines that can do amazing feats, like make music or actually listen and speak like a human.

But with this level of progress, many are sounding warning bells regarding the development of super-intelligent AI. What will the future hold, and why are so many companies and institutions working on improving AI at a rapid pace?

Rescooped by Dr. Stefan Gruenwald from Systems Theory

Google's AI just created its own universal 'language'


The technology, used in Google Translate, can translate unseen material between languages. Google has previously taught its artificial intelligence to play games, and it's even capable of creating its own encryption. Now, its language translation tool has used machine learning to create a 'language' all of its own.

 

In September 2016, the search giant turned on its Google Neural Machine Translation (GNMT) system to help it automatically improve how it translates languages. The machine learning system analyses and makes sense of languages by looking at entire sentences – rather than individual phrases or words.

Following several months of testing, the researchers behind the AI have seen it blindly translate between language pairs it has never explicitly studied. "An example of this would be translations between Korean and Japanese where Korean <> Japanese examples were not shown to the system," Mike Schuster of Google Brain wrote in a blog post.

 

The team said the system was able to make "reasonable" translations of the languages it had not been taught to translate. In one instance, according to a research paper published alongside the blog post, the AI was taught Portuguese→English and English→Spanish. It was then able to translate directly from Portuguese to Spanish.

 

"To our knowledge, this is the first demonstration of true multilingual zero-shot translation," the paper explains. To make the system more accurate, the computer scientists then added additional data to the system about the languages.

 

However, the most remarkable feat of the research paper isn't that an AI can learn to translate languages without being shown examples of them first; it's the fact that it used this skill to create its own 'language'. "Visual interpretation of the results shows that these models learn a form of interlingua representation for the multilingual model between all involved language pairs," the researchers wrote in the paper.

 

An interlingua is a type of artificial language used as an intermediary between other languages. In this case, the interlingua existed within the AI to explain how material unseen in training could be translated. "Using a 3-dimensional representation of internal network data, we were able to take a peek into the system as it translated a set of sentences between all possible pairs of the Japanese, Korean, and English languages," the team's blog post continued. The data within the network allowed the team to interpret that the neural network was "encoding something" about the semantics of a sentence rather than comparing phrase-to-phrase translations.


Via Ben van Lier
Rescooped by Dr. Stefan Gruenwald from Fragments of Science

Quantum computers can talk to each other via a photon translator


Different kinds of quantum computers encode information using their own wavelengths of light, but a device that modifies their photons could allow them to network.

 

Quantum computers are theoretically capable of running calculations exponentially faster than classical computers, and can be made by exploiting atoms, superconductors, diamond crystals and more. Each of these has its own strengths: atoms are better at storing information, while superconductors are better at processing it. A device linking these diverse systems together would combine their strengths and compensate for their weaknesses. Once linked, these systems would talk to each other by sending and receiving photons. The photons would encode quantum states but, unlike the voltages and currents interpreted by a classical computer chip, they cannot be transmitted via copper wires.

 

What’s more, quantum rules require that a single photon must essentially carry a spread of frequencies, rather than a single frequency. For different components to talk to each other using photons, the spread of the sender’s photons must therefore be converted to the spread that the receiver can handle. That requires a device in the middle that can convert photons from one spread of frequencies to another, while still preserving their delicate quantum state.

 

Christine Silberhorn of the University of Paderborn in Germany and her colleagues have designed such a system. It includes a converter that “translates” photons emitted from one component into the infrared. That infrared photon is then transmitted over a fibre optic cable connected to a second component. Finally, the photon is translated into another frequency that the receiving component can read.
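
As a toy picture of why such a conversion can preserve the quantum state: ideal frequency conversion acts like a beam splitter between two frequency modes, a unitary operation that coherently transfers amplitude without ever measuring the photon. A minimal numerical sketch (a two-mode toy model with an arbitrary, made-up coupling):

    import numpy as np

    # Single photon coupled between frequency mode 1 and frequency mode 2.
    # The evolution is unitary: at g*t = pi/2 the photon is fully converted,
    # and the quantum state is transformed, never measured.
    g = 1.0  # coupling strength (arbitrary units, hypothetical)
    for t in np.linspace(0, np.pi, 5):
        U = np.array([[np.cos(g * t), -1j * np.sin(g * t)],
                      [-1j * np.sin(g * t), np.cos(g * t)]])
        psi = U @ np.array([1.0, 0.0])  # photon starts in mode 1
        print(f"g*t = {t:.2f}  P(converted) = {abs(psi[1])**2:.2f}")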

 

Only part of the system has been built so far: the researchers have managed to convert infrared photons to a visible wavelength – while leaving their quantum state intact – with a success rate of about 75 per cent. But the technique could be adapted to build the full system, Silberhorn says.

 

Once that is done, the next step would be to figure out how to fit the device on a chip that could be manufactured easily and cheaply in large quantities, says Arka Majumdar of the University of Washington in Seattle. “The science works,” he says. “But scalability is the biggest problem. Making the same device 1000 times is extremely difficult.”


Via Mariaschnee
Rescooped by Dr. Stefan Gruenwald from Conformable Contacts

Google accidentally developed one of the ultimate content libraries for VR


Google Earth VR is a surprise project from Google’s Geo team that is being both announced and released today. This is the team behind Google Maps and the original Google Earth. The group has spent the better part of a decade collecting and cataloging an obscene amount of visual data from all over the planet. Now, all of that information is being used to create a VR experience that can take you almost anywhere on Earth you want to go.

 

According to Mike Podwal, project manager for Google Earth VR, “94 percent of the world’s population is covered in this experience. 54 percent of the Earth’s land mass is covered. There are around 175 cities with full, 3D data, and over 600 ‘urban cores’ as well.” Google Earth turns all of this data into completely explorable, scalable 3D immersive worlds for the HTC Vive VR headset.


Via YEC Geo
YEC Geo's curator insight, November 17, 2016 9:17 AM
I'm apprehensive about the effect of VR upon an already distracted populace, but this--now this is really cool!
Scooped by Dr. Stefan Gruenwald

Researchers question if banning of ‘killer robots’ actually will stop robots from killing


A University at Buffalo research team has published a paper arguing that the rush to ban and demonize autonomous weapons, or "killer robots," may at best be a temporary fix; the deeper problem is that society is entering a situation in which systems like these have become, and will continue to become, possible.

 

Killer robots are at the center of classic stories told in films such as “The Terminator” and the original Star Trek television series’ “The Doomsday Machine,” yet the idea of fully autonomous weapons acting independently of any human agency is not the exclusive license of science fiction writers.

 

Killer robots have a Pentagon budget line and a group of non-governmental organizations, including Human Rights Watch, is already working collectively to stop their development.

 

Governance and control of systems like killer robots needs to go beyond the end products. “We have to deconstruct the term ‘killer robot’ into smaller cultural techniques,” says Tero Karppi, assistant professor of media study, whose paper with Marc Böhlen, UB professor of media study, and Yvette Granta, a graduate student at the university, appears in the International Journal of Cultural Studies.

 

“We need to go back and look at the history of machine learning, pattern recognition and predictive modeling, and how these things are conceived,” says Karppi, an expert in critical platform and software studies whose interests include automation, artificial intelligence and how these systems fail. “What are the principles and ideologies of building an automated system? What can it do?”

 

By looking at killer robots we are forced to address questions that are set to define the coming age of automation, artificial intelligence and robotics, he says. “Are humans better than robots to make decisions? If not, then what separates humans from robots? When we are defining what robots are and what they do we also define what it means to be a human in this culture and this society,” Karppi says.

 

Scooped by Dr. Stefan Gruenwald

Google AI invents its own cryptographic algorithm and no one knows how it works


Google Brain has created two artificial intelligences that evolved their own cryptographic algorithm to protect their messages from a third AI, which was trying to evolve its own method to crack the AI-generated crypto. The study was a success: the first two AIs learnt how to communicate securely from scratch.

 

The Google Brain team (which is based in Mountain View, California, and is separate from DeepMind in London) started with three fairly vanilla neural networks called Alice, Bob, and Eve. Each neural network was given a very specific goal: Alice had to send a secure message to Bob; Bob had to try and decrypt the message; and Eve had to try and eavesdrop on the message and try to decrypt it. Alice and Bob have one advantage over Eve: they start with a shared secret key (i.e. this is symmetric encryption).

 

Importantly, the AIs were not told how to encrypt stuff, or what crypto techniques to use: they were just given a loss function (a failure condition), and then they got on with it. In Eve's case, the loss function was very simple: the distance, measured in correct and incorrect bits, between Alice's original input plaintext and its guess. For Alice and Bob the loss function was a bit more complex: if Bob's guess (again measured in bits) was too far from the original input plaintext, it was a loss; for Alice, if Eve's guesses are better than random guessing, it's a loss. And thus a generative adversarial network (GAN) was created.
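
The loss bookkeeping described above can be sketched with stand-in bit vectors in place of real networks; the message length, distance measure, and random "outputs" below are simplified assumptions, not the paper's exact setup:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 16                             # message length in bits (hypothetical)
    plaintext = rng.integers(0, 2, n)
    bob_guess = rng.integers(0, 2, n)  # stand-ins for the networks' outputs
    eve_guess = rng.integers(0, 2, n)

    def bit_distance(a, b):
        return int(np.sum(a != b))     # number of wrong bits

    # Eve's loss: distance between the plaintext and her guess.
    eve_loss = bit_distance(plaintext, eve_guess)

    # Bob's term: his reconstruction should be close to the plaintext.
    bob_loss = bit_distance(plaintext, bob_guess)

    # Alice/Bob joint loss: penalize Bob's errors, plus penalize Eve doing
    # better than random guessing (n/2 wrong bits expected for random).
    alice_bob_loss = bob_loss + max(0, n / 2 - eve_loss)
    print(eve_loss, bob_loss, alice_bob_loss)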

Scooped by Dr. Stefan Gruenwald

AI milestone: a new system can match humans in speech recognition


A team at Microsoft's Artificial Intelligence and Research group has published a study in which they demonstrate a technology that recognizes spoken words in a conversation as well as a real person does.

 

Last month, the same team achieved a word error rate (WER) of 6.3%. In their new paper this week, they report a WER of just 5.9%, which is equal to that of professional transcriptionists and is the lowest ever recorded against the industry standard Switchboard speech recognition task. “We’ve reached human parity,” said Xuedong Huang, the company’s chief speech scientist. “This is an historic achievement.”
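
For context, word error rate is the yardstick behind these numbers: the minimum number of word substitutions, insertions, and deletions needed to turn the system's transcript into the reference, divided by the reference length. A minimal implementation with a made-up example:

    def word_error_rate(reference, hypothesis):
        # Classic edit-distance dynamic program over words.
        r, h = reference.split(), hypothesis.split()
        d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
        for i in range(len(r) + 1):
            d[i][0] = i
        for j in range(len(h) + 1):
            d[0][j] = j
        for i in range(1, len(r) + 1):
            for j in range(1, len(h) + 1):
                sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
                d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
        return d[len(r)][len(h)] / len(r)

    # One deleted word out of six reference words: WER = 1/6 ~ 0.167
    print(word_error_rate("switchboard calls are hard to transcribe",
                          "switchboard calls are hard transcribe"))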

 

“Even five years ago, I wouldn’t have thought we could have achieved this,” said Harry Shum, the group's executive vice president. “I just wouldn’t have thought it would be possible.”

 

Microsoft has been involved in speech recognition and speech synthesis research for many years. The company developed Speech API in 1994 and later introduced speech recognition technology in Office XP and Office 2003, as well as Internet Explorer. However, the word error rates for these applications were much higher back then.

 

In their new paper, the researchers write: "the key to our system's performance is the systematic use of convolutional and LSTM neural networks, combined with a novel spatial smoothing method and lattice-free MMI acoustic training."

 

The team used Microsoft’s own Computational Network Toolkit – an open-source deep learning framework that can run deep learning algorithms across multiple computers equipped with specialized GPUs, greatly improving training speed and the quality of the research. The team believes their milestone will have broad implications for both consumer and business products, including entertainment devices like the Xbox, accessibility tools such as instant speech-to-text transcription, and personal digital assistants such as Cortana. “This will make Cortana more powerful, making a truly intelligent assistant possible,” Shum said.

 

“The next frontier is to move from recognition to understanding,” said Geoffrey Zweig, who manages the Speech & Dialog research group. Future improvements may also include speech recognition that works well in more real-life settings – places with lots of background noise, for example, such as at a party or while driving on the highway. The technology will also become better at assigning names to individual speakers when multiple people are talking, as well as working with a wide variety of voices, regardless of age, accent or ability.

 

The full study – Achieving Human Parity in Conversational Speech Recognition – is available at: https://arxiv.org/abs/1610.05256

Scooped by Dr. Stefan Gruenwald

Stephen Wolfram: AI & The Future Of Human Civilization


What makes us different from all these things? What makes us different is the particulars of our history, which gives us our notions of purpose and goals. That's a long way of saying when we have the box on the desk that thinks as well as any brain does, the thing it doesn't have, intrinsically, is the goals and purposes that we have. Those are defined by our particulars—our particular biology, our particular psychology, our particular cultural history.

 

The thing we have to think about as we think about the future of these things is the goals. That's what humans contribute, that's what our civilization contributes—execution of those goals; that's what we can increasingly automate. We've been automating it for thousands of years. We will succeed in having very good automation of those goals. I've spent some significant part of my life building technology to essentially go from a human concept of a goal to something that gets done in the world.

 

There are many questions that come from this. For example, we've got these great AIs and they're able to execute goals, how do we tell them what to do?...

 

STEPHEN WOLFRAM, distinguished scientist, inventor, author, and business leader, is Founder & CEO, Wolfram Research; Creator, Mathematica, Wolfram|Alpha & the Wolfram Language; Author, A New Kind of Science. Stephen Wolfram's Edge Bio Page

Rescooped by Dr. Stefan Gruenwald from Learning Technology News

Google’s Neural Network for Language Translation Narrows Gap Between Machines & Humans

Of course, machine translation is still far from perfect. Despite its advances, GNMT can still mistranslate, particularly when it encounters proper names or rare words, which prompt the system to, again, translate individual words instead of looking at them within the context of the whole. Clearly, there is still a gap between human and machine translations, but with GNMT, it is getting smaller.

 

By virtue of how many words, phrases, and grammar rules there are, you can only imagine how difficult it is for a person to deliver a translation from one language to another that accurately conveys the thought and nuance behind even the simplest sentence, let alone how difficult it would be for a machine. This, however, was a challenge that Google wanted to tackle head on, and today, the results of all those years spent working on a machine learning translation system were finally unveiled.

 

Languages are naturally phrase-based, so letting machines learn as many of those phrases and the subtleties behind the language as possible so that they could be applied to the translation was essential. Getting machines to do all that requires a lot of data, and adding complex language rules into the mix requires neural networks. While Google may not be the only company that has been looking into this method for more precise and accurate translations, they managed to be the first to get it done.

 

The Google Neural Machine Translation system (GNMT) uses state-of-the-art training techniques that significantly improve machine translation quality. It relies on long short-term memory recurrent neural networks (LSTM-RNNs) trained on graphics processing units (GPUs) and tensor processing units (TPUs) to crunch data more efficiently. It all adds up to a new system that can lower translation errors by 55 to 85 percent.

 


Via Nik Peachey
Nik Peachey's curator insight, October 13, 2016 4:56 AM

A good thing for those in the language learning industry to keep up with.

EI Design's curator insight, October 17, 2016 6:31 AM
Google’s Neural Network for Language Translation Narrows Gap Between Machines & Humans
Rescooped by Dr. Stefan Gruenwald from Fragments of Science

Machine learning kept unstable quantum bits in line – even before they wavered


Imagine predicting your car will break down and being able to replace the faulty part before it becomes a problem. Now Australian physicists have found a way to do this – albeit on a quantum scale.

 

In Nature Communications, they describe how they enlisted machine learning to “foresee” the future failure of a quantum bit, or qubit, and make the necessary corrections to stop it happening.

 

Quantum computing is a potentially world-changing technology, with the potential to complete in minutes tasks that would take current computers thousands of years. But practical, large-scale quantum technology is probably still a long way off.

 

One of the major challenges is maintaining qubits in the delicate, zen-like state of superposition they need to do their business.

Any tiny nudge from the environment – such as the jiggly atom next door – knocks the qubit off balance.

 

So physicists go to great lengths to stabilize qubits, cooling them to more than 200 degrees below zero to reduce atomic jiggling. Still, superposition typically lasts but a tiny fraction of a second, and this cuts quantum number-crunching time short.

 

A team led by Michael Biercuk at the University of Sydney found a new way of stabilizing qubits against noise in the environment. It works by predicting how a qubit will behave and acting preemptively. In a quantum computer, the technique could make qubits twice as stable as before. The team used control theory and machine learning (a kind of artificial intelligence) to estimate how the future of a qubit would play out.

 

Control theory is the branch of engineering that deals with feedback systems, such as the thermostat keeping your room temperature constant. The thermostat reacts to changes in the environment, triggering warm or cool air to be pumped into the room. Meanwhile, new machine learning algorithms look at how the system behaved in the past and use this information to predict how it will react to future events.

 

First, Biercuk’s team made a qubit by trapping a single ion of ytterbium in a beam of laser light. To train their algorithm, they simulated noise, tweaking the light to disturb the atom in a controlled way. Their algorithm monitored how the qubit responded to these tweaks and made a prediction for how it would behave in future. Next, they let events play out for the qubit to check their algorithm’s accuracy. The longer the algorithm watched the qubit, the more accurate its predictions became. Finally, the team used the predictions to help the system self-correct. The qubit was twice as stable with the algorithm as without it.
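
The underlying recipe (predict the slow drift from its own history, then correct it before it does damage) can be sketched with an ordinary autoregressive predictor standing in for the machine-learning model; the noise process and every number below are invented for illustration:

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical slowly drifting noise on a qubit's phase: a smooth
    # sinusoidal drift plus a little jitter.
    t = np.arange(400)
    drift = 0.5 * np.sin(2 * np.pi * t / 120) + 0.02 * rng.standard_normal(400)

    # Fit a linear autoregressive predictor on the first 300 points:
    # drift[k] ~ w . drift[k-p:k]
    p = 10
    X = np.array([drift[k - p:k] for k in range(p, 300)])
    y = drift[p:300]
    w, *_ = np.linalg.lstsq(X, y, rcond=None)

    # Predict ahead on held-out data and "correct" preemptively.
    pred = np.array([drift[k - p:k] @ w for k in range(300, 400)])
    residual = drift[300:400] - pred
    print(f"uncorrected RMS {np.sqrt(np.mean(drift[300:400] ** 2)):.3f}, "
          f"corrected RMS {np.sqrt(np.mean(residual ** 2)):.3f}")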


Via Mariaschnee
Scooped by Dr. Stefan Gruenwald

Making an AI system that performs in the 75th percentile for American adults on standard IQ tests


A Northwestern University team developed a new computational model that performs at human levels on a standard intelligence test. This work is an important step toward making artificial intelligence systems that see and understand the world as humans do.

 

“The model performs in the 75th percentile for American adults, making it better than average,” said Northwestern Engineering’s Ken Forbus. “The problems that are hard for people are also hard for the model, providing additional evidence that its operation is capturing some important properties of human cognition.”

 

The new computational model is built on CogSketch, an artificial intelligence platform previously developed in Forbus’ laboratory. The platform has the ability to solve visual problems and understand sketches in order to give immediate, interactive feedback. CogSketch also incorporates a computational model of analogy, based on Northwestern psychology professor Dedre Gentner’s structure-mapping theory.

 

Forbus, Walter P. Murphy Professor of Electrical Engineering and Computer Science at Northwestern’s McCormick School of Engineering, developed the model with Andrew Lovett, a former Northwestern postdoctoral researcher in psychology. Their research was published online this month in the journal Psychological Review.

Scooped by Dr. Stefan Gruenwald

The Pentagon's new drone swarm heralds a future of autonomous war machines


On Oct. 26, 2016, a pair of Hornets flying above an empty part of California opened their bellies and released a robotic swarm. With machine precision, the fast-moving unmanned flying machines took flight, then moved to a series of waypoints, meeting objectives set for the swarm by a human controller. The brief flight of 103 tiny drones heralds a new age in how, exactly, America uses robots at war.

 

The Pentagon has worked with Perdix drones since 2013; the October flight used the military’s sixth generation of the devices. F/A-18 Hornets, long-serving Navy fighters, carried the drones and released them from flare dispensers. The small drones were the subject of an episode of CBS’s 60 Minutes, and they move so fast they’re hard to film. Below, in a clip from the Department of Defense, the drones are barely visible as dark blurs beneath the fighters.

 

Captured by telemetry video on the ground, the swarm is clearly visible. It appears as if from nowhere, then moves as one towards a new set of objectives. This drone swarm was a product of the Strategic Capabilities Office, and outgoing Secretary of Defense Ash Carter praised the work, saying “This is the kind of cutting-edge innovation that will keep us a step ahead of our adversaries. This demonstration will advance our development of autonomous systems.”

 

Autonomy and swarming are centerpieces in many predictions about the next century of war. The Predator, Reaper, and Global Hawk drones that have so far most embodied how the United States fights wars are big, expensive, and vulnerable machines, with human pilots and sensor operators controlling them remotely. These drones also operate in skies relatively free of threats, without fear that a hostile jet will shoot them down. That’s an approach that’s fine for counterinsurgency battles, an admittedly large part of the wars the Pentagon actually fights, but against a near-peer nation or any foe with sophisticated anti-air or electronic jamming equipment, Reapers are extremely vulnerable targets.

 

Swarms, in which several small flying robots work together to do the same job previously done by a larger craft, are one way around that. A few $45,000 anti-air missiles are a cost-effective way to shoot down an $18 million Reaper, but firing that same anti-air missile at a smaller, commercial drone isn’t as effective, especially when there are still 102 other drones flying the same mission at the same time.

 

Controlling that swarm is where autonomy comes in. With every Predator drone, there’s an actual joystick and flight controls for a human pilot, whose job it is to direct the uncrewed plane and maneuver it. That one-to-one ratio would be impossible to maintain with a small drone swarm, and given that the Perdix drone has a listed flight time of “over 20 minutes,” it would be a lot of effort for a very short excursion.
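
One concrete reason a shared autonomy layer scales where one-pilot-per-drone cannot: objectives such as waypoint coverage can be assigned algorithmically for the whole swarm at once. A toy greedy assignment (positions and counts are made up; real swarm coordination is far more involved):

    import math

    drones = [(0, i) for i in range(8)]          # hypothetical 2D positions
    waypoints = [(10, 7 - i) for i in range(8)]

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    # Greedy assignment: each waypoint goes to the nearest still-free drone.
    # A human sets the objectives; no one steers individual aircraft.
    free = set(range(len(drones)))
    plan = {}
    for w, wp in enumerate(waypoints):
        d = min(free, key=lambda i: dist(drones[i], wp))
        plan[w] = d
        free.remove(d)

    for w, d in sorted(plan.items()):
        print(f"waypoint {w} <- drone {d} ({dist(drones[d], waypoints[w]):.1f} away)")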

Scooped by Dr. Stefan Gruenwald

Artificial intelligence used to generate new cancer drugs on demand

Clinical trial failure rates for small molecules in oncology exceed 94% for molecules previously tested in animals and the costs to bring a new drug to market exceed $2.5 billion. Advances in deep learning demonstrated superhuman accuracy in many areas and are expected to transform industries. Here for the first time we demonstrate the application of Generative Adversarial Autoencoders (AAEs), a new type of Generative Adversarial Networks (GAN), for generation of molecular fingerprints of molecules that kill cancer cells at specific concentrations.

 

Scientists at the Pharmaceutical Artificial Intelligence (pharma.AI) group of Insilico Medicine, Inc, today announced the publication of a seminal paper demonstrating the application of generative adversarial autoencoders (AAEs) to generating new molecular fingerprints on demand. The study was published in Oncotarget on 22nd of December, 2016. The study represents the proof of concept for applying Generative Adversarial Networks (GANs) to drug discovery. The authors significantly extended this model to generate new leads according to multiple requested characteristics and plan to launch a comprehensive GAN-based drug discovery engine producing promising therapeutic treatments to significantly accelerate pharmaceutical R&D and improve the success rates in clinical trials.

 

Since 2010, deep learning systems have demonstrated unprecedented results in image, voice and text recognition, in many cases surpassing human accuracy and enabling autonomous driving, automated creation of pleasant art and even composition of pleasant music.

 

GAN is a fresh direction in deep learning invented by Ian Goodfellow in 2014. In recent years GANs produced extraordinary results in generating meaningful images according to the desired descriptions. Similar principles can be applied to drug discovery and biomarker development. This paper represents a proof of concept of an artificially-intelligent drug discovery engine, where AAEs are used to generate new molecular fingerprints with the desired molecular properties.
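
For readers unfamiliar with the architecture: an adversarial autoencoder is an ordinary autoencoder whose latent codes are additionally pushed, by a discriminator, to match a chosen prior, so that sampling the prior and decoding yields new fingerprints. A compressed PyTorch sketch under stated assumptions (a hypothetical 166-bit fingerprint, tiny layers, random stand-in data; not Insilico's actual model):

    import torch
    import torch.nn as nn

    FP, LATENT = 166, 8  # fingerprint bits, latent size (both hypothetical)

    encoder = nn.Sequential(nn.Linear(FP, 64), nn.ReLU(), nn.Linear(64, LATENT))
    decoder = nn.Sequential(nn.Linear(LATENT, 64), nn.ReLU(),
                            nn.Linear(64, FP), nn.Sigmoid())
    discrim = nn.Sequential(nn.Linear(LATENT, 32), nn.ReLU(),
                            nn.Linear(32, 1), nn.Sigmoid())

    opt_ae = torch.optim.Adam(list(encoder.parameters()) +
                              list(decoder.parameters()), lr=1e-3)
    opt_d = torch.optim.Adam(discrim.parameters(), lr=1e-3)
    bce = nn.BCELoss()
    x = (torch.rand(32, FP) > 0.5).float()  # stand-in batch of fingerprints

    for step in range(200):
        # 1) Reconstruction: autoencode the fingerprints.
        opt_ae.zero_grad()
        bce(decoder(encoder(x)), x).backward()
        opt_ae.step()

        # 2) Discriminator: tell prior samples ("real") from codes ("fake").
        opt_d.zero_grad()
        z = encoder(x).detach()
        d_loss = bce(discrim(torch.randn_like(z)), torch.ones(32, 1)) + \
                 bce(discrim(z), torch.zeros(32, 1))
        d_loss.backward()
        opt_d.step()

        # 3) Regularization: the encoder tries to fool the discriminator,
        #    pulling its codes toward the N(0,1) prior.
        opt_ae.zero_grad()
        bce(discrim(encoder(x)), torch.ones(32, 1)).backward()
        opt_ae.step()

    # Generate: sample the prior, decode, threshold to a new fingerprint.
    new_fp = (decoder(torch.randn(5, LATENT)) > 0.5).int()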

"At Insilico Medicine we want to be the supplier of meaningful, high-value drug leads in many disease areas with high probability of passing the Phase I/II clinical trials. While this publication is a proof of concept and only generates the molecular fingerprints with the very basic molecular properties, internally we can now generate entire molecular structures according to a large number of parameters.

 

These structures can be fed into our multi-modal drug discovery pipeline, which predicts therapeutic class, efficacy, side effects and many other parameters. Imagine an intelligent system, which one can instruct to produce a set of molecules with specified properties that kill certain cancer cells at a specified dose in a specific subset of the patient population, then predict the age-adjusted and specific biomarker-adjusted efficacy, predict the adverse effects and evaluate the probability of passing the human clinical trials. This is our big vision", said Alex Zhavoronkov, PhD, CEO of Insilico Medicine, Inc.

 

Previously, Insilico Medicine demonstrated the predictive power of its discovery systems in the nutraceutical industry. In 2017 Life Extension will launch a range of natural products developed using Insilico Medicine's discovery pipelines. Earlier this year the pharmaceutical artificial intelligence division of Insilico Medicine published several seminal proof of concept papers demonstrating the applications of deep learning to drug discovery, biomarker development and aging research.

 

Recently the authors published a tool in Nature Communications, which is used for dimensionality reduction in transcriptomic data for training deep neural networks (DNNs). The paper published in Molecular Pharmaceutics demonstrating the applications of deep neural networks for predicting the therapeutic class of the molecule using the transcriptional response data received the American Chemical Society Editors' Choice Award. Another paper demonstrating the ability to predict the chronological age of the patient using a simple blood test, published in Aging, became the second most popular paper in the journal's history.

Rescooped by Dr. Stefan Gruenwald from Fragments of Science

Map of drugs reveals uncharted waters in search for new treatments


Scientists have created a map of all 1,578 licensed drugs and their mechanisms of action - as a means of identifying 'uncharted waters' in the search for future treatments.

Their analysis of drugs licensed through the Food and Drug Administration reveals that 667 separate proteins in the human body have had drugs developed against them - just an estimated 3.5% of the 20,000 human proteins. And as many as 70 per cent of all targeted drugs created so far work by acting on just four families of proteins - leaving vast swathes of human biology untouched by drug discovery programs.

 

The study is the most comprehensive analysis of existing drug treatments across all diseases ever conducted. It was jointly led by scientists at The Institute of Cancer Research, London, which also funded the research.

 

The new map reveals areas where human genes and the proteins they encode could be promising targets for new treatments - and could also be used to identify where a treatment for one disease could be effective against another.

 

The new data, published in a paper in the journal Nature Reviews Drug Discovery, could be used to improve treatments for all human ailments - as diverse as cancer, mental illness, chronic pain and infectious disease.

 

Scientists brought together vast amounts of information from huge datasets including the canSAR database at The Institute of Cancer Research (ICR), the ChEMBL database from the European Bioinformatics Institute (EMBL-EBI) in Cambridge and the University of New Mexico's DrugCentral database. They matched each drug with prescribing information and data from published scientific papers, and built up a comprehensive picture of how existing medicines work - and where the gaps and opportunities for the future lie.

 

The researchers discovered that there are 667 unique human proteins targeted by existing approved drugs, and identified a further 189 drug targets in organisms that are harmful to humans, such as bacteria, viruses and parasites.


Via Mariaschnee
Rescooped by Dr. Stefan Gruenwald from Fragments of Science

Artificial intelligence-based system associates images with sounds


The cow goes "moo." The pig goes "oink." A child can learn from a picture book to associate images with sounds, but building a computer vision system that can train itself isn't as simple. Using artificial intelligence techniques, however, researchers at Disney Research and ETH Zurich have designed a system that can automatically learn the association between images and the sounds they could plausibly make.

 

Given a picture of a car, for instance, their system can automatically return the sound of a car engine. A system that knows the sound of a car, a splintering dish, or a slamming door might be used in a number of applications, such as adding sound effects to films, or giving audio feedback to people with visual disabilities, noted Jean-Charles Bazin, associate research scientist at Disney Research.

 

To solve this challenging task, the research team leveraged data from collections of videos. "Videos with audio tracks provide us with a natural way to learn correlations between sounds and images," Bazin said. "Video cameras equipped with microphones capture synchronized audio and visual information. In principle, every video frame is a possible training example."

 

One of the key challenges is that videos often contain many sounds that have nothing to do with the visual content. These uncorrelated sounds can include background music, voice-over narration and off-screen noises and sound effects and can confound the learning scheme.

 

"Sounds associated with a video image can be highly ambiguous," explained Markus Gross, vice president for Disney Research. "By figuring out a way to filter out these extraneous sounds, our research team has taken a big step toward an array of new applications for computer vision."

 

"If we have a video collection of cars, the videos that contain actual car engine sounds will have audio features that recur across multiple videos" Bazin said. "On the other hand, the uncorrelated sounds that some videos might contain generally won't share any redundant features with other videos, and thus can be filtered out."


Via Mariaschnee
Scooped by Dr. Stefan Gruenwald

IBM: How to use your phone camera to identify skin cancer with an AI app

Technology like computer vision and machine learning helps identify cancerous spots.

 

Melanoma, the deadliest form of skin cancer, is expected to cause more than 10,000 deaths in the U.S. alone in 2016. Researchers are now hard-pressed to find a new way to catch the disease and others like it in the earliest stages. That's where that handy, ubiquitous iPhone camera can help, according to new research.

In an IBM Research Blog post, Dr. Noel Codella outlines a means of identifying markers of melanoma via skin image analysis that might be available to doctors and patients in the future.

 

The methodology for home diagnosis via smartphone is relatively simple (at least in theory): When someone finds a questionable spot on their skin, they use their handset's camera to take a picture of the lesion and submit the image to be assessed by an analytics service, which can recognize and reliably identify the characteristics of disease. 
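
In code, the client side of such a workflow is little more than photograph, normalize, upload. A hedged sketch: the endpoint URL, field names, and response schema are entirely hypothetical stand-ins for whatever service would actually run the analysis:

    import requests              # pip install requests
    from PIL import Image        # pip install pillow

    ANALYSIS_URL = "https://example.com/api/lesion-analysis"  # hypothetical

    def submit_lesion_photo(path):
        # Normalize the photo so the service sees consistent input.
        img = Image.open(path).convert("RGB").resize((512, 512))
        img.save("normalized.jpg", "JPEG", quality=95)

        # Upload for analysis; a real service would require auth and consent.
        with open("normalized.jpg", "rb") as f:
            resp = requests.post(ANALYSIS_URL, files={"image": f}, timeout=30)
        resp.raise_for_status()
        return resp.json()  # e.g. {"melanoma_risk": 0.12} -- made-up schema

    # print(submit_lesion_photo("spot.jpg"))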

 

In practice, though, it's much more complicated than that. We've seen this type of system put to the test before in standalone apps, but those programs were woefully inadequate at best, resulting in a dreadful 93 percent failure rate in some instances. But that was all the way back in 2013. Now, the IBM team is employing much more powerful tools to improve the accuracy of computer image analysis.

 

The key to the success of this project hinges on two factors. The first is the widespread use of dermascopes, devices that attach to smartphone cameras to optimize photos of lesions for analysis. The second (and more important) factor is the development of a massive database containing images of cancerous skin spots. The database is accessed using IBM's machine learning, computer vision and cloud computing capabilities to develop the means to consistently identify cases of melanoma through technology.

Scooped by Dr. Stefan Gruenwald

This AI software dreams up new drug molecules


What do you get if you cross aspirin with ibuprofen? Harvard chemistry professor Alán Aspuru-Guzik isn’t sure, but he’s trained software that could give him an answer by suggesting a molecular structure that combines properties of both drugs.

 

The AI program could help the search for new drug compounds. Pharmaceutical research tends to rely on software that exhaustively crawls through giant pools of candidate molecules using rules written by chemists, and simulations that try to identify or predict useful structures. The former relies on humans thinking of everything, while the latter is limited by the accuracy of simulations and the computing power required.

 

Aspuru-Guzik’s system can dream up structures more independently of humans and without lengthy simulations. It leverages its own experience, built up by training machine-learning algorithms with data on hundreds of thousands of drug-like molecules.

 

"It explores more intuitively, using chemical knowledge it learned, like a chemist would," says Aspuru-Guzik. "Humans could be better chemists with this kind of software as their assistant." Aspuru-Guzik was named to MIT Technology Review’s list of young innovators in 2010.

 

The new system was built using a machine-learning technique called deep learning, which has become pervasive in computing companies but is less established in the natural sciences. It uses a design known as a generative model, which takes in a trove of data and uses what it learned to generate plausible new data of its own.

 

Generative models are more typically used to create images, speech, or text, for example in the case of Google’s Smart Reply feature that suggests responses to e-mails. But last month Aspuru-Guzik and colleagues at Harvard, the University of Toronto, and the University of Cambridge published results from creating a generative model trained on 250,000 drug-like molecules.

 

The system could generate plausible new structures by combining properties of existing drug compounds, and could be asked to suggest molecules that strongly display certain properties, such as solubility and ease of synthesis.
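
That "suggest molecules with property X" step is typically a guided search of the model's latent space. A toy sketch of the loop, with an invented linear "solubility" scorer standing in for a trained property predictor and the decoding step left as a comment:

    import numpy as np

    rng = np.random.default_rng(3)
    LATENT = 8

    # Stand-in property predictor: a real system would fit this on data.
    w_solubility = rng.standard_normal(LATENT)

    def predicted_solubility(z):
        return float(w_solubility @ z)

    # Gradient ascent in latent space toward higher predicted solubility.
    z = rng.standard_normal(LATENT)      # start from a random "molecule"
    for _ in range(50):
        z = z + 0.1 * w_solubility       # gradient of the linear scorer
        z = np.clip(z, -3, 3)            # stay near the training prior

    print("optimized latent code:", np.round(z, 2))
    print("predicted solubility:", round(predicted_solubility(z), 2))
    # A trained decoder would now turn z back into a molecular structure.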

Scooped by Dr. Stefan Gruenwald

New IBM Watson Data Platform and Data Science Experience


IBM recently announced a new IBM Watson Data Platform that combines the world’s fastest data ingestion engine, touting speeds of more than 100 GB/second, with cloud data source, data science, and cognitive API services. IBM is also making the IBM Watson Machine Learning Service more intuitive with a self-service interface.

 

According to Bob Picciano, Senior Vice President of IBM Analytics, “Watson Data Platform applies cognitive assistance for creating machine learning models, making it far faster to get from data to insight. It also provides one place to access machine learning services and languages, so that anyone, from an app developer to the Chief Data Officer, can collaborate seamlessly to make sense of data, ask better questions, and more effectively operationalize insight.”

 

For more information or a free trial of IBM Watson Data Platform, Data Science Experience, Watson APIs, or Bluemix, see IBM's online resources.

Scooped by Dr. Stefan Gruenwald

Making computers explain themselves

A new technique for training deep-learning neural networks on natural-language-processing tasks provides rationales for the systems’ decisions. The technique was developed at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).

 

In recent years, the best-performing systems in artificial-intelligence research have come courtesy of neural networks, which look for patterns in training data that yield useful predictions or classifications. A neural net might, for instance, be trained to recognize certain objects in digital images or to infer the topics of texts. But neural nets are black boxes. After training, a network may be very good at classifying data, but even its creators will have no idea why. With visual data, it’s sometimes possible to automate experiments that determine which visual features a neural net is responding to. But text-processing systems tend to be more opaque.

 

At the Association for Computational Linguistics’ Conference on Empirical Methods in Natural Language Processing, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) will present a new way to train neural networks so that they provide not only predictions and classifications but rationales for their decisions.

 

“In real-world applications, sometimes people really want to know why the model makes the predictions it does,” says Tao Lei, an MIT graduate student in electrical engineering and computer science and first author on the new paper. “One major reason that doctors don’t trust machine-learning methods is that there’s no evidence.”

 

“It’s not only the medical domain,” adds Regina Barzilay, the Delta Electronics Professor of Electrical Engineering and Computer Science and Lei’s thesis advisor. “It’s in any domain where the cost of making the wrong prediction is very high. You need to justify why you did it.”

 

“There’s a broader aspect to this work, as well,” says Tommi Jaakkola, an MIT professor of electrical engineering and computer science and the third coauthor on the paper. “You may not want to just verify that the model is making the prediction in the right way; you might also want to exert some influence in terms of the types of predictions that it should make. How does a layperson communicate with a complex model that’s trained with algorithms that they know nothing about? They might be able to tell you about the rationale for a particular prediction. In that sense it opens up a different way of communicating with the model.”
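
The MIT system trains a generator network to select the text fragments that justify each prediction. As a much simpler stand-in that conveys the flavor, one can score words by how much the output changes when each is removed (occlusion); note that this is not the authors' generator-encoder architecture:

    # Toy "rationale" by occlusion: score each word by how much deleting it
    # moves a sentiment score. The classifier is a made-up word-weight lexicon.
    WEIGHTS = {"great": 2.0, "love": 1.5, "bland": -2.0, "awful": -2.5}

    def score(words):
        return sum(WEIGHTS.get(w, 0.0) for w in words)

    def rationale(sentence, top_k=2):
        words = sentence.lower().split()
        base = score(words)
        impact = [(abs(base - score(words[:i] + words[i + 1:])), w)
                  for i, w in enumerate(words)]
        return sorted(impact, reverse=True)[:top_k]

    # The words whose removal moves the score most act as the rationale.
    print(rationale("I love this phone but the screen is bland"))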

Rescooped by Dr. Stefan Gruenwald from Nostri Orbis

Google's 'DeepMind' AI platform can now learn without human input


The DeepMind artificial intelligence (AI) being developed by Google's parent company, Alphabet, can now intelligently build on what's already inside its memory, the system's programmers have announced. Their new hybrid system – called a Differential Neural Computer (DNC) – pairs a neural network with the vast data storage of conventional computers, and the AI is smart enough to navigate and learn from this external data bank.

 

“These models can learn from examples like neural networks, but they can also store complex data like computers,” wrote DeepMind researchers Alexander Graves and Greg Wayne. Much like the brain, the neural network uses an interconnected series of nodes to stimulate specific centers needed to complete a task. In this case, the AI is optimizing the nodes to find the quickest solution to deliver the desired outcome. Over time, it’ll use the acquired data to get more efficient at finding the correct answer.

 

The two examples given by the DeepMind team further clarify the process:

  1. After being told about relationships in a family tree, the DNC was able to figure out additional connections on its own all while optimizing its memory to find the information more quickly in future searches.
  2. The system was given the basics of the London Underground public transportation system and immediately went to work finding additional routes and the complicated relationship between routes on its own.

 

Instead of having to learn every possible outcome to find a solution, DeepMind can derive an answer from prior experience, unearthing the answer from its internal memory rather than from outside conditioning and programming. This process is exactly how DeepMind was able to beat a human champion at ‘Go’ — a game with millions of potential moves and an infinite number of combinations.
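
The mechanism that lets a DNC "unearth" stored facts is content-based addressing: the controller emits a query vector, and memory rows are read in proportion to their similarity to it. A minimal sketch with made-up memory contents:

    import numpy as np

    # External memory: 4 slots x 3 features (hypothetical stored facts).
    M = np.array([[1.0, 0.0, 0.0],
                  [0.9, 0.1, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])

    def read(memory, key, sharpness=10.0):
        # Cosine similarity between the query key and every memory row,
        # softened into read weights, then a weighted read-out.
        sims = memory @ key / (np.linalg.norm(memory, axis=1) *
                               np.linalg.norm(key) + 1e-8)
        w = np.exp(sharpness * sims)
        w /= w.sum()
        return w, w @ memory

    weights, value = read(M, np.array([1.0, 0.05, 0.0]))
    print("read weights:", np.round(weights, 3))  # mass on the two similar slots
    print("read vector:", np.round(value, 3))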

 

Depending on the point of view, this could be a serious turn of events for ever-smarter AI that might one day be capable of thinking and learning as humans do. Or, it might be time to start making plans for survival post-Skynet.


Via Fernando Gil
Scooped by Dr. Stefan Gruenwald

BMW's Futuristic Artificial Intelligence Motorcycle Balances on Its Own


The motorcycle of the future is so smart that it could eliminate the need for protective gear, according to automaker BMW.

To mark its 100th birthday, BMW has unveiled a number of concept vehicles that imagine the future of transportation. Possibly its most daring revelation, the so-called Motorrad Vision Next 100 concept motorcycle is so advanced that BMW claims riders wouldn't need a helmet.

 

The Motorrad Vision Next 100 would have a self-balancing system that keeps the bike upright both in motion and when still. BMW touted the motorbike's futuristic features, saying it would allow for riders of all skill levels to "enjoy the sensation of absolute freedom." According to the automaker, the Motorrad wouldn't require protective gear such as helmets and padded suits.
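
BMW has not said how the self-balancing works. As a generic illustration of the control problem, a stationary bike leans like an inverted pendulum, and even a simple proportional-derivative controller can hold such a toy model upright; all dynamics and gains below are invented:

    import math

    # Toy inverted-pendulum model of a stationary bike's lean angle (radians):
    # theta'' = (g / L) * sin(theta) + u, with u the controller's input.
    g, L, dt = 9.81, 1.0, 0.01
    theta, omega = 0.15, 0.0   # initial lean of about 8.6 degrees
    Kp, Kd = 30.0, 8.0         # PD gains (hypothetical)

    for step in range(300):
        u = -Kp * theta - Kd * omega           # torque opposing the lean
        alpha = (g / L) * math.sin(theta) + u  # angular acceleration
        omega += alpha * dt
        theta += omega * dt
        if step % 60 == 0:
            print(f"t={step * dt:.1f}s  lean={math.degrees(theta):6.2f} deg")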

 

Another traditional feature was also missing from the concept: a control panel. Instead, helmetless riders would wear a visor that acts as a smart display. "Information is exchanged between rider and bike largely via the smart visor," BMW said in a statement. "This spans the rider's entire field of view and provides not only wind protection but also relevant information, which it projects straight into the line of sight as and when it is needed." Such information would not be needed all the time because drivers will be able to hand over active control of the vehicle at points; the Motorrad and other Vision Next 100 vehicles would be equipped with self-driving technology, according to BMW.
