Researchers have demonstrated a display that lets audiences watch 3-D films in a theater without extra eyewear. Dubbed “Cinema 3D,” the MIT / Weizmann Institute of Science prototype uses lenses and mirrors to enable viewers to watch a 3-D movie from any seat.
MIT researchers have developed a new technique for imaging brain tissue at multiple scales, allowing them to image molecules within cells or take a wider view of the long-range connections between neurons. The technique, magnified analysis of the proteome (MAP), should help scientists chart the connectivity and functions of neurons in the human brain.
The creators of artificially intelligent machines are often depicted in popular fiction as myopic Dr. Frankensteins who are oblivious to the apocalyptic technologies they unleash upon the world. In real life, they tend to wring their hands over the big questions: good versus evil and the impact the coming wave of robots and machine brains will have on human workers.
Scientists, recognizing their work is breaking out of the research lab and into the real world, grappled during a daylong summit on Dec. 10 in Montreal with such ethical issues as how to prevent computers that are smarter than humans from putting people out of work, adding complications to legal proceedings, or, even worse, seeking to harm society. Today’s AI can learn how to play video games, help automate e-mail responses, and drive cars under certain conditions. That’s already provoked concerns about the effect it may have on workers.
"I think the biggest challenge is the challenge to employment," said Andrew Ng, the chief scientist for Chinese search engine Baidu Inc., which announced last week that one of its cars had driven itself on a 30 kilometer (19 mile) route around Beijing with no human required. The speed with which AI advances may change the workplace means "huge numbers of people in their 20s and 40s and 50s" would need to be retrained in a way that’s never happened before, he said.
"There’s no doubt that there are classes of jobs that can be automated today that could not be automated before," said Erik Brynjolfsson, an economist at the Massachusetts Institute of Technology, citing workers such as junior lawyers tasked with e-discovery or people manning the checkout aisles in self-checkout supermarkets.
"You hope that there are some new jobs needed in this economy," he said. "Entrepreneurs and managers haven’t been as creative in inventing the new jobs as they have been in automating some of the existing jobs."
Yann LeCun, Facebook’s director of AI research, isn’t as worried, saying that society has adapted to change in the past. "It’s another stage in the progress of technology," LeCun said. "It’s not going to be easy, but we’ll have to deal with it."
There are other potential quandaries, like how the legal landscape will change as AI starts making more decisions independent of any human operator. "It would be very difficult in some cases to bring an algorithm to the fore in the context of a legal proceeding," said Ian Kerr, the Canada Research Chair in Ethics, Law & Technology at the University of Ottawa Faculty of Law. "I think it would be a tremendous challenge."
“A team of researchers at the University of Zurich just announced that they've developed drone software capable of identifying and following trails.”
Leave the breadcrumbs at home, folks, because just this week a group of researchers in Switzerland announced the development of a drone capable of recognizing and following man-made forest trails. A collaboration between the University of Zurich and the Dalle Molle Institute of Artificial Intelligence, the research was reportedly undertaken to address the growing number of hikers lost each year.
According to the University of Zurich, an estimated 1,000 emergency calls are made each year regarding injured or lost hikers in Switzerland alone, an issue the group believes “inexpensive” drones could solve quickly.
Though the drone itself may get the bulk of the spotlight, it’s the artificial intelligence software developed by the partnership that deserves much of the credit. Run via a combination of AI algorithms, the software continuously scans its surroundings by way of two smartphone-like cameras built into the drone’s exterior. As the craft autonomously navigates a forested area, it continuously detects trails before piloting itself down open paths. However, the term “AI algorithms” is a deceptively simple way of describing something wildly complex. Before diving into the research, the team knew it would have to develop a supremely talented computing brain.
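The article describes a classifier that looks at camera frames and decides where the trail lies. As a minimal illustrative sketch (not the Zurich team's actual code), suppose the vision network emits confidence scores for three classes, turn left, go straight, turn right; those scores could then be mapped to a steering command. The function name, class ordering, and steering rule below are all assumptions for illustration:

```python
import numpy as np

def steering_from_scores(scores, max_yaw_deg=30.0):
    """Map hypothetical classifier scores [left, straight, right]
    to a yaw command in degrees (positive = turn right)."""
    scores = np.asarray(scores, dtype=float)
    probs = scores / scores.sum()      # normalize to probabilities
    left, straight, right = probs
    # Weight the turn by how confident the classifier is in each side.
    return (right - left) * max_yaw_deg

print(steering_from_scores([0.1, 0.2, 0.7]))  # 18.0: confident right turn
print(steering_from_scores([0.2, 0.6, 0.2]))  # 0.0: hold course
```

In the real system the scores would come from a trained deep network running on the drone's camera feed; the point here is only how a three-way classification can drive a continuous control signal.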
Instead of being programmed, a robot uses brain-inspired algorithms to “imagine” doing tasks before trying them in the real world.
Like many toddlers, Darwin sometimes looks a bit unsteady on its feet. But with each clumsy motion, the humanoid robot is demonstrating an important new way for androids to deal with challenging or unfamiliar environments. The robot learns to perform a new task by using a process somewhat similar to the neurological processes that underpin childhood learning.
Darwin lives in the lab of Pieter Abbeel, an associate professor at the University of California, Berkeley. When I saw the robot a few weeks ago, it was suspended from a camera tripod by a piece of rope, looking a bit tragic. A little while earlier, Darwin had been wriggling around on the end of the rope, trying to work out how best to move its limbs in order to stand up without falling over.
Darwin’s motions are controlled by several simulated neural networks—algorithms that mimic the way learning happens in a biological brain as the connections between neurons strengthen and weaken over time in response to input. The approach uses deep-learning networks, which have many layers of simulated neurons.
For the robot to learn how to stand and twist its body, for example, it first performs a series of simulations in order to train a high-level deep-learning network how to perform the task—something the researchers compare to an “imaginary process.” This provides overall guidance for the robot, while a second deep-learning network is trained to carry out the task while responding to the dynamics of the robot’s joints and the complexity of the real environment. The second network is required because when the first network tries, for example, to move a leg, the friction experienced at the point of contact with the ground may throw it off completely, causing the robot to fall.
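The two-network scheme described above can be sketched in a few lines. This is an illustrative toy, not the Berkeley team's implementation: the layer sizes, the random (untrained) weights, and the idea of simply concatenating the high-level targets with joint feedback are all assumptions made to show the data flow from "imagined" plan to motor command:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Random-weight multilayer perceptron; stands in for a trained network."""
    return [rng.standard_normal((m, n)) * 0.1 for m, n in zip(sizes[:-1], sizes[1:])]

def forward(weights, x):
    """Forward pass with tanh hidden layers and a linear output layer."""
    for i, W in enumerate(weights):
        x = x @ W
        if i < len(weights) - 1:
            x = np.tanh(x)
    return x

high_level = mlp([4, 16, 6])    # task state -> target joint positions
low_level = mlp([12, 16, 6])    # targets + joint feedback -> motor commands

task_state = np.zeros(4)                       # e.g. "stand up" goal encoding
targets = forward(high_level, task_state)      # plan trained in simulation
feedback = np.zeros(6)                         # measured joint state at runtime
command = forward(low_level, np.concatenate([targets, feedback]))
print(command.shape)  # (6,)
```

The division of labor mirrors the article: the first network supplies overall guidance learned in simulation, while the second corrects for real-world effects, such as ground friction, that the simulator did not capture.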
The researchers had the robot learn to stand, to move its hand to perform reaching motions, and to stay upright when the ground beneath it tilts. “It practices in simulation for about an hour,” says Igor Mordatch, a postdoctoral researcher at UC Berkeley who carried out the study. “Then at runtime it’s learning on the fly how not to slip.”
“We’re trying to be able to deal with more variability,” says Abbeel. “Just even a little variability beyond what it was designed for makes it really hard to make it work.” The new technique could help any robot working in all sorts of real environments, but it might be especially valuable for more graceful legged locomotion.
The current approach is to design an algorithm that takes into account the dynamics of a process such as walking or running (see “The Robots Walking This Way”). But such models can struggle to deal with variation in the real world, as many of the humanoid robots involved in the DARPA Robotics Challenge demonstrated by falling over when walking on sand, or when unbalancing themselves by reaching out to grasp something (see “Why Robots, and Humans, Struggled with DARPA’s Challenge”). “It was a bit of a reality check,” Abbeel says. “That’s what happens in the real world.”
Dieter Fox, a professor in the computer science and engineering department at the University of Washington who specializes in robot perception and control, says neural network learning has huge potential in robotics. “I’m very excited about this whole research direction,” Fox says. “The problem is always if you want to act in the real world. Models are imperfect. Where machine learning, and especially deep learning comes in, is learning from the real-world interactions of the system.”