Using an Android tablet and the video game Angry Birds, children can program a robot to learn new skills.
Because end users can easily program the robot to learn tasks, researchers envision the robot-smart tablet system as a future rehabilitation tool for children with cognitive and motor-skill disabilities.
The researchers paired a small humanoid robot with an Android tablet. Kids teach it how to play Angry Birds by dragging their finger on the tablet to whiz the bird across the screen. The robot watches what happens and records “snapshots” in its memory.
The machine notices where fingers start and stop, and how the objects on the screen move in relation to one another, all while keeping an eye on the score to check for signs of success.
When it’s the robot’s turn, it mimics the child’s movements and plays the game. If the bird is a dud and doesn’t cause any damage, the robot shakes its head in disappointment. If the building topples and points increase, the eyes light up and the machine celebrates with a happy sound and dance.
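The loop the article describes – record a demonstration as a snapshot, replay the most successful one, react to the score – can be sketched in a few lines of Python. The class and field names here are illustrative, not taken from the researchers' actual system.

```python
from dataclasses import dataclass

@dataclass
class Snapshot:
    """One recorded demonstration: where the drag started and
    ended, and the score change it produced."""
    start: tuple          # finger-down position on the tablet
    end: tuple            # finger-up position
    score_delta: int      # change in game score after the shot

class DemoLearner:
    """Stores a child's demonstrations, imitates the best one,
    and reacts to the outcome."""

    def __init__(self):
        self.memory = []

    def observe(self, start, end, score_delta):
        # "Snapshot" each demonstration into memory.
        self.memory.append(Snapshot(start, end, score_delta))

    def play(self):
        # Imitate the demonstration that earned the most points.
        best = max(self.memory, key=lambda s: s.score_delta)
        return best.start, best.end

    def react(self, score_delta):
        # Celebrate on success, shake head on a dud.
        return "celebrate" if score_delta > 0 else "shake_head"
```

The real system presumably tracks far richer state (object motion between frames, for instance), but the record-imitate-react structure is the same.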
Nasa plans to send Google's 3D smartphones into space to function as the "eyes and brains" of free-flying robots inside the Space Station.
The robots, known as Spheres (Synchronised Position Hold, Engage, Reorient, Experimental satellites), currently have limited capabilities.
It is hoped the smartphones, powered by Google's Project Tango, will equip the robots with more functionality.
The robots have been described by experts as "incredibly clever".
When Nasa's robots first arrived at the International Space Station in 2006, they were only capable of precise movements using small jets of CO2, which propelled the devices forwards at around an inch per second.
"We wanted to add communication, a camera, increase the processing capability, accelerometers and other sensors," Spheres project manager Chris Provencher told Reuters.
"As we were scratching our heads thinking about what to do, we realised the answer was in our hands. Let's just use smartphones."
In an attempt to make the robots smarter and of more use to astronauts, engineers at Nasa's Ames Research Centre sent cheap smartphones to the space station, which they had purchased from Best Buy, an American electronics shop.
Astronauts then attached the phones to the Spheres, giving them more visual and sensing capabilities.
As smartphones have become ubiquitous, parents and teachers have voiced concerns that a technology-rich lifestyle is doing youngsters harm. Research on this question is still in its infancy, but other…
IBM's cloud-computing system is making its first foray into food
Watson, a cognitive computing system that can learn and process natural human language, has been one of IBM's most exciting projects of the last decade. Over the past few years, Watson has learned a variety of tasks, from defeating contestants on "Jeopardy" to diagnosing life-threatening diseases. Now the cloud-based system is making its first foray into an industry we can all enjoy: food.
IBM calls it "cognitive cooking," a collaboration with New York's Institute of Culinary Education that uses data to create the best-tasting food possible.
IBM engineers carefully examined flavor compounds in thousands of ingredients, going down to the molecular level to measure the pleasantness of each. Then, using nutritional data from the FDA, they had the chefs at ICE try out the combinations Watson had determined would make for a delicious meal.
My children live in the digital world as much as they live in the real one.
Whether they are chatting to their friends on Xbox Live or FaceTime or viewing their profiles on Instagram, these days it seems that there is always a virtual guest in our house.
Their expectations of life are fundamentally different to mine at their ages - eight and 10. They were among the first generation to swipe a dumb screen and wonder why nothing happened; the first to say when a toy was broken: "Don't worry, we can just download a new one"; and the first to be aware that the real world runs seamlessly into the digital one.
These digital natives understand the etiquette of the digital world - how to text, how to email, how to get wi-fi and how to watch whatever they want, whenever they want. And homework is a whole lot easier now that they have the virtual font of all knowledge at their fingertips - Google.
As the author of the book Growing Up Digital, Don Tapscott has spent a lot of time looking at how the generation born in the age of computing will differ from those before.
"Generation M [mobile] are growing up bathed in bits," he says. "Their brains are actually different."
For him, the way the brain is wired is dictated by how you spend your time.
"My generation grew up watching TV - we were passive recipients. Today children come home and turn on their mobile devices, they are listening to MP3s, chatting to their friends, playing video games - managing all these things at the same time."
This February, we first heard about a "bionic pancreas" that could radically improve the lives of type 1 diabetics. At the time, multi-day trials involving groups of adult and adolescent patients had yet to occur. Those trials have now taken place, and the results are definitely encouraging.
Being developed by scientists at Boston University and Massachusetts General Hospital, the bionic pancreas is made up of two externally-worn pumps, an app on an iPhone 4s, and a tiny sensor within a needle that's inserted under the skin. Every five minutes, that sensor monitors the glucose levels in the surrounding tissue fluid, and sends the readings to the app. If those levels get too high or too low, the app automatically triggers one or the other of the pumps to release either insulin or its counteracting hormone, glucagon, into the bloodstream.
Ordinarily, diabetics must monitor glucose levels themselves several times a day via fingerstick blood tests. If more insulin is required, it must be either manually injected or pumped into their body.
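The five-minute sense-and-dose cycle described above amounts to a simple threshold controller. The sketch below is hypothetical: the glucose thresholds and function names are assumptions for illustration, not the device's actual parameters.

```python
# Illustrative closed-loop logic: every five minutes, read a glucose
# value and trigger a pump if it drifts outside a target range.
# Thresholds are assumed for the sketch, not the device's real ones.

TARGET_LOW = 70    # mg/dL; below this, release glucagon (assumed)
TARGET_HIGH = 180  # mg/dL; above this, release insulin (assumed)

def control_step(glucose_mg_dl: float) -> str:
    """One five-minute cycle: decide which pump (if any) to trigger."""
    if glucose_mg_dl < TARGET_LOW:
        return "glucagon"   # counteracting hormone raises glucose
    if glucose_mg_dl > TARGET_HIGH:
        return "insulin"    # lowers glucose
    return "none"           # in range: no intervention
```

The appeal over manual fingerstick testing is exactly this automation: the decision runs every five minutes around the clock, instead of a handful of times a day.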
In the tests, a group of 20 adult diabetics used the bionic pancreas for five days while going about their usual activities in downtown Boston, and a group of 32 adolescents tried it out over a five-day period at a youth camp. As a control, both groups were also monitored for a five-day period while using only their regular manual insulin pumps.
The device ended up working even better than expected. The adults required 37 percent fewer interventions for hypoglycemia (low blood glucose), while the adolescents saw an almost twofold reduction. Additionally, the adults saw a twofold reduction in the amount of time spent in a hypoglycemic state.
Larger multi-center trials are now planned to take place soon.
The first brain-machine interface system capable of learning commands has been developed in Japan.
The system, designed to help people with severe motion or speech disabilities, is the first of its kind to address the excessive mental load existing systems place on the user. Every time the user wants to perform even a simple action, they have to focus their mental energy on delivering the message, which can be very tiring.
“We give learning capabilities to the system by implementing intelligent algorithms, which gradually learn user preferences,” said Christian Isaac Peñaloza Sanchez, a PhD candidate at the University of Osaka, Japan.
“At one point it can take control of the devices without the person having to concentrate much to achieve this goal," he said.
For the past three years, Peñaloza Sanchez has been developing the system which uses electrodes attached to the person’s scalp to measure brain activity in the form of EEG signals. The signals show patterns related to various thoughts and the general mental state of the user as well as the level of concentration.
Currently, the system can learn up to 90 per cent of common instructions, such as controlling a wheelchair and navigating it around a room.
After the system learns the command from the user, the action could be triggered either by pressing a button or by a quick thought. While performing the automated action, the system looks for the so-called error-related negativity signal – a reaction in a human brain when an incorrect response is initiated – for example if the system opens a window instead of turning on the TV.
"We've had pretty good results in various experiments with multiple people who have participated as volunteers in our in vivo trials,” said Peñaloza Sanchez.
“We found that user mental fatigue decreases significantly and the level of learning by the system increases substantially."
Farms in upstate New York and elsewhere are using automatic milkers that scan and map the underbellies of cows, extract the milk, and monitor its quality, without the use of human hands.
The cows seem to like it, too.
Robots allow the cows to set their own hours, lining up for automated milking five or six times a day — turning the predawn and late-afternoon sessions around which dairy farmers long built their lives into a thing of the past.
With transponders around their necks, the cows get individualized service. Lasers scan and map their underbellies, and a computer charts each animal’s “milking speed,” a critical factor in a 24-hour-a-day operation.
The robots also monitor the amount and quality of milk produced, the frequency of visits to the machine, how much each cow has eaten, and even the number of steps each cow has taken per day, which can indicate when she is in heat.
“The animals just walk through,” said Jay Skellie, a dairyman from Salem, N.Y., after watching a demonstration. “I think we’ve got to look real hard at robots.”
Many of those running small farms said the choice of a computerized milker came down to a bigger question: whether to upgrade or just give up.
In the movie “Her” (starring Joaquin Phoenix and Amy Adams), the lead character falls in love with a Siri-like virtual assistant.
The only difference between Siri and “Her” is that the movie version is simply more advanced -- as advanced as such assistants will inevitably get. It’s only a matter of time before Siri, Google Now, Cortana and others can all pass the Turing test.
In the movie, Joaquin Phoenix's character develops what he believes is a satisfying relationship with the virtual assistant entirely through a Bluetooth earpiece that fits almost entirely into his ear. He puts it in his ear and forgets about it. He talks, the assistant listens. The assistant talks, he listens. They have conversations.
If you can imagine sufficiently advanced A.I., you can imagine that this interface to the world of computers and the Internet is just about all you would need. Think about what you do with computers -- browse the Internet, use social networks, make calls, buy things, schedule meetings, maintain contacts, create business reports. Almost all of it could, and I believe will, be handled by talking to a virtual assistant.
A Brazilian neuroscientist says brain-controlled robotics will let the paralyzed walk again.
In less than 60 days, Brazil will begin hosting soccer’s 2014 World Cup, even though workers are still hurrying to pour concrete at three unfinished stadiums. At a laboratory in São Paulo, a Duke University neuroscientist is in his own race with the World Cup clock. He is rushing to finish work on a mind-controlled exoskeleton that he says a paralyzed Brazilian volunteer will don, navigate across a soccer pitch using his or her thoughts, and use to make the ceremonial opening kick of the tournament on June 12.
The project, called Walk Again, is led by Miguel Nicolelis, a 53-year-old native of Brazil and one of the biggest names in neuroscience. If it goes as planned, the kick will be a highly public display of research into brain-machine interfaces, a technology that aims to help paralyzed people control machines with their thoughts and restore their ability to get around.
“It’s going to be like putting a man on the moon—it’s conquering a level of audacity and innovation that the people outside Brazil aren’t used to associating with Brazil,” Nicolelis has told audiences. The kick, he has said, “will inaugurate a new era of neuroscience, [that of] neuroengineering.”
Tauriq Moosa: Sex robots, as far-fetched as they may seem, could end up being commonplace. A mature response to this would be best all round
Consenting adults’ private activities seem to get a lot of other people very cross. Almost nowhere is this more pronounced than activities involving sex: the position, placement and management of people’s genital activities seem to keep a lot of other adults awake – but in an unhealthy, conservative way.
Many people don’t like two men doing romantic things together; many dislike women doing things too; and even if it’s the “proper” combination of sexes, there are rules about monogamy and marriage and money and so forth – that must not be violated, lest you incur the wrath of judgmental columnists and incomprehensible comment sections (or, unfortunately, the law itself).
Even otherwise progressive individuals are troubled by things such as non-monogamous relationships, child-free people (mostly child-free women, because wombs must always be filled with future babies, apparently), and men using sex toys.
So when considering, for example, sex robots, we should expect hatred, antagonism, and judgement. That attitude, in particular and in general toward adult consensual sex, should change. We can use sex robots as a good case-study to demonstrate why.
Patients are more willing to disclose personal information to virtual humans, likely because they don't have the capacity to judge.
The findings show promise for people suffering from post-traumatic stress and other mental anguish, says Gale Lucas, a social psychologist at University of Southern California’s Institute for Creative Technologies.
“Today there’s no legislation regarding how much intelligence a machine can have, how interconnected it can be. If that continues, look at the exponential trend. We will reach the singularity in the timeframe most experts predict. From that point on you’re going to see that the top species will no longer be humans, but machines.”
So, Facebook has been in the dock after publishing details of a supposedly sinister experiment it oversaw several years ago. It involved monitoring the moods of around 700,000 users based on their posts. The research also established that it was possible to affect those moods by posting positive or negative content in the users' news feeds.
The reaction has been highly negative, with many people raising concerns about the implications for privacy online. Whether or not you think they are right, they could probably do with an update on what has been happening online these past few years.
We are entering into an era where data is king, where our every move, our every emotion and every contact can be tracked. With the increasing analysis of social media activity, there is often very little that we do that can be hidden from the organisations we interact with online.
So long as an online company can drop a cookie onto our machine, it can track our behaviour online. This now includes logging how we react to advertising material and especially what makes us click on the content. Increasingly they are learning our behaviour as a result.
The need to gain permission, of the kind research teams must obtain when their work involves human participants, has been slowly eroding. This sort of tracking is coming to be seen as a natural extension of existing practices, where advertising content is focused on target groups.
Users often freely offer their data to the internet, to be used in ways that, frequently, they would never expect. For example a tweet on a local event will time-stamp where a person was at a given time. It may reveal information around their movements and even perhaps who they had contact with along the way.
A system based on face recognition could put an end to forgotten passwords or PIN numbers and offer a safer way to sign in to accounts.
Humans can recognize familiar faces across a wide range of images, even when image quality is poor. In contrast, recognition of unfamiliar faces is tied to a specific image – so much so that different photos of the same unfamiliar face are often thought to show different people.
Rob Jenkins of the psychology department at the University of York is lead author of a paper that suggests the new system, called Facelock, exploits this psychological effect to create a new type of authentication system. The research is published in the open-access journal PeerJ.
Familiarity with a particular face determines a person’s ability to identify it across different photographs and as a result a set of faces that are known only to a single individual can be used to create a personalized “lock.”
Access is then granted to anyone who demonstrates recognition of the faces across images, and denied to anyone who does not.
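A minimal sketch of that challenge-grid idea, assuming a Facelock-style test in which one photo of a face only the account owner knows is hidden among unfamiliar decoys. All names, the grid size, and the pass criterion here are hypothetical, not taken from the published system.

```python
import random

def build_challenge(familiar_faces, decoy_faces, grid_size=9):
    """Compose one challenge grid: one photo of a face known only
    to the account owner, mixed with unfamiliar decoy faces."""
    target = random.choice(familiar_faces)
    grid = random.sample(decoy_faces, grid_size - 1) + [target]
    random.shuffle(grid)
    return grid, target

def authenticate(challenges, answers):
    """Grant access only if the target face was picked out of
    every grid; a stranger should fail at least one."""
    return all(target == answer
               for (_, target), answer in zip(challenges, answers))
```

The psychology does the security work: the owner recognizes the target face in any photo of it, while an attacker sees nine equally unfamiliar strangers in each grid.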
Many robotics researchers mimic living animals in their creations, such as cats, dogs, birds, kangaroos and humans. Fine and good. The Korea Advanced Institute of Science and Technology (KAIST) would rather reverse-engineer dinosaurs.
KAIST’s Raptor robot was inspired by the velociraptor, an extinct six-foot killing machine known for its role as wily pack hunter in Jurassic Park. The robo-Raptor isn’t hunting anyone (yet)—but it’s fast. Very fast.
In a recently released video, KAIST shows Raptor running the treadmill at a leg-blurring top speed of 46 kph (28.5 mph). That’s faster than the fastest human sprinter (Usain Bolt), and matches Boston Dynamics’ fleet-footed robot Cheetah. (Raptor has momentarily achieved even higher speeds of 48 kph, or 29.8 mph.)
Engineers at Monash University's Department of Electrical and Computer Systems Engineering (ECSE) have modeled the world’s first “spaser” (surface plasmon amplification by stimulated emission of radiation) to be made completely out of carbon.
Spasers are analogous to lasers, but generate surface plasmons (coherent electron oscillations) instead of photons.
PhD student and lead researcher Chanaka Rupasinghe said the modeled spaser design using carbon would offer many advantages. “Other spasers designed to date are made of gold or silver nanoparticles and semiconductor quantum dots, while our device would be comprised of a graphene resonator and a carbon nanotube gain element,” he said.
These materials are more than 100 times stronger than steel, can conduct heat and electricity much better than copper, and can withstand high temperatures, he noted.
Any mention of cyborgs or superpowers evokes fantastical images from the realms of science fiction and comic books. Our visions of humans with enhanced capabilities are borne of our imaginations and the stories we tell. In reality, though, enhanced humans already exist ... and they don't look like Marvel characters. As different human enhancement technologies advance at different rates, they bleed into society gradually and without fanfare. What's more, they will increasingly necessitate discussion about areas that are often overlooked – what are the logistics and ethics of being superhuman? Gizmag spoke to a number of experts to find out.
Our natural tendency is to focus on the functionality of enhanced humans. Abilities like super-strength, flight or telepathy seem so far removed from that of which we're capable and so desirable that it's understandable for us to focus on these possibilities. The individual, social and ethical consequences of enhanced humans are considered far less in popular culture, however.
"People tend to imagine the current state of human enhancement as either much more advanced or retarded than it really is," Steve Fuller, Auguste Comte Chair in Social Epistemology in the Department of Sociology at the University of Warwick, tells Gizmag. "I realize that this sounds paradoxical, but generally speaking it helps to explain the curious blend of impatience and disappointment that surrounds the topic. This simply reflects the fact that people know more about human enhancement from its own hype and science-fictional representations – which can be positive or negative – than from what's actually available on the ground."
Researchers at Lancaster University, UK, have taken a cue from the way the human heart and lungs constantly communicate with each other to devise an innovative, highly flexible encryption algorithm that they claim can't be broken using the traditional methods of cyberattack.
Information can be encrypted with an array of different algorithms, but the question of which method is the most secure is far from trivial. Such algorithms need a "key" to encrypt and decrypt information; the algorithms typically generate their keys using a well-known set of rules that can only admit a very large, but nonetheless finite number of possible keys. This means that in principle, given enough time and computing power, prying eyes can always break the code eventually.
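The point about finite key spaces can be made concrete with a deliberately tiny toy cipher: with only 26 possible keys, trying them all is trivial. This is not the Lancaster scheme, just an illustration of the brute-force principle.

```python
# A Caesar shift has a key space of exactly 26 keys, so exhaustive
# search always succeeds. Modern ciphers have astronomically larger
# key spaces, but the same logic applies in principle.

def toy_encrypt(plaintext: str, key: int) -> str:
    """Shift each lowercase letter forward by `key` positions."""
    return "".join(chr((ord(c) - 97 + key) % 26 + 97) for c in plaintext)

def brute_force(ciphertext: str, crib: str):
    """Try every possible key; return the one whose decryption
    contains a known word (a 'crib')."""
    for key in range(26):
        guess = toy_encrypt(ciphertext, (26 - key) % 26)  # undo the shift
        if crib in guess:
            return key, guess
    return None
```

Given enough time and computing power, the same exhaustive search works against any finite key space; a scheme that can generate an unlimited number of keys closes off exactly this avenue of attack.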
The researchers, led by Dr. Tomislav Stankovski, created an encryption mechanism that can generate a truly unlimited number of keys, which they say vastly increases the security of the communication. To do so, they took inspiration from the anatomy of the human body.
Imogen Heap and her team have developed gloves that allow you to make music through simple gestures.
With most concerts, what you see is what you hear: a guitarist cozying up to a speaker for feedback, a drummer tapping the hi-hat, a singer breathing into the mic. There’s a visual, interactive element to the show that plays a huge role in how much fun it is for the audience.
Not all musicians have that luxury. Go to an electronic music concert and most of the time you’ll find the artist hunched over a computer, turning knobs and poking buttons. You hear sounds, but rarely do you know how they’re made or where they’re coming from. At best, this effect is mysterious; at worst, it’s boring. “When you see a musician on a laptop, they’re often doing amazing things, you just have no idea because it’s on a screen,” says Imogen Heap. “Usually it just looks like they’re sending an email.”