A new computer program could soon analyze your “selfie” videos for clues to mental health.
Apps to monitor people’s health can track the spread of the flu, for example, or provide guidance on nutrition and managing mental health issues.
Jiebo Luo, professor of computer science at the University of Rochester, explains that his team’s approach is to “quietly observe your behavior” while you use the computer or phone as usual.
He adds that their program is “unobtrusive.” Users won’t need to wear special gear, describe their feelings, or add any extra information, he says.
From Tweets to forehead color
For example, the team was able to measure a user's heart rate simply by monitoring subtle changes in the user's forehead color. The system does not grab other data that might be available through the phone, such as the user's location.
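The article doesn't detail the algorithm, but heart-rate-from-video systems typically use remote photoplethysmography: average a color channel over the forehead region in each frame, then find the dominant frequency within a plausible pulse band. A minimal sketch with NumPy, using a synthetic signal in place of real video (the function name and band limits are illustrative, not taken from the researchers' code):

```python
import numpy as np

def estimate_heart_rate(green_means, fps):
    """Estimate pulse (BPM) from the per-frame mean green intensity of a
    forehead region: take the dominant FFT frequency in a plausible
    heart-rate band (0.7-4.0 Hz, i.e. 42-240 BPM)."""
    signal = np.asarray(green_means, dtype=float)
    signal = signal - signal.mean()               # remove the DC offset
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fps)
    power = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= 0.7) & (freqs <= 4.0)        # discard drift and noise
    return 60.0 * freqs[band][np.argmax(power[band])]

# Synthetic stand-in for real video: a 72 BPM pulse (1.2 Hz) plus noise,
# "sampled" at 30 frames per second for 20 seconds.
rng = np.random.default_rng(0)
fps = 30
t = np.arange(fps * 20) / fps
green = 0.02 * np.sin(2 * np.pi * 1.2 * t) + rng.normal(0, 0.01, t.size)
print(round(estimate_heart_rate(green, fps)))
```

A real pipeline would first locate the forehead with a face tracker and filter out motion artifacts, but the frequency-domain step is the core idea.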
The researchers were able to analyze the video data to extract a number of “clues,” such as heart rate, blinking rate, eye pupil radius, and head movement rate. At the same time, the program also analyzed what the users posted on Twitter, what they read, how fast they scrolled, their keystroke rate, and their mouse click rate.
Not every bit of information is treated equally, however: what a user tweets, for example, is given more weight than what the user reads because it is a more direct expression of what that user is thinking and feeling.
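This weighting idea can be illustrated with a toy fusion function. The signal names and weight values below are invented for the example; the actual model and weights are not published in the article:

```python
# Hypothetical weights: content the user authored (tweets) counts for
# more than content they merely read.
SIGNAL_WEIGHTS = {
    "tweet_sentiment": 0.50,   # direct expression of the user's state
    "read_sentiment": 0.10,    # passively consumed content
    "heart_rate": 0.15,
    "blink_rate": 0.10,
    "keystroke_rate": 0.15,
}

def combined_score(signals):
    """Weighted average of the available signal scores (each in [0, 1]);
    signals missing from this session are simply skipped."""
    total = weight_sum = 0.0
    for name, value in signals.items():
        w = SIGNAL_WEIGHTS.get(name, 0.0)
        total += w * value
        weight_sum += w
    return total / weight_sum if weight_sum else 0.0

# A gloomy tweet (0.2) outweighs cheerful reading material (0.8):
print(combined_score({"tweet_sentiment": 0.2, "read_sentiment": 0.8}))
```

Because the weights are renormalized over whatever signals are present, the score stays comparable across sessions where, say, the camera was unavailable.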
Sophie de Oliveira Barata runs the Alternative Limb Project, creating unique prosthetics designed to reflect the wearer's personality
Sophie de Oliveira Barata started her career making realistic-looking artificial limbs for amputees.
But at university she had studied special effects prosthetics for TV and film, and wondered if she could use her skills to make limbs that looked more unusual and "spoke from people's soul".
Sophie set up the Alternative Limb Project and now makes bespoke, design-focused prosthetics from materials such as wood, glass and metal that reflect the wearer's personality and imagination, as well as making ultra-realistic limbs.
Among others, she has designed limbs for model and singer-songwriter Viktoria Modesta and athlete Jo-Jo Cranfield.
Following in the footsteps of Hiroshi Ishiguro's eerily lifelike creations, Toshiba introduced its very own take on the human-looking droid at Japan's CEATEC electronics trade show this week. The communication android has been built to communicate in Japanese sign language, requiring fluid and precise movement of its arms and hands.
The result of an in-house ideas program, the android has the look of a young Japanese woman, complete with blinking eyes and a "warm smile." Its human-like appearance and ability to emulate human expressions come courtesy of work undertaken by aLab Inc. and Osaka University, while the Shibaura Institute of Technology brought driving and sensor technologies to the party. Toshiba used its experience with industrial robots to create a custom algorithm to facilitate the movement of 43 actuators in the robot's joints.
It's still early days for the project, with the signing robot currently capable of simple greetings and phrases only, though Toshiba aims to develop it to the point of serving as a receptionist or exhibition guide within the next year.
There are plans to introduce speech recognition and synthesis technology for natural communication, with development continuing towards introducing a welfare and healthcare service robot for the elderly and folks suffering dementia by 2020, allowing carers or family members to keep watch on loved ones.
Benjamin Wittes and Jane Chong examine how the law will respond as we become more cyborg-like, and the divide between human and machine becomes ever-more unstable. In particular, they consider how the law of surveillance will shift as we develop from humans who use machines into humans who partially are machines or, at least, who depend on machines pervasively for our most human-like activities.
Implant attached to bone in pioneering technique that helps prevent infection and discomfort
Revolutionary technology at a north London hospital has transformed the lives of amputees taking part in a trial by allowing artificial limbs to be attached directly to their skeleton, giving them feeling and mobility far beyond that experienced by people with traditional prosthetics.
Unlike a traditional prosthetic, where a socket is placed over the soft tissue of the stump, Itap (intraosseous transcutaneous amputation prosthesis) involves insertion of a metal implant that forms a direct interface with the bone and protrudes through the skin for the prosthetic to be attached.
If the trial conducted at the Royal National Orthopaedic hospital (RNOH) and the Royal Orthopaedic hospital in Birmingham, which ended in June, is deemed a success, Itap could be rolled out across the UK and internationally through specialist clinics.
Mark O'Leary, 40, from south London, was one of the first of 20 above-the-knee amputees to take part in the trial. He described the change it had made to his life. "Just knowing where my foot is, my ability to know where it is improved dramatically because you can feel it through the bone. A textured road crossing, I can feel that. You essentially had no sensation with a socket and with Itap you can feel everything," he said.
"It's like they've given me my leg back. I know that sounds a bit trite. With this thing I just click the stump on in the morning and I can walk as far as I like, do anything I want within reason. There's no limit."
Using an Android tablet and the video game Angry Birds, children can program a robot to learn new skills.
Because end users can easily program the robot to learn tasks, researchers envision the robot-smart tablet system as a future rehabilitation tool for children with cognitive and motor-skill disabilities.
The researchers paired a small humanoid robot with an Android tablet. Kids teach it how to play Angry Birds by dragging their finger on the tablet to whiz the bird across the screen. The robot watches what happens and records “snapshots” in its memory.
The machine notices where fingers start and stop, and how the objects on the screen move according to each other, while constantly keeping an eye on the score to check for signs of success.
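The loop described above, recording demonstrations as snapshots and judging them by score, can be sketched in a few lines. Everything here (the class name, the data format) is a hypothetical simplification of the researchers' system:

```python
class DemoLearner:
    """Toy learning-from-demonstration: store (start, end, score_gain)
    "snapshots" of a child's drags, then replay the most successful one.
    The interface is a stand-in for the real robot-tablet system."""

    def __init__(self):
        self.snapshots = []

    def observe(self, start, end, score_before, score_after):
        # One snapshot: where the finger started and stopped, and
        # whether the score signalled success.
        self.snapshots.append((start, end, score_after - score_before))

    def best_move(self):
        # Mimic the demonstration that produced the biggest score gain.
        best = max(self.snapshots, key=lambda snap: snap[2])
        return best[0], best[1]

learner = DemoLearner()
learner.observe((10, 200), (80, 120), score_before=0, score_after=0)     # dud shot
learner.observe((10, 200), (95, 140), score_before=0, score_after=5000)  # tower topples
start, end = learner.best_move()
print(start, "->", end)
```

The real system also tracks how on-screen objects move relative to one another, but score-as-reward is the essential feedback signal.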
When it’s the robot’s turn, it mimics the child’s movements and plays the game. If the bird is a dud and doesn’t cause any damage, the robot shakes its head in disappointment. If the building topples and points increase, the eyes light up and the machine celebrates with a happy sound and dance.
Nasa plans to send Google's 3D smartphones into space to function as the "eyes and brains" of free-flying robots inside the Space Station.
The robots, known as Spheres (Synchronised Position Hold, Engage, Reorient, Experimental satellites), currently have limited capabilities.
It is hoped the smartphones, powered by Google's Project Tango, will equip the robots with more functionality.
The robots have been described by experts as "incredibly clever".
When Nasa's robots first arrived at the International Space Station in 2006, they were only capable of precise movements using small jets of CO2, which propelled the devices forwards at around an inch per second.
"We wanted to add communication, a camera, increase the processing capability, accelerometers and other sensors," Spheres project manager Chris Provencher told Reuters.
"As we were scratching our heads thinking about what to do, we realised the answer was in our hands. Let's just use smartphones."
In an attempt to make the robots smarter and of more use to astronauts, engineers at Nasa's Ames Research Centre sent cheap smartphones to the space station, which they had purchased from Best Buy, an American electronics shop.
Astronauts then attached the phones to the Spheres, giving them more visual and sensing capabilities.
As smartphones have become ubiquitous, parents and teachers have voiced concerns that a technology-rich lifestyle is doing youngsters harm. Research on this question is still in its infancy, but other…
IBM's cloud-computing system is making its first foray into food
Watson, a cognitive computing system that can learn and process natural human language, has been one of IBM's most exciting projects of the last decade. Over the past few years, Watson has learned a variety of tasks, from defeating contestants on "Jeopardy" to diagnosing life-threatening diseases. Now the cloud-based system is making its first foray into an industry we can all enjoy: food.
IBM calls it "cognitive cooking," a collaboration with New York's Institute of Culinary Education that uses data to create the best-tasting food possible.
IBM engineers carefully examined flavor compounds in thousands of ingredients, going down to the molecular level to measure the pleasantness of each. Then, using nutritional data from the FDA, they had the chefs at ICE try out the combinations Watson had determined would make for a delicious meal.
My children live in the digital world as much as they live in the real one.
Whether they are chatting to their friends on Xbox Live or FaceTime or viewing their profiles on Instagram, these days it seems that there is always a virtual guest in our house.
Their expectations of life are fundamentally different to mine at their ages - eight and 10. They were among the first generation to swipe a dumb screen and wonder why nothing happened; the first to say when a toy was broken: "Don't worry, we can just download a new one"; and the first to be aware that the real world runs seamlessly into the digital one.
These digital natives understand the etiquette of the digital world - how to text, how to email, how to get wi-fi and how to watch whatever they want, whenever they want. And homework is a whole lot easier now that they have the virtual font of all knowledge at their fingertips - Google.
As the author of the book Growing Up Digital, Don Tapscott has spent a lot of time looking at how the generation born in the age of computing will differ from those before.
"Generation M [mobile] are growing up bathed in bits," he says. "Their brains are actually different."
For him, the way the brain is wired is dictated by how you spend your time.
"My generation grew up watching TV - we were passive recipients. Today children come home and turn on their mobile devices, they are listening to MP3s, chatting to their friends, playing video games - managing all these things at the same time."
This February, we first heard about a "bionic pancreas" that could radically improve the lives of type 1 diabetics. At the time, multi-day trials involving groups of adult and adolescent patients were still yet to occur. Those trials have now taken place, and the results are definitely encouraging.
Being developed by scientists at Boston University and Massachusetts General Hospital, the bionic pancreas is made up of two externally-worn pumps, an app on an iPhone 4s, and a tiny sensor within a needle that's inserted under the skin. Every five minutes, that sensor monitors the glucose levels in the surrounding tissue fluid, and sends the readings to the app. If those levels get too high or too low, the app automatically triggers one or the other of the pumps to release either insulin or its counteracting hormone, glucagon, into the bloodstream.
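The five-minute cycle described above amounts to a closed feedback loop. Here is a deliberately simplified sketch with made-up thresholds; the actual device uses an adaptive dosing algorithm, not a pair of fixed cutoffs:

```python
def control_step(glucose_mg_dl, low=80, high=180):
    """One five-minute step of a toy closed-loop controller. The
    thresholds are invented for illustration only."""
    if glucose_mg_dl > high:
        return "insulin"     # bring high blood glucose back down
    if glucose_mg_dl < low:
        return "glucagon"    # counteract impending hypoglycemia
    return None              # in range: no intervention needed

for reading in (250, 100, 65):
    print(reading, "->", control_step(reading))
```

The point of the two-hormone design is visible even in this sketch: unlike an insulin-only pump, the controller can actively push glucose back up as well as down.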
Ordinarily, diabetics must monitor glucose levels themselves several times a day via fingerstick blood tests. If more insulin is required, it must be either manually injected or pumped into their body.
In the tests, a group of 20 adult diabetics used the bionic pancreas for five days while conducting their usual activities in downtown Boston, and a group of 32 adolescents also tried it out over a five-day period at a youth camp. As a control, both groups were also monitored for a five-day period while only using their regular manual insulin pumps.
The device ended up working even better than expected. The adults required 37 percent fewer interventions for hypoglycemia (low blood glucose), while the youth experienced an almost twofold reduction. Additionally, the adults saw a twofold reduction in the amount of time spent in a hypoglycemic state.
Larger multi-center trials are now planned to take place soon.
The first brain-machine interface system capable of learning commands has been developed in Japan.
The system, designed to help people with severe motion or speech disabilities, is the first of its kind to address the excessive mental load existing systems place on a user. Every time the user wants to perform even a simple action, they have to focus their mental energy to deliver the message, which can be very tiring.
“We give learning capabilities to the system by implementing intelligent algorithms, which gradually learn user preferences,” said Christian Isaac Peñaloza Sanchez, a PhD candidate at the University of Osaka, Japan.
“At one point it can take control of the devices without the person having to concentrate much to achieve this goal," he said.
For the past three years, Peñaloza Sanchez has been developing the system which uses electrodes attached to the person’s scalp to measure brain activity in the form of EEG signals. The signals show patterns related to various thoughts and the general mental state of the user as well as the level of concentration.
Currently, the system can learn up to 90 per cent of common instructions such as controlling a wheelchair and navigating it around a room.
After the system learns the command from the user, the action could be triggered either by pressing a button or by a quick thought. While performing the automated action, the system looks for the so-called error-related negativity signal – a reaction in a human brain when an incorrect response is initiated – for example if the system opens a window instead of turning on the TV.
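One way to picture this feedback loop: automated actions that trigger the error-related negativity signal lose learned confidence, while unchallenged ones gain it. The update rule and threshold below are invented for illustration; the team's actual learning algorithm is not described in the article:

```python
class AdaptiveBCI:
    """Toy model of the feedback loop: an automated action that triggers
    the error-related negativity (ERN) signal loses learned confidence,
    while an unchallenged action gains it. (ERN detection itself, from
    the EEG stream, is assumed to happen elsewhere.)"""

    def __init__(self, threshold=0.8):
        self.confidence = {}       # action -> learned confidence in [0, 1]
        self.threshold = threshold

    def needs_confirmation(self, action):
        # Low-confidence actions still require a button press or a
        # deliberate thought from the user before executing.
        return self.confidence.get(action, 0.0) < self.threshold

    def feedback(self, action, ern_detected):
        c = self.confidence.get(action, 0.0)
        # Halve confidence on an ERN; otherwise move it toward 1.
        self.confidence[action] = c * 0.5 if ern_detected else c * 0.7 + 0.3

bci = AdaptiveBCI()
for _ in range(6):                       # six uncontested automations
    bci.feedback("turn_on_tv", ern_detected=False)
print(bci.needs_confirmation("turn_on_tv"))
```

After half a dozen uncontested runs the hypothetical confidence crosses the threshold and the action becomes fully automatic, which mirrors the reduction in mental effort the researchers report.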
"We've had pretty good results in various experiments with multiple people who have participated as volunteers in our in vivo trials,” said Peñaloza Sanchez.
“We found that user mental fatigue decreases significantly and the level of learning by the system increases substantially."
After spending a week walking the showroom floors of CES, a wearable claiming to change your mood is probably going to activate your BS sensors. But today our demo of the Thync wearable was the rare CES meeting that's everything it claims to be – possibly more. Your neighborhood drug dealer might want to start looking for a new line of work.
The Thync has some similarities to TENS units (like those found in chiropractors' offices), but instead of slapping pads onto your lower back, you place them on your head. It uses "neurosignaling" to either calm you down or energize you.
As the company explains, "Neurosignaling uses electronic or ultrasonic waveforms to signal neural pathways in the brain. When specific pathways are stimulated, they trigger a shift in your state of mind or energy level."
Are your BS sensors going off yet? If so, we don't blame you. The technology world is full of stuff that sounds almost exactly like this, and most of it is about as authentic as Milli Vanilli.
But Thync works. During our demo with the Thync team, I tried the calming mode followed by the energized mode, and it was like drugs – minus all the bad stuff. More specifically, the calming mode was much like smoking a joint (minus the munchies, bloodshot eyes and memory loss). And though we were expecting the energizing mode to be similar to caffeine, it was more like the effects of Ephedrine (I used it a few times back in the 90s, before it started killing athletes, when it was sold over-the-counter). Rather than an antsy, over-caffeinated state, I found it to be more like a stimulated clarity – like a veil of fuzzy grogginess that I wasn't even aware of had been lifted.
What if you could ask your smartphone for diet and exercise advice, the same way you ask Siri for driving directions?
Biotechnology company Pathway Genomics will soon offer an app that promises to do just that. “It’s meant to allow patients to be the CEO of their own health,” says Pathway Genomics CEO Jim Plante. “It will provide genomic information. It will pull in the patients health records, connect to activity monitors like the Fitbit.”
It will also tap into IBM Watson, the machine learning system based on the supercomputer the company used to win at TV Jeopardy. The Watson online service contains a wealth of information from sources such as medical text books as well as the latest medical research journals, and IBM will use this to help power the Pathway Genomics app, after investing an undisclosed amount in the startup.
The app is just one of many—oh, so many—apps and devices aiming to improve our health through mobile and even wearable technology. Google and Apple are inviting developers to build health tools atop its wearable hardware, and various independent projects are moving in the same direction.
Though this is the first time Watson has ventured into consumer applications, it isn't new to healthcare. One of the first uses outside of Jeopardy was at Cedars-Sinai Hospital's Samuel Oschin Comprehensive Cancer Institute, where doctors were able to use the supercomputer to help diagnose illnesses. But the Panorama app will be the first time patients, as opposed to doctors, will have the chance to ask questions of the Watson platform directly.
In a world where economy-class seats are getting thinner and lavatories are shrinking, any flight longer than an hour can feel like a traveling prison. Aircraft manufacturer Airbus is abetting the shift, but a recent patent filing shows it hasn’t forgotten about you, the passenger who actually has to sit in these miserable flying cells. It’s considering helmets that will let you forget you’re in an airplane at all.
Flying can be boring or stressful, which is why airlines provide music, movies and bad TV. The next step appears to be thoroughly immersing passengers in what they’re watching. “The helmet in which the passenger houses his/her head offers him/her sensorial isolation with regard to the external environment,” reads the patent filing.
The helmets feature headphones to provide music. You can watch movies (perhaps in 3D) on the “opto-electronic” screen or possibly through “image diffusion glasses.” If you want to get some work done, turn on the virtual keyboard, which appears on your tray, don a pair of motion capture gloves, and type away. The helmet could even pipe in different odors for an olfactory treat, and the whole thing would be nicely ventilated.
The MICA bracelet displays messages and calendar alerts.
If you love jewelry, Intel has unveiled a sparkling bracelet that’s also a stand-alone message display device.
Unveiled for New York Fashion Week, My Intelligent Communication Accessory, or MICA, has glamorous looks as well as 3G cellular connectivity, so it doesn’t need to be tethered to a smartphone.
Designed by Humberto Leon and Carol Lim of fashion house Opening Ceremony, MICA is a cuff-style accessory covered with snakeskin as well as semiprecious stones such as obsidian and lapis. It will be available in two styles, one with white snakeskin and the other with black snakeskin, each with different stones.
The 1.6-inch sapphire-glass touchscreen can display SMS messages relayed through the bracelet’s Intel XMM6321 3G cellular radio. It can also display calendar alerts.
The bracelet will be sold as an Opening Ceremony product. Its weight and price have not been revealed yet, but it will be sold through some Barneys and Opening Ceremony stores by the December holiday season.
The chipmaker has been emphasizing wearables sold through other companies as mobile technology has put PCs in the shadows in recent years.
Intel announced the collaboration with Opening Ceremony at CES 2014, where it also showed off smart earbuds that can measure a runner’s heart rate.
In August, SMS Audio announced biometric headphones based on Intel’s technology.
The MICA bracelet also follows Intel’s acquisition of health-tracking wristband maker Basis Science in March.
North Carolina State University researchers have developed methods for electronically manipulating the flight muscles of moths and for monitoring the electrical signals that moths use to control those muscles. The goal: remotely-controlled moths, or “biobots,” for use in emergency response, such as search and rescue operations.
“The idea would be to attach sensors to moths … to create a flexible, aerial sensor network that can identify survivors or public health hazards in the wake of a disaster,” said Alper Bozkurt, PhD, an assistant professor of electrical and computer engineering at NC State and co-author of a JOVE paper on the work.
Bozkurt, with Amit Lal, PhD, of Cornell University, previously developed a method for attaching electrodes to a moth during its pupal stage, when the caterpillar is in a cocoon undergoing metamorphosis. Now, Bozkurt’s research team wants to find out precisely how a moth coordinates its muscles during flight.
Patients are more willing to disclose personal information to virtual humans, likely because a virtual human is perceived as lacking the capacity to judge them.
The findings show promise for people suffering from post-traumatic stress and other mental anguish, says Gale Lucas, a social psychologist at University of Southern California’s Institute for Creative Technologies.
“Today there’s no legislation regarding how much intelligence a machine can have, how interconnected it can be. If that continues, look at the exponential trend. We will reach the singularity in the timeframe most experts predict. From that point on you’re going to see that the top species will no longer be humans, but machines.”
So, Facebook has been in the dock after publishing details of a supposedly sinister experiment it oversaw several years ago. It involved monitoring the moods of around 700,000 users based on their posts. The research also established that it was possible to affect those moods by posting positive or negative content in the users' news feeds.
The reaction has been highly negative, with many people raising concerns about the implications for privacy online. Whether or not you think they are right, they could probably do with an update on what has been happening online these past few years.
We are entering into an era where data is king, where our every move, our every emotion and every contact can be tracked. With the increasing analysis of social media activity, there is often very little that we do that can be hidden from the organisations we interact with online.
So long as an online company can drop a cookie onto our machine, it can track our behaviour online. This now includes logging how we react to advertising material and especially what makes us click on the content. Increasingly they are learning our behaviour as a result.
The need to gain permission, of the kind research teams must obtain when involving human participants, has been slowly eroding. Such tracking is coming to be seen as a natural extension of existing practices, where advertising content is focused on target groups.
Users often freely offer their data to the internet, to be used in ways that, frequently, they would never expect. For example a tweet on a local event will time-stamp where a person was at a given time. It may reveal information around their movements and even perhaps who they had contact with along the way.
A system based on face recognition could put an end to forgotten passwords or PINs and offer a safer way to sign in to accounts.
Humans can recognize familiar faces across a wide range of images, even when their image quality is poor. In contrast, recognition of unfamiliar faces is tied to a specific image—so much so that different photos of the same unfamiliar face are often thought to be different people.
Rob Jenkins of the psychology department at the University of York is lead author of a paper that suggests the new system, called Facelock, exploits this psychological effect to create a new type of authentication system. The research is published in the open-access journal PeerJ.
Familiarity with a particular face determines a person’s ability to identify it across different photographs; as a result, a set of faces known only to a single individual can be used to create a personalized “lock.”
Access is then granted to anyone who demonstrates recognition of the faces across images, and denied to anyone who does not.
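The scheme can be pictured as a challenge-response check: each challenge grid contains exactly one face familiar to the account owner, and access requires picking the familiar face every time. The data layout below is a guess at the idea for illustration, not the Facelock implementation:

```python
def authenticate(challenges, responses):
    """Grant access only if the user picks the target (familiar) face in
    every challenge grid. Because a different photo of each familiar face
    can be shown on every attempt, knowing a single image of the account
    owner's acquaintances is not enough to break in."""
    if len(responses) != len(challenges):
        return False
    return all(resp == target
               for (faces, target), resp in zip(challenges, responses))

# Two hypothetical three-face grids; the second element of each tuple
# marks the face familiar to the genuine user.
grids = [
    (["face_a3", "face_b1", "face_c9"], "face_b1"),
    (["face_d2", "face_e7", "face_f4"], "face_f4"),
]
print(authenticate(grids, ["face_b1", "face_f4"]))  # genuine user
print(authenticate(grids, ["face_a3", "face_f4"]))  # attacker's guess
```

The security rests on the psychology described above: to an attacker, every photo of an unfamiliar target face looks like a different stranger, so passing several grids in a row by guessing is unlikely.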
Many robotics researchers mimic living animals in their creations, such as cats, dogs, birds, kangaroos and humans. Fine and good. The Korea Advanced Institute of Science and Technology (KAIST) would rather reverse engineer dinosaurs.
KAIST’s Raptor robot was inspired by the velociraptor, an extinct six-foot killing machine known for its role as wily pack hunter in Jurassic Park. The robo-Raptor isn’t hunting anyone (yet)—but it’s fast. Very fast.
In a recently released video, KAIST shows Raptor running the treadmill at a leg-blurring top speed of 46 kph (28.5 mph). That’s faster than the fastest human sprinter (Usain Bolt), and matches Boston Dynamics’ fleet-footed robot Cheetah. (Raptor has momentarily achieved even higher speeds of 48 kph, or 29.8 mph.)