Stanford researchers, working with Google and NVIDIA, have created a new neural network system for machine learning that is six times the size of the unit built last year that taught itself how to recognize cats on the internet.
"In Doug Engelbart's words, Collective IQ is a measure of how well people can work on important problems and opportunities collectively – how quickly and intelligently they can anticipate or respond to a situation, leveraging their collective perception, memory, insight, planning, reasoning, foresight, and experience into applicable knowledge. Collective IQ is ultimately a measure of effectiveness. It's also a measure of how effective they are at tackling the complex, urgent problem of how to raise their Collective IQ to the highest potential, so they will be that much more effective at solving complex, urgent problems. As the rate and scale of change around the world increases exponentially, so must our collective ability to dramatically increase our Collective IQ to stay ahead of the curve and thrive."
I’ve never been a big believer in shaping eras of technology into categorical boxes and branding those groupings or time frames with catchy monikers. Labeling any technological period is a difficult and risky proposition: the label can constrain how the era is truly defined and can introduce significant inaccuracies. However, when discussing the evolution of the internet, one can easily recall the industry-derived labels that mark the major advancements in the maturation of the world wide web.
Canadian geek artist Steve Mann has augmented his vision with wearable computers for most of his life. He invented his "smart pants" in high school, perfected his body tech rig at MIT, and now has one eye that's a camera.
Mann isn't exactly a cyborg - he doesn't control his augmentations with his mind, and he doesn't require them to survive. But this man's lifelong experiment with wearable computers makes him the first of a new breed. He's the beginning of what Neal Stephenson dubbed "gargoyles," people whose entire bodies have been augmented to record the world around them and zap the information back to a server.
"The traditional internet is oriented towards person-to-person connection, whereas the Internet of Things is oriented towards connections between inanimate objects. As such, the Internet of Things covers a larger range of connections and involves more semantics. Internet and telecom networks are focused on information transfer, while the Internet of Things is focused on information services. By combining sensor networks, the Internet, telecom networks, and cloud computing platforms, the Internet of Things can sense, recognize, affect, and control the physical world. The physical world can be unified with the virtual world and human perception. This opens up a whole new media market that has yet to be explored to see what the killer applications will be."
The ancient Chinese saying “May you live in interesting times” is upon us. We are in the midst of a new revolution fueled by advancements in the Internet and technology. Information is now abundant, and social interaction has reached a colossal scale. Within the span of just one generation, the availability of information and our access to it has shifted dramatically from scarcity to surplus. What humans will do, or try to do, with such a powerful surplus of information is the main topic of this article. First, let’s understand what brought us to this current state.
We all rate user-generated content on the web all the time: videos, blog posts, pictures. These ratings usually serve to identify content that we might like. In online innovation communities such as Dell’s IdeaStorm or My Starbucks Idea those ratings serve a different purpose: identify the “best” idea to be implemented by the host organization (there is a whole other argument that it is only about marketing and not about the “ideas” but let’s not go down that road). So the question arises: how can we best design collective intelligence mechanisms for idea selection in innovation communities?
In a paper presented at the last ICIS conference, a group of researchers from TUM (Germany) compared three different rating mechanisms against a baseline expert rating in a field experiment (n = 313).
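A natural way to compare such mechanisms is to ask how closely each one’s community scores agree with the expert baseline, for instance via rank correlation. The sketch below illustrates that evaluation pattern with invented data and mechanism names; it is not the TUM study’s actual method or results.

```python
# Hypothetical sketch: scoring how well several community rating
# mechanisms agree with an expert baseline, using Spearman rank
# correlation. All scores and mechanism names here are invented.

def ranks(values):
    """Return the rank of each value (1 = smallest), averaging ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # find the block of tied values starting at position i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank for the tied block
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(a, b):
    """Spearman rank correlation between two equal-length score lists."""
    ra, rb = ranks(a), ranks(b)
    n = len(ra)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    va = sum((x - ma) ** 2 for x in ra) ** 0.5
    vb = sum((y - mb) ** 2 for y in rb) ** 0.5
    return cov / (va * vb)

# Invented example: expert scores for six ideas, plus three
# community mechanisms' scores for the same ideas.
expert = [9, 3, 7, 5, 8, 2]
mechanisms = {
    "simple_vote": [8, 4, 6, 5, 9, 1],
    "star_rating": [7, 2, 9, 4, 8, 3],
    "prediction_market": [9, 3, 8, 4, 7, 2],
}
for name, scores in mechanisms.items():
    print(name, round(spearman(expert, scores), 3))
```

The mechanism whose scores correlate most strongly with the expert ranking would, under this (simplified) criterion, be the better idea-selection design.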
Get recommended app lists, webcasts and resources selected by Apple Distinguished Educators. Our recommended apps have been tested across a variety of grade levels, instructional strategies and classroom settings.
While Augmented Reality (AR) has mostly been used for promotional pieces, the technology is slowly being put to more practical uses (see “Augmented Reality Becomes Practical: Interactive DIY Guides for Mobile Users”).
A one-time intelligence analyst with the Pentagon, Aimee Mullins is an athlete, model, and activist. And she does it all using a collection of experimental prosthetic legs. She says her special "cheetah" legs give her superpowers.
In 2004, Claudia Mitchell lost her left arm in a motorcycle accident. Two years later, she became the first woman to have a bionic arm - a prosthetic limb that she controls with her mind.
The robotic arm comes from the Rehabilitation Institute of Chicago, and was developed for $3 million. Mitchell, who used to peel bananas using both feet and one hand, can now carry items, lift cups, and move her prosthetic arm almost as naturally as her real one.
Many people ask me what the must-reads are for taking first steps in collective intelligence, wisdom and consciousness. Surprising as it may seem, this question opens onto many domains that might appear out of scope, for instance the way we feed ourselves. Indeed, isn’t food, like so many other questions, a matter of collective intelligence?
Migration towards global collective intelligence is today a journey for pioneers. Tomorrow it will be a mass migration. It is not yet open to everyone, as we are going to places where society has not yet settled. By “going” I mean living there, finding and inventing practical, pragmatic ways to build tomorrow’s humanity. This doesn’t dictate a way to think or to be; all of that will emerge naturally. Global collective intelligence should be seen as a new continent, with its own landscape and intrinsic laws. It is up to us to decide how we want to live there; our creativity will make the difference.
He has used his mind to control a robotic hand, sent his thoughts across the Atlantic to clench a mechanical fist, and even felt, in his own neurons, the signals from his wife’s nerves. Kevin Warwick is a professor at Reading University in England, a pioneer in cybernetics and a former cyborg. In 1998, thanks to an electronic chip implanted in his body, doors would open and lights would follow his passing. In 2002, a 100-electrode array was wired into the nervous system of his arm so that he could remotely control an artificial hand. Now, Silicon.com has a wonderful nine-minute interview with Warwick, exploring his work and what the future holds for man and machine. According to the former cyborg, the two will become one. He’s already putting animal brain cells in robots as a control system! Watch the video in its entirety below, and get ready to meet the man who thinks he has experienced the future of humanity and returned to tell the tale.
Phones, makeup kiosks, car dashboards, televisions, rolls of paper, museum exhibits; it's hard to find something that hasn't been transformed into a computer interface device. Soon, the back of your hand will join that list, as a new device debuted here at the SIGGRAPH interactive technology conference can instantly convert a patch of skin into a multitouch controller for a computer.
Designed by Kei Nakatsuma, a researcher in the University of Tokyo’s Department of Information Physics and Computing, this new touch interface uses infrared sensors to track a finger across the back of the hand, as if it were a digital stylus or mouse. The device itself fits onto a wristwatch-sized band, giving users an adaptable computer control wherever they go.
Researchers at Georgia Tech have found that a little vibration goes a long way toward upping a person’s sense of touch. Using a glove of their own design, they’ve found that they can heighten tactile sensitivity by applying a small, high-frequency vibration to the side of the fingertip.
Ledface is a crowdsourcing platform that aims to harness collective intelligence to answer questions. In practical terms, this means that selected groups of users can collaborate on the platform to develop the best answer. Ledface’s algorithm comes into play to select the best group for each question, based on each user’s profile, and to compile the best answer.
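One simple way to realize that kind of profile-based matching is to score each user by how well their expertise tags overlap the question’s tags and assemble the top scorers into a group. The sketch below is a minimal illustration under that assumption; the matching rule, function names, and profiles are all invented, not Ledface’s actual algorithm.

```python
# Hypothetical sketch of profile-based group selection, loosely
# inspired by the Ledface description above. The tag-overlap scoring
# rule and all user profiles are assumptions for illustration only.

def select_group(question_tags, profiles, k=3):
    """Pick up to k users whose expertise tags best overlap the question.

    profiles maps user name -> list of expertise tags. Users with no
    overlapping tags are excluded, even if fewer than k users remain.
    """
    def score(tags):
        return len(set(tags) & set(question_tags))

    ranked = sorted(profiles.items(), key=lambda kv: score(kv[1]), reverse=True)
    return [user for user, tags in ranked[:k] if score(tags) > 0]

# Invented example profiles.
profiles = {
    "ana": ["python", "ml", "statistics"],
    "bruno": ["cooking", "travel"],
    "chen": ["ml", "vision"],
    "dee": ["statistics", "economics"],
}

group = select_group(["ml", "statistics"], profiles, k=2)
print(group)
```

A real system would presumably go further, weighting past answer quality and then compiling one answer from the group’s contributions, but the core select-by-profile step can be this small.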
"Brain cap" technology being developed at the University of Maryland allows users to turn their thoughts into motion. Associate Professor of Kinesiology José 'Pepe' L. Contreras-Vidal and his team have created a non-invasive, sensor-lined cap with neural interface software that soon could be used to control computers, robotic prosthetic limbs, motorized wheelchairs and even digital avatars.