Despite the advent of wearable technologies, the thought of humans becoming part machine remains in the realm of science fiction. But we might be farther along in this process than you’d expect, especially after digging deeper into the concept that many are calling “Augmented Sensory Perception,” where technology is not just biometrically attuned to humans but also embedded in their bodies.
Since the 1980s, Steve Mann, often referred to as the father of wearable computing, has been tinkering with the concept of cyborgs and the idea of “humanistic intelligence.” The theory goes that by including humans in the feedback loop of the computational process, the technology and the individual become inextricably intertwined. You can see this idea emerging at the fringes of wearable technology, where developers are viewing the limits of human ability as a starting place. This entry point could lead to devices that extend our own physical capacities.
Last month Google surprised the market with a new wearable off-shoot of the Glass project relating to future smart contact lenses. Our report, "Google Takes their Glass Vision to Smart Contact Lenses," will now serve as a foundational report for this invention on an ongoing basis. Today's new patent revelations cover the integration of tiny cameras into their future smart contact lenses. The user will ...
Because this is a patent application, Google focuses on the ideas they have for their future smart contact lenses and skirts any possible health issues that may arise from wearing devices that are constantly sending and receiving wireless communications. They don't explain what materials the camera and other components will be made of, so there's no way to determine whether there's a risk of allergic reactions or of the eye being scratched by the various components outlined in this invention.
Yet, to Google's credit, other recent patent applications published by the US Patent Office show that their R&D teams have gone out of their way to explore many ways to ensure that their product will be safe. Of course, until it passes through the hoops of the various governmental bodies it won't be as safe as it should be, but in the short term, Google is demonstrating that they've thought this project through from many angles, including consumer safety, which we'll be covering in upcoming reports throughout the coming week.
Google originally filed their patent application back in Q4 2012, and the US Patent Office published it earlier this month. Because this is only a patent application, the timing of such a product's arrival to market is unknown.
Outside of smartwatches, wristbands, and smart eyewear, wearable technology is making waves in the medical community. For example, we’ve already heard about health-monitoring “tattoos,” which can tell doctors about how your heart, muscles, or brain are functioning. The next evolutionary step could be similar smart patches, developed using nano technology, which not only deliver drugs into your system, but know when you’ve had enough or need a higher dose.
A study, carried out in South Korea and published in Nature Nanotechnology, outlines the development of “wearable bio-integrated systems” as an alternative to wearing bulkier hardware. These skin patches are not only less intrusive, but are also capable of delivering medicine to the wearer, and smart enough to know how much is needed.
Hugh Herr is building the next generation of bionic limbs, robotic prosthetics inspired by nature's own designs. Herr lost both legs in a climbing accident 30 years ago; now, as the head of the MIT Media Lab’s Biomechatronics group, he shows his incredible technology in a talk that's both technical and deeply personal — with the help of ballroom dancer Adrianne Haslet-Davis, who lost her left leg in the 2013 Boston Marathon bombing, and performs again for the first time on the TED stage.
Today, effective brain-machine interfaces have to be wired directly into the brain to pick up the signals emanating from small groups of nerve cells. But nobody yet knows how to make devices that keep listening to the same nerve cells for long periods of time. Part of the problem is mechanical: The brain sloshes around inside the skull every time you move, and an implant that slips by a millimeter may become ineffective.
Another part of the problem is biological: The implant must be nontoxic and biocompatible so as not to provoke an immune reaction. It also must be small enough to be totally enclosed within the skull and energy-efficient enough that it can be recharged through induction coils placed on the scalp at night (as with the recharging stands now used for some electric toothbrushes).
Humanity just made a small, bloody step towards a time when everyone can upgrade themselves towards being a cyborg. Of all places, it happened in the back room of a studio in the post-industrial German town of Essen.
It's there that I met up with biohacker Tim Cannon, and followed along as he got what is likely the first-ever computer chip implant that can record and transmit his biometric data. Combined in a sealed box with a battery that can be wirelessly charged, it's not a small package. And as we saw, Cannon had it implanted directly under his skin by a fellow biohacking enthusiast, not a doctor, and without anesthesia.
Called the Circadia 1.0, the implant can record data from Cannon's body and transfer it to any Android-powered mobile device. Unlike wearable computers and biometric-recording devices like Fitbit, the subcutaneous device is open-source and gives the user full control over the data.
The "Internet of X" is a buzzphrase we're starting to hear a lot: Beyond the much-discussed Internet of Things, there's now the Internet of Pets, the Internet of Plants, and, most interestingly, the nascent Internet of Bodies.
In other words, 25 years from now gadgets like smartphones, smartwatches, augmented glasses, virtual reality headgear, and the myriad other devices merging humans and the internet may be laughably antiquated. Computers will become so tiny they can be embedded under the skin, implanted inside the body, or integrated into a contact lens and stuck on top of your eyeball.
During 2010-12, noted AI researcher and long-time Humanity+ Board member Ben Goertzel conducted a series of textual interviews with researchers in various areas of cutting-edge science — artificial general intelligence, nanotechnology, life extension, neurotechnology, collective intelligence, mind uploading, body modification, neuro-spiritual transformation, and more. These interviews were published online in H+ Magazine, and are here gathered together in a single volume. The resulting series of dialogues treats a variety of social, futurological and scientific topics in a way that is accessible to the educated non-scientist, yet also deep and honest to the subtleties of the topics being discussed.
Between Ape and Artilect is a must-read if you want the real views, opinions, ideas, muses and arguments of the people creating our future.
- Pei Wang: What Do You Mean by “AI”?
- Joscha Bach: Understanding the Mind
- Hugo DeGaris: Will There be Cyborgs?
- DeGaris Interviews Goertzel: Seeking the Sputnik of AGI
- Linas Vepstas: AGI, Open Source and Our Economic Future
- Joel Pitt: The Benefits of Open Source for AGI
- Randal Koene: Substrate-Independent Minds
- João Pedro de Magalhães: Ending Aging
- Aubrey De Grey: Aging and AGI
- David Brin: Sousveillance
- J. Storrs Hall: Intelligent Nano Factories and Fogs
- Mohamad Tarifi: AGI and the Emerging Peer-to-Peer Economy
- Michael Anissimov: The Risks of Artificial Superintelligence
- Muehlhauser & Goertzel: Rationality, Risk, and the Future of AGI
- Paul Werbos: Will Humanity Survive?
- Wendell Wallach: Machine Morality
- Francis Heylighen: The Emerging Global Brain
- Steve Omohundro: The Wisdom of the Global Brain and the Future of AGI
- Alexandra Elbakyan: Beyond the Borg
- Giulio Prisco: Technological Transcendence
- Zhou Changle: Zen and the Art of Intelligent Robotics
- Hugo DeGaris: Is God an Alien Mathematician?
- Lincoln Cannon: The Most Transhumanist Religion?
- Natasha Vita-More: Upgrading Humanity
- Jeffery Martin & Mikey Siegel: Engineering Enlightenment
The Cybot robot is British: it was designed at the Cybernetics Intelligent Research Group (CIRG) http://www.cyber.rdg.ac.uk/CIRG/home.htm at the University of Reading, a laboratory headed by the well-known Kevin Warwick*. Its design is inspired by that of the Seven Dwarves Robots originally created at CIRG to introduce the university's students to robotics.
The product of seven years of research, the Dwarf uses a simple sonar system (similar to the one found on Cybot) that lets it avoid obstacles or follow objects. Later versions of the robot gained the ability to learn on its own, move around its environment, and share information with its fellow robots, either directly or over the Internet. Cybot, for its part, borrows much of the technology used in this prototype while adding new features and traits of its own, which make it a capable robot that is easy to assemble from a kit.
Hostility to the use of wearable computers and cameras threatens to limit their benefits, says Steve Mann.
Let’s value people at least as much as we do merchandise and elevate the wearable computer to the level of the security camera. We don't forbid the cameras that protect five-cent candies, so let’s not forbid people from protecting themselves with the same kind of technology. I have proposed legislation to protect the right of individuals to remember, computationally, what they experience.
As wearable computers and cameras become more widespread, we will certainly need to adopt new protocols and social attitudes toward the capture and sharing of visual information and other data. But these protocols should not include discrimination against users of these valuable assistive devices.
Invisibility cloaks. The search for extraterrestrial intelligence. A Facebook for genes. These were just a few of the startling topics IFTF explored at our recent Technology Horizons Program conference on the "Future of Science." More than a dozen scientists from UC Berkeley, Stanford, UC Santa Cruz, Scripps Research Institute, SETI, and private industry shared their edgiest research driving transformations in science. MythBusters' Adam Savage weighed in on the future of science education.
All of their presentations were signals supporting IFTF's new "Future of Science" forecast, laid out in a new map titled "A Multiverse of Exploration: The Future of Science 2021." The map focuses on six big stories of science that will play out over the next decade: Decrypting the Brain, Hacking Space, Massively Multiplayer Data, Sea the Future, Strange Matter, and Engineered Evolution. Those stories are emerging from a new ecology of science shifting toward openness, collaboration, reuse, and increased citizen engagement in scientific research.
Deep learning itself is a revival of an even older idea for computing: neural networks. These systems, loosely inspired by the densely interconnected neurons of the brain, mimic human learning by changing the strength of simulated neural connections on the basis of experience. Google Brain, with about 1 million simulated neurons and 1 billion simulated connections, was ten times larger than any deep neural network before it. Project founder Andrew Ng, now director of the Artificial Intelligence Laboratory at Stanford University in California, has gone on to make deep-learning systems ten times larger again.
Such advances make for exciting times in artificial intelligence (AI) — the often-frustrating attempt to get computers to think like humans. In the past few years, companies such as Google, Apple and IBM have been aggressively snapping up start-up companies and researchers with deep-learning expertise. For everyday consumers, the results include software better able to sort through photos, understand spoken commands and translate text from foreign languages. For scientists and industry, deep-learning computers can search for potential drug candidates, map real neural networks in the brain or predict the functions of proteins.
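The core mechanism described above, a network that learns by adjusting the strength of simulated neural connections on the basis of experience, can be sketched in a few lines. The toy below is an illustrative assumption, not code from Google Brain or any system mentioned here: a single-neuron perceptron (the same basic idea as Rosenblatt's 1957 device) that learns the logical OR function by nudging its weights whenever its output disagrees with the target.

```python
# Illustrative sketch only: a single simulated neuron whose connection
# strengths ("weights") are adjusted from experience.

def step(x):
    """Threshold activation: fire (1) if the weighted input exceeds zero."""
    return 1 if x > 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    """Strengthen or weaken each connection whenever output != target."""
    w = [0.0, 0.0]   # connection strengths
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = step(w[0] * x1 + w[1] * x2 + b)
            err = target - out
            # The "learning from experience" step
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Logical OR: output 1 if either input is 1
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
for (x1, x2), target in data:
    assert step(w[0] * x1 + w[1] * x2 + b) == target
```

Deep networks like Google Brain stack millions of such units and use more sophisticated update rules, but the principle, connection strengths shaped by experience, is the same.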
One of the best descriptors of our potential future world was explored in a recent paper by cyberneticist Francis Heylighen titled: Return to Eden? Promises and Perils on the Road to Superintelligence.
Here I summarize the main technological mechanisms as described by Heylighen. I hope you will quickly realize the point of this exercise. In 2014, we live in a world with global brain technology.
IEET Fellow David Eagleman discusses how we and other animals perceive reality. He refers to the umwelt in the context of how our technologies will enhance our experience of the umwelt, so that we can experience different properties of the world.
“In the semiotic theories of Jakob von Uexküll and Thomas A. Sebeok, umwelt (plural: umwelten; from the German Umwelt meaning “environment” or “surroundings”) is the “biological foundations that lie at the very epicenter of the study of both communication and signification in the human [and non-human] animal.” The term is usually translated as “self-centered world”. Uexküll theorised that organisms can have different umwelten, even though they share the same environment.” (Wikipedia)
The notion of «augmented reality» can be taken literally insofar as the process makes hidden dimensions manifest, revealing a broadened panorama of things not directly perceptible by the senses. Yet the label could just as well be twisted: given that the advice delivered is generally commercial in nature, the rather flattering dimension of «augmentation» should be displaced slightly in favor of its «orienting» force, so that it would more accurately be called «oriented reality». ...
Augmented reality is patent proof of the virtually omniscient power of a technology that now clings to the body, or becomes one with our perception of things, as with Google Glass and other connected eyewear that graft onto everyday experience a theoretically infinite and evolving reservoir of related information. This is the last step before lenses are implanted in contact with the retina, setting us up as cyborgs augmented not with artificial organs but enveloped in data individually tailored to each of our «profiles» and situations. These increasingly sophisticated devices cannot be reduced in scope and stakes to merely «relevant» information or advice; they call on us to grasp the «cognitive turn» now taking hold.
We live in an era of accelerating change, when scientific and technological advancements are arriving rapidly. As a result, we are developing a new language to describe our civilization as it evolves. Here are 20 terms and concepts that you'll need to navigate our future.
In the vein of Hercules, the American company Ekso Bionics has developed an exoskeleton that lets paraplegics walk again. The device consists of aluminum and titanium rods attached to the legs that mechanically support the wearer as they walk.
The British team behind the BeBionic3, for their part, have created a prosthetic hand made of aluminum and carbon that, connected to the muscle and nerve endings, can perform 14 different movements: pointing the index finger, gripping with a finger, grasping with the whole hand, pinching, and so on.
This raises a question, however: what happens when the prosthesis surpasses the human? Researchers at Princeton have created a bionic ear that restores impaired hearing, but it goes further, potentially allowing the wearer to hear frequencies that are normally impossible to perceive.
The Future of the Body: Phenomenology, Medicine and the (Post)human, 19-20 June 2014, Trinity College Dublin. This conference will bring together leading scholars working in Philosophy, Medical Humanities, Medicine, and related disciplines whose work critically engages with the status of the body. Central to this engagement is a phenomenological focus on the role of lived experience […]
The aim of this workshop is to employ phenomenology as the method to interrogate the future of the body and the future of medicine. The advantage of a phenomenological approach is that it provides a counterpart to the objectifying tendencies of naturalistic approaches, which treat the body in a non-specific way; as Husserl himself states, “phenomenology demands a direct personal production of the pertinent phenomenon” (Husserl 1975, 61). By employing a phenomenology that calls upon the specificity of lived experience, we gain a much richer account of the body than would otherwise be available in a naturalistic setting.
It is surely one of the most anticipated moments of Innorobo, the robotics trade show that opens this Tuesday morning in Lyon. For the first time, the French company Aldebaran Robotics will exhibit Romeo, a 1.40 m humanoid presented as «a true personal assistant and companion» for the elderly. While this android is far from resembling the robots of the Swedish series «Real Humans», it is nonetheless capable of walking, seeing in three dimensions, and speaking. At 40 kg, this carbon-fiber and rubber robot could even soon open doors and set objects on a table.
In development since 2009, the project is organized around Cap Robotique, an association of 21 companies within the Cap Digital competitiveness cluster. Over the past five years, Romeo's capabilities and specifications have grown. The scientists have notably developed mobile eyes combined with a vestibular system that quite simply stabilizes the robot's gaze. The researchers have also worked on its ability to recognize objects, though Romeo is still slow to identify them.
Yes, Internet use affects how our brains memorize. That is what recent work by Betsy Sparrow, based in the psychology department at Columbia University (USA), shows. According to the study published in Science on August 5(3), frequent use of search engines and online resources has changed the way we memorize information. Computers and the Internet (not to mention smartphones) have become a kind of external storage for our memory, on which humans now rely. Rather than remembering certain facts themselves, Internet users remember how to find them again online, or on their computers(4).
The US Army turns to augmented reality: thanks to the Q-Warrior helmet, American soldiers will soon be able to receive a wealth of information displayed directly in front of their eyes.
With this Iron Man-style helmet, it will be possible, for example, to scan a crowd to pick out wanted individuals, to capture and transmit video with the camera, to identify points on a map, and even to guide the soldier to an extraction point. These smart glasses will be far more rugged than consumer models such as Google Glass. The Army has reportedly already placed its first orders with BAE Systems.
A student at the Florida International University (FIU) dons a sensor-laden pair of gloves and vest and an Oculus Rift virtual reality headset. He
Using a potent cocktail of new technologies and $20,000 from a private contributor, Jeremy Robins, a team of FIU researchers and students says they’ve engineered a telepresence robot suitable for law enforcement—a real telepresence RoboCop.
Still, TeleBot is a great example of a growing trend. Telepresence robots are entering a number of industries. In medicine, doctors can visit and deliver care to patients hundreds or thousands of miles away. Business people can attend meetings without hopping a plane. Remote workers can visit the home office and connect with coworkers and collaborators.
We’ve written about telepresence robots used in hospitals (InTouch Health’s RP-Vita), offices (Cisco and iRobot), or pretty much wherever (Double). They range from outrageously expensive ($95,000) to nearly affordable ($2,500).
..... but, examined carefully, the articles seem more enthusiastic than substantive. As I wrote before, the story about Watson was off the mark factually. The deep-learning piece had problems, too. Sunday’s story is confused at best; there is nothing new in teaching computers to learn from their mistakes. Instead, the article seems to be about building computer chips that use “brainlike” algorithms, but the algorithms themselves aren’t new, either. As the author notes in passing, “the new computing approach” is “already in use by some large technology companies.” Mostly, the article seems to be about neuromorphic processors—computer processors that are organized to be somewhat brainlike—though, as the piece points out, they have been around since the nineteen-eighties. In fact, the core idea of Sunday’s article—nets based “on large groups of neuron-like elements … that learn from experience”—goes back over fifty years, to the well-known Perceptron, built by Frank Rosenblatt in 1957. (If you check the archives, the Times billed it as a revolution, with the headline “NEW NAVY DEVICE LEARNS BY DOING.” The New Yorker similarly gushed about the advancement.) The only new thing mentioned is a computer chip, as yet unproven but scheduled to be released this year, along with the claim that it can “potentially [make] the term ‘computer crash’ obsolete.” Steven Pinker wrote me an e-mail after reading the Times story, saying “We’re back in 1985!”—the last time there was huge hype in the mainstream media about neural networks.
Mind viruses (Brodie, 1996) can be defined as parasitic memes. A meme is an idea, belief or behavior that spreads across society by being transmitted from person to person (Heylighen & Chielens, 2008). Successful memes tend to exhibit characteristics such as plausibility, simplicity, novelty, usefulness, emotional impact, and ease of communication. As long as a meme provides valuable information, its propagation benefits society. However, parasitic memes mimic the characteristics of beneficial memes in order to spread more easily, while providing information that is worthless, wrong, or even dangerously misleading (Heylighen & Chielens, 2008). Examples include chain letters, false rumors, urban legends, hate speech, conspiracy theories, superstitions, extremist ideologies, and various fundamentalist and irrational beliefs.
Such memes are particularly dangerous to individuals who already have lost their sense of reality by their immersion in supernormal and flow-producing stimulation environments, and who thus are ready to embrace the false promises of a mind virus. Because they spread across communities while indoctrinating their carriers, mind viruses have an even greater potential to create damage. For example, they may recruit a worldwide group of people into an absurd and destructive enterprise, such as an outbreak of gratuitous rioting, a mass suicide, a terrorist plot, or even a genocide. The ubiquitous network enhances their powers of spreading and mobilization, thus increasing the danger (p. 24-25).
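The person-to-person transmission dynamic described above can be made concrete with a toy simulation. The model and every parameter here are illustrative assumptions of my own, not drawn from Brodie (1996) or Heylighen & Chielens (2008): each carrier has a fixed chance of passing the meme to each random contact per step, so a "catchier" meme (higher transmissibility) reaches far more of the population.

```python
import random

def spread_meme(population, seeds, transmissibility, steps, rng):
    """Return the number of carriers after simulating epidemic-style
    person-to-person transmission of a meme."""
    carriers = set(range(seeds))  # initial carriers of the idea
    for _ in range(steps):
        adopters = set()
        for person in range(population):
            if person in carriers:
                continue
            # Each step, a person meets a few random contacts and adopts
            # the meme if any carrier among them transmits it.
            contacts = rng.sample(range(population), k=5)
            if any(c in carriers and rng.random() < transmissibility
                   for c in contacts):
                adopters.add(person)
        carriers |= adopters
    return len(carriers)

rng = random.Random(42)
# A more easily communicated meme spreads to more people in the same time
weak = spread_meme(1000, seeds=5, transmissibility=0.05, steps=20, rng=rng)
strong = spread_meme(1000, seeds=5, transmissibility=0.5, steps=20, rng=rng)
assert strong > weak
```

The same mechanics explain why parasitic memes that merely mimic the traits of useful ones (simplicity, emotional impact, ease of communication) can still saturate a network: the model rewards transmissibility, not truth.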