Cyborgs_Transhumanism
Trends about the next generation
Curated by luiy
Rescooped by luiy from Science-fiction & innovation

The anti-hacker body? The human body becomes a reliable channel for data transmission


Secure wireless communication is essential in many companies to prevent information leaks. What if the most secure transmission tool were the human body?


Via Lockall
luiy's insight:
The human body is a secure channel

For the researchers, the human body could be an unbeatable tool in terms of security, impossible to hack: only the person wearing the BodyCom device can use the object being touched. BodyCom also prevents "relay attacks", in which thieves make a keyless-entry vehicle (PKES) believe its owner is nearby so that it unlocks. With BodyCom, only the person authenticated by the central controller can open the vehicle. To develop its innovation, Microchip has already put BodyCom on sale as a kit priced at $149.
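The relay-attack resistance described above boils down to an authentication handshake between the central controller and the unit carried by the user, with the body itself acting as the coupling channel. Below is a minimal, purely illustrative Python sketch of such a challenge-response exchange; the pre-shared key, message format, and function names are assumptions for illustration and do not describe Microchip's actual BodyCom protocol.

import hmac
import hashlib
import os

SHARED_KEY = b"device-provisioning-secret"  # hypothetical pre-shared key

def controller_challenge() -> bytes:
    """Central controller (e.g. in the car door) emits a random nonce."""
    return os.urandom(16)

def wearer_response(nonce: bytes) -> bytes:
    """Unit carried by the user answers over the body channel with an HMAC of the nonce."""
    return hmac.new(SHARED_KEY, nonce, hashlib.sha256).digest()

def controller_verify(nonce: bytes, response: bytes) -> bool:
    """Controller unlocks only if the response matches its own computation."""
    expected = hmac.new(SHARED_KEY, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# The security argument in the article is physical as well as cryptographic:
# the response only reaches the controller if the authenticated wearer is
# actually touching the object, which is what defeats the PKES relay attack.
nonce = controller_challenge()
print("unlock:", controller_verify(nonce, wearer_response(nonce)))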

Scooped by luiy

The first Google Glass 'virus' lets others see what you see.

Google Glass is only in the hands of developers, but that hasn't stopped the augmented-reality glasses from getting their first virus or, to be precise, their second vulnerability. The vulnerability was discovered by security consultant and white-hat hacker Jay Freeman. Freeman, better known as Saurik, is a living legend as the developer behind rooting applications such as the popular Cydia for iOS. Saurik's approach involves gaining access to the device so it can be controlled remotely and then, with just a few simple pieces of code, installing software that records images and audio without the user noticing. Getting into Google Glass is not very difficult, given that Google itself declared the system open and hackable. The vulnerability is not discreet: it actually wipes the entire image and video library before it becomes functional. Even so, Freeman warns that the attack could be refined to go unnoticed, with the resulting risk to privacy. A few days ago, Google's glasses were already the subject of some controversy over their security recommendations.
Ursula Sola de Hinestrosa's curator insight, May 8, 2013 4:38 PM

Ouch... speaking of privacy...

Scooped by luiy

Part 1: Cross-Cultural Representations of the Female Cyborg

Exploring the effects of the technological and societal (r)evolutions of modernity on male perceptions of the woman and the machine by comparing examples from two prevailing and far-reaching modes of cultural expression: Japanese anime and the...
luiy's insight:

The female cyborgs of Japan and America are disparate beings, incongruent cousins whose blood ties are the assembly lines and atom bombs of the Industrial Age.  While themes of genuine humanity, individual history, and sexual reproduction are addressed in cinematic representations of both Eastern and Western cyborgs, the actual role and reception of the feminine cyborg in these opposed patriarchal cultures differs drastically.  I will explore the effects of the technological and societal (r)evolutions of modernity on male perceptions of the woman and the machine by comparing examples from two prevailing and far-reaching modes of cultural expression: Japanese anime and the Hollywood motion picture.  For comparison, Mamoru Oshii’s Japanese anime Ghost in the Shell and Jean-Pierre Jeunet’s Hollywood production of Alien: Resurrection will be examined for differences in male perception and visual representation of the female cyborg.  Because the cyborg is a product and sign of a given culture’s sociohistorical legacy, an examination of the cyborg’s visual depiction will reveal “how the spread of technologies in everyday life shapes and is shaped by existing discourses of gender, sexuality, community and nation”[1]

Rescooped by luiy from Pervasive Entertainment Times

Good summary - 5 reasons to get excited about Augmented Reality in 2013

Next year, I predict that augmented reality (AR) will be everywhere. Here are my five reasons why:

Via Gary Hayes
Gary Hayes's curator insight, May 6, 2013 8:16 AM

Quote "It doesn’t exist, and it probably won’t. Augmented Reality is a horizontal technology, which means that the nigh-limitless applications make it a challenging endeavor to develop the Evernote-YouTube-Wordpress-Instagram of Augmented Reality. We did however see the AR Angry Birds, and even if it isn’t official it’s still a pretty clear indication that a successful AR game could lead the way for massive adoption. There are already some good examples out there, like the new JengAR game that inserts the 3-D content into the environment itself rather than needing a printed image."

Binary Racoon's curator insight, May 6, 2013 10:30 AM

Glass is coming. That's enough reason for me.

Jeni Mawter's curator insight, May 7, 2013 11:52 PM

Children's and Young Adult writers need to write with Augmented Reality technology in mind.

Rescooped by luiy from Embodied Zeitgeist

Google Glass: An Etiquette Guide

These high-tech specs with a built-in computer have the geek world abuzz, but wearing them in polite society requires decorum. Here, an open letter to very early adopters.

Via Xaos
luiy's insight:

DEAR GOOGLE GLASS WEARER,

 

Congratulations! You're one of the privileged few who've scored a pair of Google Glass, the futuristic eyewear that puts a tiny, voice-controlled, Wi-Fi-enabled computer on your face. It's the most anticipated gadget since the iPad, iPhone or iAnything, really. And the best part? You members of Google's "Explorer Program"—mostly app developers and supernerds—will be testing Glass in the wild months before the general public will get to wear it, fingers crossed, at the end of the year.

 

Soon you'll be able to view emails, text messages and maps on a translucent screen hovering in the upper-right corner of your peripheral vision. Breaking news alerts will appear right before your eyes. You'll snap photos just by saying, "OK Glass, take a picture." In other words, you'll be able to perform tasks everyone else has to do with their grubby hands and filthy smartphones—what Neanderthals!...

Karen E Smith's comment, June 29, 2013 8:29 PM
I am looking forward to it. I've tried them on and think I could use them on a daily basis.
Rescooped by luiy from Cyborg Lives

New adverts 'could track your eyes'

Technology which can locate your eyes and track your gaze has been developed by a team at Lancaster University.

-

An advertising system which is able to track your eye movements while you shop has been created by researchers based at Lancaster University.

The Sideways project uses software to locate faces and eye movements of shoppers captured on camera.

It could allow for video screens which change adverts depending on what you look at in a shop.

The team told the BBC they hoped the technology would be in use in shops within five years.

The technology can also be used to allow people to use their eyes to control content on screens, such as scrolling through items on a list.

"The system uses a single ordinary camera that is placed close to the screen," explained senior researcher Andreas Bulling. "So we don't need any additional equipment.

"The system detects the faces of people walking by and calculates where the eyes are relative to the eye corners."


Via Wildcat2030
Rescooped by luiy from DigitAG& journal

#cyborgs : Sexbots Will Give Us Longevity Orgasms - ImmortalLife.net

Sexbots are coming, and we will cum with them. Three times a week or whatever our physician / longevity coach recommends.

Via Andrea Graziano
luiy's insight:

Sexbots will always climax when we climax if we press that little button on their butt.

 

Cinema has already depicted very desirable stars as Sexbots — a “mecha gigolo” (Jude Law in “A.I.”) and a “pleasure model” (Daryl Hannah in “Blade Runner”). Now tech is getting close to producing mainstream sexbots. “First Android” in Germany offers male & female models that breathe, are warm, and have heartbeats that thump louder with sex. In Toronto, inventor Le Trung has fashioned “Aiko” — he claims she’s not for sex, but she can have an orgasm, her name translates as “love child” and her measurements are 32” 23” 33”. Japan has Repliee Q1 Expo, who flutters her eyelids and moves her hands. Male sex robots are lagging in development, but… vibrator sales are buzzing, dildo sales point skyward, and my prediction is that male Sexbot sales will rival female in the upcoming years.

 

More predictions: 

 

Sexbots with this option: do we want eye contact, or not? 

Sexbots that shower after we use them and put themselves back in the closet. 

Sexbots available in hotels, cruise ships, vacation homes, and convalescent hospitals. 

Sexbot booths in liquor stores that wipe out corner prostitution. 

Sexbots that are delicious when you lick them. 

Sexbot Packages for sorority parties, military camps, prisons. 

Parents buy their teens sexbots to assist them in their passage through puberty. 

Healthclubs offer soundproof chambers for workouts with XTreme Sexbot Cardio.  

Sexbots that tell ten million jokes, because laughing also adds years to life. 

Army vets buy “Full Metal Jacket” sexbots that say, “Me so horny! Me love you long time.” 

Sexbot Teachers for shy humans to practice with before exposing themselves to critical humans.

Yes, I believe we’ll still have sex with people. I also believe we’ll only love & marry humans because we’ll still need partners that share the “human condition” — smart & vain, with horrible emotions, and the capacity to make stupid mistakes. Longevity studies indicate long, gentle, happy marriages add seven years to your life, equal to the Big O benefits. Sure, the marriage bed might change when Sexbots arrive, and human couples might buy a Sexbot if they want an easy menage a trois. 

Scooped by luiy

Reading with the Body: Interpreting Three Dimensional Media as Narrative

luiy's insight:

The Avatar as Agent.


In a three-dimensional virtual space the avatar is an embodied representation of its owner. Embodiment for an avatar exists in the sense of occupying time and space. Digital theorist and artist Mark Stephen Meadows has described an avatar as “an interactive, social representation of an Internet user” [8].

 

Neal Stephenson uses the word avatar in his 1992 novel Snow Crash for ‘the audiovisual bodies that people use to communicate with each other in the Metaverse,’ or the virtual simulation of the human form in the metaverse, a fictional virtual-reality application on the Internet [9]. Kai-Mikael Jää-Aro classifies avatars from a functional perspective as ‘those objects, which potentially are in the high agency end of the spectrum, since the property of agency can change over the course of a session’ (original emphasis) [10]. In each of these three contexts the avatar is the anchor for a personality in a virtual world. The relationship between the avatar and the person(ality) that animates it is guided by what determines agency in the virtual environment.


Avatars, like everything in a virtual online environment, are constructed from computer language code. How an avatar is able to move, what sounds it makes, how it communicates and physically interacts with other avatars, and what it looks like are all enabled by the code from which it is composed. However, in regards to the point/s of reception for the human participants in three-dimensional worlds it is the Graphical User Interface (GUI) that presents options regarding how the avatar can behave. The visual and spatial attributes of the GUI are what the person behind the avatar responds to in interacting with the virtual world space. These attributes include such simulative and symbolic characteristics as the space between a door (a place of entry) and a sofa (resting or meeting place) and the physical dimensions of the avatar. The avatar is the line of difference between the person controlling it and the visual and spatial attributes of the virtual world. Interpreting the virtual world is performed from the perspectives and abilities of the avatar. The avatar as such a line of difference is determined by the agency granted to it as part of narrative architecture.
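To make the "avatar as agent" point concrete, here is a small, purely illustrative Python sketch (not taken from the paper) in which an avatar's agency is literally bounded by the actions its code exposes; anything the interface does not register as an action simply cannot be performed.

from dataclasses import dataclass, field

@dataclass
class Avatar:
    """Illustrative model: the avatar's agency is whatever its code allows."""
    name: str
    position: tuple = (0.0, 0.0, 0.0)
    actions: dict = field(default_factory=dict)  # verbs the interface exposes

    def register(self, verb, handler):
        self.actions[verb] = handler

    def perform(self, verb, *args):
        if verb not in self.actions:
            raise PermissionError(f"{self.name} has no agency to '{verb}'")
        return self.actions[verb](self, *args)

def walk(avatar, dx, dy):
    x, y, z = avatar.position
    avatar.position = (x + dx, y + dy, z)
    return avatar.position

ava = Avatar("reader")
ava.register("walk", walk)
print(ava.perform("walk", 1.0, 0.0))   # permitted: granted by the narrative architecture
# ava.perform("fly") would raise an error: the code grants no such agency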

Scooped by luiy

#cyborgs : MindWave Mobile. Neuroscience and brainwave technology.

luiy's insight:
What is MindWave Mobile®?

 

As the world’s first comprehensive brainwave-reading device for iOS and Android platforms, the new MindWave Mobile headset is evolved for today’s mobile user. It differs from MindWave by transferring data via Bluetooth™, rather than radio frequency, and is available in two packages: Brainwave Starter Kit and the MyndPlay bundle.

Brainwave Starter Kit ($99.99)
The Brainwave Starter Kit is a basic introduction to neuroscience and brainwave technology. Simply slip on the headset and see your brainwaves displayed on screen in the colorful Brainwave Visualizer. Watch how your attention and relaxation levels change in real time as you listen to your favorite music! (A minimal sketch of reading these values programmatically follows below.)


With more than 100 applications available through our Store, there are plenty of options to choose from based on your age and personal interests.



MindWave Mobile with MyndPlay ($129.99)
MyndPlay is the world’s first mind-controlled video application that puts users in control of their own movie experiences. Similar to Edward Packard’s “Choose Your Own Adventure” gamebooks, MyndPlay allows users to adjust various scenes and outcomes within the movie, simply by focusing or relaxing when required.
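The attention and relaxation values that the Brainwave Visualizer plots can also be read programmatically. The Python sketch below assumes NeuroSky's ThinkGear Connector is running locally and streaming JSON; the host, port, configuration message, delimiter, and field names are assumptions drawn from NeuroSky's developer tooling rather than from the product description above.

import json
import socket

HOST, PORT = "127.0.0.1", 13854  # assumed ThinkGear Connector endpoint

with socket.create_connection((HOST, PORT)) as sock:
    # Ask the connector for parsed JSON rather than raw EEG samples.
    sock.sendall(json.dumps({"enableRawOutput": False, "format": "Json"}).encode())
    buffer = b""
    while True:
        data = sock.recv(4096)
        if not data:
            break
        buffer += data
        # Packets are assumed to be carriage-return delimited JSON objects.
        while b"\r" in buffer:
            line, buffer = buffer.split(b"\r", 1)
            try:
                packet = json.loads(line)
            except ValueError:
                continue
            esense = packet.get("eSense")
            if esense:
                # The same attention/relaxation values the Brainwave
                # Visualizer displays in real time.
                print("attention:", esense.get("attention"),
                      "meditation:", esense.get("meditation"))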
Rescooped by luiy from The Long Poiesis

#cyborgs : Disruptions: Brain Computer Interfaces Inch Closer to Mainstream

Soon, we could be turning on the lights at home just by thinking about it, or sending an e-mail from our smartphone without even pulling the device from our pocket.

-

Last week, engineers sniffing around the programming code for Google Glass found hidden examples of ways that people might interact with the wearable computers without having to say a word. Among them, a user could nod to turn the glasses on or off. A single wink might tell the glasses to take a picture.

But don’t expect these gestures to be necessary for long. Soon, we might interact with our smartphones and computers simply by using our minds. In a couple of years, we could be turning on the lights at home just by thinking about it, or sending an e-mail from our smartphone without even pulling the device from our pocket. Farther into the future, your robot assistant will appear by your side with a glass of lemonade simply because it knows you are thirsty.

Researchers in Samsung’s Emerging Technology Lab are testing tablets that can be controlled by your brain, using a cap that resembles a ski hat studded with monitoring electrodes, the MIT Technology Review, the science and technology journal of the Massachusetts Institute of Technology, reported this month.

The technology, often called a brain computer interface, was conceived to enable people with paralysis and other disabilities to interact with computers or control robotic arms, all by simply thinking about such actions. Before long, these technologies could well be in consumer electronics, too.

Some crude brain-reading products already exist, letting people play easy games or move a mouse around a screen.


Via Wildcat2030, Xaos
luiy's insight:

NeuroSky, a company based in San Jose, Calif., recently released a Bluetooth-enabled headset that can monitor slight changes in brain waves and allow people to play concentration-based games on computers and smartphones. These include a zombie-chasing game, archery and a game where you dodge bullets — all these apps use your mind as the joystick. Another company, Emotiv, sells a headset that looks like a large alien hand and can read brain waves associated with thoughts, feelings and expressions. The device can be used to play Tetris-like games or search through Flickr photos by thinking about an emotion the person is feeling — like happy, or excited — rather than searching by keywords. Muse, a lightweight, wireless headband, can engage with an app that “exercises the brain” by forcing people to concentrate on aspects of a screen, almost like taking your mind to the gym.

 

Car manufacturers are exploring technologies packed into the back of the seat that detect when people fall asleep while driving and rattle the steering wheel to awaken them.

Scooped by luiy

#cyborgs : Virtual reality coming to Second Life | KurzweilAI

(Credit: Oculus VR) Linden Lab intends to integrate the Oculus Rift virtual-reality headset with Second Life, Wagner James Au reports on New World
luiy's insight:

Linden Lab intends to integrate the Oculus Rift virtual-reality headset with Second Life, Wagner James Au reports on New World Notes.

“The Oculus could become Second Life’s killer app, but only if Linden Lab is willing to go all in,” said Au. “Sounds like they are doing just that, in an official capacity.

 

We’ll get to experience Second Life with the Oculus Rift sometime in 2014 or 2015, when (and if) the retail version of the Rift comes out, as scheduled, he said.

 

“But we’ll probably see early experiments with integration this year, from Linden Lab staff and third party developers who own Rift dev kits.

“And I’m even more convinced that Philip Rosedale’s new start-up is also working on another Oculus Rift-powered version of virtual reality, too. Given all that, we should be seeing a lot of virtual world/virtual reality innovations in the coming months.”

 

Does that mean YouTube celeb Grandma will don a wild outfit and hang out in Second Life? One can only hope. — Editor

Rescooped by luiy from Talks

Erik Brynjolfsson: The key to growth? Race with the machines

As machines take on more jobs, many find themselves out of work or with raises indefinitely postponed. Is this the end of growth? No, says Erik Brynjolfsson -- it’s simply the growing pains of a radically reorganized economy. A riveting case for why big innovations are ahead of us … if we think of computers as our teammates. Be sure to watch the opposing viewpoint from Robert Gordon.


Via Complexity Digest
Complexity Digest's curator insight, April 24, 2013 3:02 PM

Interesting views and data about human-machine symbiosis.

Scooped by luiy

ScreenLab 0x02 - Exploring new modes of perception / @JoanieLemercier @JoelGethinLewis @kcimc + @elliotwoods @UoSArts

Last December, three artists were working at The University of Salford as part of the ScreenLab artist-in-residence initiative by Elliot Woods and Kit Turner.
luiy's insight:

Last December, three artists were working at The University of Salford as part of the ScreenLab artist-in-residence initiative by Elliot Woods and Kit Turner. The initiative aims to explore modes of perception and interaction under the theme ‘Future of Broadcast’. The invited artists, Kyle McDonald (USA), Joanie Lemercier (France) and Joel Gethin Lewis (UK), spent over two weeks developing open source tools and methods for future students and artists (on and off campus) to remix and re-use. This process also included contributions from a group of talented on-campus students in the arts and art-technology crossover, students who were involved not only in learning but also in the creation of new digital media techniques.

Two teams were formed, overseen and supported by the event's co-curator and participant Elliot Woods. Using the ‘Octave’, a state-of-the-art virtual reality suite, Joanie Lemercier and Kyle McDonald, inspired by the philosopher Plato (c. 360 B.C.), worked on a project that revolves around the four classical elements (earth, air, water and fire), which take the geometric form of four regular, convex polyhedrons in the immersive virtual reality environment.

Scooped by luiy

Look out: A wink from a Google Glass user is more than meets the eye | TechRadar

Scooped by luiy

Google Glass Updated With Google+ and Hangout Notifications

Owners of Google Glass are reporting that Google is pushing the first software update for the device, changing the version name to XE5.
Scooped by luiy

How Baxter—a Safer and Smarter Industrial Robot—Works | MIT Technology Review

Rethink Robotics’ new creation is easy to interact with, but the innovations behind the robot show just how hard it is to get along with people.
Rescooped by luiy from Tracking the Future

The Transhumanist Delusion


While we can measure the degree to which technologies transcend physical and physiological boundaries, we can merely speculate about the ethical consequences of these developments and about their effect on human self-perception. The merging of human consciousness and technology changes not only the latter, but also the former. And the question is whether technology will become more human in the long run, or whether humans will become more technical.


Via Szabolcs Kósa
luiy's insight:
A unique evolutionary moment

The human body sits squarely at the center of this debate. Until today, we have largely conceived of technology as a collection of external objects. Now, technology enters the body, merges with it, becomes a constitutive part of its host. This presents us with a unique moment in evolutionary history. The biggest drivers of change can be found in the military and the pharmaceutical sectors of the economy. And the big unknown is whether we will be able to put the new possibilities to good use.

 

New ideologies have emerged that frame the techno-narrative and justify its propagation. The most influential among them is the ideology of transhumanism, a worldview predicated on the notion of transcendence. By merging man and machine, transhumanists hope to open up new avenues of human development. A core group of transhumanist thinkers has found a home at Oxford University, from where they fight against the humanist desire to protect and examine humanity in its current form...

 

 

Man, machine, industry

This changes everything: Not only our human self-perception (which has always been important for our conception of present and future) but also our definition of civilization. Some of these developments proceed at a breathtaking pace, and it’s only justified to ask whether members of the transhumanist vanguard and advocates of “inversive” technologies actually grasp the consequences of their work.

 

Hence the following assertion: The emerging global neuro-technological industry is more significant than all current political uprisings and military conflicts. Experiments are good. Careless tinkering with human nature is not.

 

The crucial point is that we simply don’t know enough about ourselves to speedily abandon our current view of humanity and to turn ourselves – as some transhumanists desire – into cyborg creatures. Our confusion starts at the fundamental level: For example, what does it mean to “know”? Is it possible to transfer all knowledge online if we can develop algorithms with adequate levels of sophistication? Can knowledge become de-corporealized?

Nacho Vega's curator insight, May 7, 2013 4:35 AM

Technology will become more human in the long run!

Rescooped by luiy from e.cloud

Robotic insects make first controlled flight

The demonstration of the first controlled flight of an insect-sized robot is the culmination of more than a decade’s work, led by researchers at the Harvard School of Engineering and Applied Sciences and the Wyss Institute for Biologically Inspired...

Via Alessio Erioli
luiy's insight:


Last summer, in a Harvard robotics laboratory, an insect took flight. Half the size of a paper clip, weighing less than a tenth of a gram, it leapt a few inches, hovered for a moment on fragile, flapping wings, and then sped along a preset route through the air.

 

Like a proud parent watching a child take its first steps, graduate student Pakpong Chirarattananon immediately captured a video of the fledgling and emailed it to his adviser and colleagues at 3 a.m. — subject line: “Flight of the RoboBee.”

 

“I was so excited, I couldn’t sleep,” recalls Chirarattananon, co-lead author of a paper published this week in Science.

 

The demonstration of the first controlled flight of an insect-sized robot is the culmination of more than a decade’s work, led by researchers at the Harvard School of Engineering and Applied Sciences (SEAS) and the Wyss Institute for Biologically Inspired Engineering at Harvard.

Rescooped by luiy from Tracking the Future

Where are the Robots? 2013 Guardian Oxford London Lecture

Professor Paul Newman discusses the present and future state of robotics: asking how the state of the discipline measures up to science fiction, and discussing how Robots can learn to navigate our world, with profound consequences for society 


Via Szabolcs Kósa
Rescooped by luiy from Science News

TEDxOxford - Kevin Warwick - Cyborg Interfaces

In this talk Kevin Warwick, professor of Cybernetics at Reading University presents his talk on Cyborgs at TEDxOxford on 26th September 2011. He presents ideas on bringing back sight to the blind, allowing humans to see with sonar, and communicating with thought alone by combining artificial components with humans.


Via Sakis Koukouvis
Scooped by luiy

People with paralysis control robotic arms using brain-computer interface | Brown University News and Events

A new study in Nature reports that two people with tetraplegia were able to reach for and grasp objects in three-dimensional space using robotic arms that they controlled directly with brain activity.
luiy's insight:

About the BrainGate collaboration


This advance is the result of the ongoing collaborative BrainGate research at Brown University, Massachusetts General Hospital, Providence VA Medical Center; researchers at Stanford University have recently joined the collaboration as well. The BrainGate research team is focused on developing and testing neuroscientifically inspired technologies to improve the communication, mobility, and independence of people with neurologic disorders, injury, or limb loss.

 

Funding for the study and its projects comes from the Rehabilitation Research and Development Service, Office of Research and Development, U.S. Department of Veterans Affairs, the National Institutes of Health (some grants were funded all or in part through the American Recovery and Reinvestment Act), the Eunice Kennedy Shriver National Institute of Child Health and Human Development/National Center for Medical Rehabilitation Research (HD53403, HD100018, HD063931), the National Institute on Deafness and Other Communication Disorders, the National Institute of Neurological Disorders and Stroke (NS025074), the National Institute of Biomedical Imaging and Bioengineering (EB007401), the Doris Duke Charitable Foundation, the MGH-Deane Institute for Integrated Research on Atrial Fibrillation and Stroke, Katie Samson Foundation, and the Craig H. Neilsen Foundation. The contents do not represent the official views of the Department of Veterans Affairs or the United States Government.

The implanted microelectrode array and associated neural recording hardware used in the BrainGate research are manufactured by BlackRock Microsystems LLC (Salt Lake City, Utah). The research prototype Gen2 DEKAarm was provided by DEKA Integrated Solutions Inc, under contract from the Defense Advanced Research Project Agency (DARPA).

 

The BrainGate pilot clinical trial was previously directed by Cyberkinetics Neurotechnology Systems Inc. Foxborough, Mass., (CKI). CKI ceased operations in 2009, before the collection of data reported in the Nature manuscript. The clinical trials of the BrainGate2 Neural Interface System are now administered by Massachusetts General Hospital, Boston, Mass. Donoghue is a former chief scientific officer and a former director of CKI; he held stocks and received compensation. Hochberg received research support from Massachusetts General and Spaulding Rehabilitation Hospitals, which in turn received clinical trial support from Cyberkinetics.

Scooped by luiy

EPOC Neuroheadset. Neurotechnology | Neuroimaging

Based on the latest developments in neurotechnology, neuroimaging, and high-resolution EEG, Emotiv has developed a revolutionary new personal interface for human-computer interaction: the EPOC neuroheadset.
luiy's insight:

Based on the latest developments in neuro-technology, Emotiv has developed a revolutionary new personal interface for human computer interaction.  The Emotiv EPOC is a high resolution, neuro-signal acquisition and processing wireless neuroheadset.  It uses a set of sensors to tune into electric signals produced by the brain to detect player thoughts, feelings and expressions and connects wirelessly to most PCs. 

 

Please note: If you or any of your 3rd party applications require access to Raw EEG, you will need to purchase the Emotiv EEG Neuroheadset.

 

Headset Features: 

Limited edition design 

14 saline sensors offer optimal positioning for accurate spatial resolution 

Gyroscope generates optimal positional information for cursor and camera controls 

Hi-performance wireless gives users total range of motion 

Dongle is USB compatible and requires no custom drivers 

Lithium Battery provides 12 hours of continuous use

The limited edition EPOC is now available to customers worldwide and early users will have access to the Emotiv App Store and the very first games and programs developed exclusively for this one-of-a-kind neuro-technology platform. Developers are currently utilizing Emotiv EPOC technology in a variety of new and exciting ways.

Artistic and creative expression - Use your thoughts, feelings, and emotions to dynamically create color, music, and art. 

 

Life changing applications for disabled patients, such as controlling an electric wheelchair, mind-keyboard, or playing a hands-free game.

Games & Virtual Worlds - Experience the fantasy of controlling and influencing the virtual environment with your mind. Play games developed specifically for the EPOC, or use the EmoKey to connect to current PC games and experience them in a completely new way.

 

Market Research & Advertising - get true insight about how people respond and feel about material presented to them. Get real-time feedback on user enjoyment and engagement. 

Scooped by luiy

'Eve Online' developer builds a virtual reality space dogfighting simulation using Oculus Rift

If you had any question whether the Oculus Rift virtual reality headset would lead to desirable new games, just ask any of the 1,500 people who attended FanFest 2013 in Iceland this year. There, E...
Rescooped by luiy from Cyborg Lives

Trying Google Glass- Amber Case describes her brief experience


Amber Case describes her brief experience trying out Google Glass – quick first impressions.

-

A month ago I met a friend of a friend who was testing Glass for Google. He let me try it on in the back room of a quiet pub and I got to try out all of the different features. Aaron Parecki and I tried it on and experimented with its many features.

 

The features of Glass are not “consumptive”, as in, they don’t cause you to get away from reality. Rather, I’d call Glass’s features “active”. Think of every time you’d like to capture a moment, get driving directions, or check the time. Current technology forces one to take their phone out of a pocket to perform a task, whereas with Glass it’s right there. This is not a media device for sitting back and getting information to you. It’s a device that allows you to quickly act instead of pause and grab your device from your pocket.


Via Wildcat2030
luiy's insight:

Glass is a piece of Calm Technology

 

Glass is, by default, off. Like Mark Weiser’s words on Calm Technology, the tech is “there when you need it” and “not when you don’t”. This makes Glass a perfect example of tech that gets out of the way and lets you live your life, but springs to life when you need to access it.

 

Audio and Touch Input

The interface has two input types, audio and touch. You nod your head to turn the display on, then you can say “Ok Glass, search for “x”, or simply tap the side of your glass to scroll down the menu. The real world is noisy, so having two input types is important. And I suspect that Glass may have a difficult time recognizing your speech if you have a heavy accent.

 

Driving and Walking Directions

This feature presents directions in a calm way that leaves you attentive on the road. Transit and biking directions were not implemented when I tried Glass, but one can imagine how helpful both could be: I used to sketch out a map and tape it to the handlebars of my bike. Being able to have an ambient understanding of where one is and where one needs to go next will be very helpful. I use the word “ambient” because it truly is ambient. It is not obscuring your vision or taking you away from reality – it is adding an overlay of information to it.

 

Video

Video had some bugs in it still when I tried Glass, but it was a very pleasant experience to be able to quickly record something. This is the feature I think people will use least with Glass, and it is ironically the feature Glass-critics are most antagonistic towards. Recording video all day from one’s Glass makes no sense. Recording special moments does. Recording significant events such as the Boston Marathon Bombing make even more sense, especially if it helps people to gather forensic evidence. Recording all the time will quickly wear out a Glass, and worse, will require a lot of editing after the fact. The Memento Lifelogger is a much better bet for all day recording, as it clusters photos taken at frequent intervals into “events”, making it easier to search through and find the information you’d like to gather.

 

Photos

Being able to take a quick photo was wonderful. It’s not seamless as critics might think. In the same way that all features of Glass are implemented, one must wake up the display and either verbally ask Glass to take a picture or tap the side of Glass to record the image. An external observer can easily see that a Glass is on, and like one can tell if someone is on a cellphone by the way the phone is held up to the head, one can see that Glass wearer is about to take a picture. Glass is not like a Bluetooth earpiece. There are significant signals present for one to see if a Glass wearer is using the device. I think Glass critics fear that Glass users will persistently record and take photos and no one will be able to tell whether Glass is on or not. Rest assured, most Glass users will likely be using their devices for mundane everyday tasks like way-finding and reading text messages. Critics’ fear of Glass devices is akin to a person fearing that what they post on Blogger will be read by the entire Internet instead of being read by two or three of their friends and an occasional random user coming in from search.

 

Search

Glass provides one with a very well-designed and easy way to search by voice. Google results come up in a minimal format that’s easy to read on the tiny display. There’s actually an auto-summary feature that automatically summarizes the information you’ve searched for. I tried the phrase “Ok, Glass, search for squirrels”, and Glass gave me a summary of what squirrels were, along with images. It reminded me of a smarter, quicker version of Qwiki, a knowledge summary product that received quite a bit of attention in 2011 when it was first demoed at a startup conference in SF.

Rescooped by luiy from Science-fiction & innovation

4D printing: tomorrow, objects that build themselves?


There's no stopping progress! Just a few months after we told you about the 3D printer, and the hopes some are placing in that invention to usher in nothing less than a new industrial era, MIT researcher Skylar Tibbits claims to have invented... the 4D printer! A printer that adds the dimension of time to the three dimensions of space, with objects capable of evolving and adapting to their environment, Zdnet reports. An explanation follows.


Via Lockall
luiy's insight:

 

"The act of printing is no longer an end in itself but only the beginning of the creative process," comments Wired. The researcher's goal is in fact to give industrial materials the properties of the organic world, "as if programmed by an internal DNA".


The result: we may soon be able to imagine a pair of jeans that stretches or shrinks with the fluctuations of your waistline, or a skyscraper that flexes when the wind picks up... 

 

Bringing matter to life: a touch of Frankenstein, you say? Wait until you've read the details on Zdnet or watched the researcher's TED talk below!
