One of the great thrills of being a Silicon Valley startup is getting to build on new platforms and devices that stand to redefine the technology landscape. Google Glass is one of these rare opportunities. Today, we’re excited to unveil a first look at the Evernote experience on Glass.
Our current implementation focuses on two actions. First, you’ll be able to quickly capture a photo or short video and send it to your Evernote account from the Google Glass sharing menu. Second, you can choose a note from Evernote Web and send it directly into the Glass Timeline so that you have it available right in your field of view when you need it.
The app scans your face into a 3D model and lets you see how glasses would really look on your face.
The iPad captures the images and then sends them over a Wi-Fi, 3G, or 4G connection to a data center. There, the images are processed and a 3D rendering is sent back to the iPad. The round trip takes about 30 seconds. Once you scan your face, it's easy to try on new glasses.
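The capture-upload-render loop described above can be sketched as a simple asynchronous job: the device uploads the images, then polls the data center until the 3D model is ready. This is an illustrative sketch only; the endpoint names and job protocol are assumptions, not the app's actual API.

```python
# Illustrative sketch of the scan workflow: upload captured frames,
# then poll until the data center returns a 3D rendering.
# The upload/poll callables stand in for real network requests.

import time

def process_scan(upload, poll, interval=1.0, timeout=60.0):
    """Upload captured frames, then poll until the 3D render is ready."""
    job_id = upload()                 # send images to the data center
    waited = 0.0
    while waited < timeout:
        result = poll(job_id)         # ask whether rendering has finished
        if result is not None:
            return result             # 3D model, ready to display on the iPad
        time.sleep(interval)
        waited += interval
    raise TimeoutError("rendering did not finish in time")

# Toy stand-ins for the network calls:
state = {"polls": 0}

def fake_upload():
    return "job-1"

def fake_poll(job_id):
    state["polls"] += 1
    return "mesh.obj" if state["polls"] >= 3 else None

print(process_scan(fake_upload, fake_poll, interval=0.01))  # -> mesh.obj
```

The polling loop mirrors the "takes about 30 seconds" user experience: the device stays responsive while the heavy 3D reconstruction happens server-side.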
Video of a sandbox equipped with a Kinect 3D camera and a projector to project a real-time colored topographic map with contour lines onto the sand surface. The sandbox lets virtual water flow over the surface using a GPU-based simulation of the Saint-Venant set of shallow water equations.
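For reference, the Saint-Venant (shallow water) equations that such a simulation solves can be written in their standard depth-averaged conservative form, with h the water depth, (u, v) the depth-averaged velocity, g gravity, and b the elevation of the sand surface. The sandbox project's exact discretization is not specified here; this is the textbook form of the system.

```latex
\begin{aligned}
\partial_t h + \partial_x (hu) + \partial_y (hv) &= 0 \\
\partial_t (hu) + \partial_x \bigl(hu^2 + \tfrac{1}{2}gh^2\bigr) + \partial_y (huv) &= -gh\,\partial_x b \\
\partial_t (hv) + \partial_x (huv) + \partial_y \bigl(hv^2 + \tfrac{1}{2}gh^2\bigr) &= -gh\,\partial_y b
\end{aligned}
```

Because each equation is a 2D conservation law over a regular grid, the update at every cell depends only on its neighbors, which is what makes the simulation map so well onto a GPU.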
Recent research in 3D user interfaces pushes towards immersive graphics and actuated shape displays. Our work explores the hybrid of these directions, and we introduce sublimation and deposition, as metaphors for the transitions between physical and virtual states. We discuss how digital models, handles and controls can be interacted with as virtual 3D graphics or dynamic physical shapes, and how user interfaces can rapidly and fluidly switch between those representations. To explore this space, we developed two systems that integrate actuated shape displays and augmented reality (AR) for co-located physical shapes and 3D graphics. Our spatial optical see-through display provides a single user with head-tracked stereoscopic augmentation, whereas our handheld devices enable multi-user interaction through video see-through AR.
Ten years ago, smartphones were just hitting the market. These early devices evolved into today's standard tools, incorporating web browsing, photo and video capture, media playback and GPS navigation, among other features. The next evolution, according to Google, is wearable technology.
Google Glass, which should be available to purchase by the end of the year, is the latest gadget by Google, the company best known for its search engine and Android mobile operating system. The device, which resembles a pair of glasses, allows users to access simple computing tools without using their hands, according to the marketing video released by Google in February. The information is then shown on a small display, which is attached to the lenses.
The reactions to Glass, however, are mixed. Junior electrical engineering student Dylan Ross says he believes wearable technology will be the next large trend, but Google Glass might not necessarily be the product to launch that trend.
When augmented reality hits the scene, just watch companies like Mr. Green Casino embrace it like a long-lost child. Just watch other industry leaders like William Hill, Paddy Power, 888 Casino and 32Red look to employ the technology to try and knock Mr. Green Casino from its pedestal as the so-called best in the business. Just watch as gaming industry stalwarts like Caesars and MGM rush to adopt the technology as part of their online gambling initiatives that they know they need to supplement fleeting revenues in their casinos. Just watch as smaller firms begin to emerge in the online gaming space searching for state-of-the-art technologies to lure in users and line their own coffers. They too will come calling for augmented reality technologies.
Google's futuristic Glass headgear will be available before year's end. The device may well be the final step before human-machine interaction moves under our skin — but its wearers may trigger some undesired social reactions from friends and family members, and it may not go over too well at your local watering hole, either. In fact, judging from our early look, Google Glass won't be welcome in lots of places.
Google Glass consists of a small display situated on a frame that resembles eyeglasses. It is connected to a camera, microphone and bone-conducting speaker. Glass pairs with your smartphone wirelessly using Bluetooth, but also can use Wi-Fi to access the Internet. You can use your voice or your finger to get it to take photos, record video, initiate video or voice chats, send messages, search Google and translate words or phrases. Google's being a bit coy about the ship date for this groundbreaking wearable computer. However, while qualifying early adopters are paying $1,500 a pop for the privilege of owning it first, we're told that it will become more widely available by year's end — with a slightly more affordable price tag.
One of the reasons Glass will find itself unwelcome in places is that its camera sits at the wearer's eye level. It can take photos or record video without a red blinking light telling others it's happening. Anywhere cameras and other recording devices are unwelcome, Google Glass will almost certainly be unwelcome too.
For starters, you can forget about taking Glass to Las Vegas.
Let's suppose that madness lingers in all of us, that psychosis is a world of raw beauty and pure creation in which our thoughts and emotions are so powerful that they are experienced as real. We might venture into this world in search of narratives that help us make sense of things, to change realities we do not agree with. But what if we venture so deep that our neurons cannot find the way back to their usual patterns? We might stumble, stagger and panic as we are no longer able to bridge the communication gap between first-person reality and third-person reality. To get out of psychosis we need help.
What if future neurotechnology could monitor and register psychotic experiences in real time? This technology would provide a rare opportunity for interactive observation. Therapists would be able to 'see' and 'hear' the world as their patients do, in real time, allowing for more empathic treatment. For instance, upon learning that the color yellow is a trigger of fear, a therapist can take the presence of yellow into consideration when a patient shows fear during therapy for 'no apparent reason'. But knowing that the color yellow triggers fear does not mean one understands why. A person in psychosis often cannot communicate their experience, so to make sense of a psychotic narrative one might employ social group intelligence, in which participants function as second-person empathic resonators. Much as Wikipedia is written by many people, the content of a psychotic narrative could be mapped and interpreted through the informed associative skills of a social network. A new form of therapy might be devised: Cloud Therapy.
Labyrinth Psychotica is an interactive cinematic augmented-reality artistic research PhD project that investigates such a future. In this art project you are asked to wear a head-mounted display and participate in a (fictional) medical experiment in which your mind is uplinked to the mind of a girl named Jamie, who is diagnosed with schizophrenia. You are asked to make observations and report back to Jamie's therapist. Through this mind uplink you are able to see her memories, hear her voices and follow her chain of thoughts. At the same time her experiences seep into your reality: faces in your own world become distorted, and you start to engage in behaviors that your world finds quite mad... Using the sensors of a Wii system, aspects of the narrative become interactive and the system starts to control you. In the cinematic experience you are forced to play The Movie Game, are taken to The Oracle, are given power over colors and get sucked into the black-and-white world of The Labyrinth.
Labyrinth Psychotica might be considered a form of 'Digital LSD', a type of do-it-yourself psychosis kit, a tool of empathy, a prosthesis for our imagination in situations that our minds find hard to grasp. For MutaMorphosis, Labyrinth Psychotica will show a recording of the experience and present reflections on the role of the artist as an intuitive neuroscientist, with reference to the work of Prof. S. Zeki; on the artist and the participant as second-person empathic resonators, with reference to the work of F. Varela; and on the art experience as a form of 'active extended mind', with reference to the work of A. Clark and D. Chalmers. What if in the future we were able not only to follow a person's experience, but also to shift it? Would you dare to uplink your mind? Would you give consent to others entering yours?
Where Google goes, so too goes Baidu? It certainly seems the company may have been inspired by Google Glass, if Sina Tech's report that the company is working on a new wearable tech product called Baidu Eye is correct.
Apple on Tuesday won a patent for an augmented reality (AR) system that can identify objects in a live video stream and present information corresponding to said objects through a computer generated information layer overlaid on top of the real-world image.