Developers who pre-ordered Google’s Project Glass glasses for $1,500 won’t be receiving them until early 2013, but a number of lucky journalists were recently given the opportunity to take the camera-equipped, augmented-reality eyepiece for a test drive. The New York Times’ gadget kingmaker David Pogue writes that Glass has the potential to be one of those rare devices that introduce a whole new gadget category to the world,
[...] a few things are clear. The speed and power, the tiny size and weight, the clarity and effectiveness of the audio and video, are beyond anything I could have imagined. The company is expending a lot of effort on design — hardware and software — which is absolutely the right approach for something as personal as a wearable gadget
[...] it’s much too soon to predict Google Glass’s success or failure. But it’s easy to see that it has potential no other machine has ever had before — and that Google is shepherding its development in exactly the right way.
Spencer E. Ante of the Wall Street Journal is enthusiastic about the technology as well, but thinks it still needs a “killer app” in order to be accepted:
After 10 minutes of playing with the glasses [...] I could see their long-term potential. The device fit well. It was easy to snap a picture or video without taking my smartphone out of my pocket. It was cool to see the information there in front of my right eye, though a little disorienting. I kept closing my left eye, which was uncomfortable.
[...] What’s really missing, though, is a killer app that could really show the technology’s potential. As Mr. Brin tells it, the glasses are like a less obtrusive smartphone that rids the world of people looking down at their devices while walking on the street. That is great, but it doesn’t seem ambitious enough.
The ability to photograph life hands-free is already something of a “killer app” in my eyes, since it would allow people to snap photos in situations in which even GoPros would seem unwieldy. Google seems to agree, since virtually all of the Project Glass demos and promos up to this point have focused on the device’s potential as a camera.
The spread of smartphone photography shows that the general public wants as-easy-as-possible photos of their lives and memories, so the big challenge for Google is to make its product look great in all aspects: how the device looks while it’s being worn and how the images look when they pop out.
If Google can nail both of those things, Glass stands a much better chance of replacing a huge aspect of how smartphones are currently being used.
A new movie premieres in New York today and chances are none of you will ever see it.
It’s a short film titled “DVF Through Glass” and it’s video that models working for designer Diane von Furstenberg shot during New York’s Fashion Week using Google glasses they were wearing. (Google prefers to call its augmented reality devices Google Glass to distinguish them from actual glasses because they contain no glass. Got that?)
They’re the frames that caused such a stir last spring when Google unveiled them, wearable computers that can shoot videos and photos and tell you where the nearest Starbucks can be found. By wearing them as they strolled down the runway, von Furstenberg’s models were accessorized with high tech. For its part, Google managed to de-geek its invention a tad by putting it on fashion models, not to mention grab some New York media exposure before all the spotlights swung over to Apple’s iPhone 5.
As Spencer Ante pointed out in The Wall Street Journal this week, Google Glass remains a work in progress, with much of its software unfinished. It won’t be available until next year and, at $1,500 a pop, will likely be a novelty bauble for a while.
Still, it’s already the best known of what are being called “appcessories,” wearable devices that work with smartphones. Earlier this week, a potential challenger, glasses developed by a British firm called The Technology Partnership (TTP), made its debut. Unlike Google Glass, the TTP device looks like regular glasses and beams an image directly into the wearer’s eye, instead of making him or her shift focus to a tiny screen attached to the frame.
Then there’s the Pebble, a smart watch that tells you the time, but also connects wirelessly with your iPhone or Android phone to show you who’s calling, display text messages, Facebook or email alerts and let you control, from your wrist, what’s playing on your smartphone. Its inventors had hoped to raise $100,000 on Kickstarter, with the goal of selling 1,000 watches. Instead they raised $10 million and already have orders for 85,000 watches–so many that they’ve had to push back the first shipment, which was supposed to start this month.
It’s that kind of response that has a lot of people predicting that wearable computing is the next big wave, the thing that will free us from what’s been called the “black mirror” of our smartphone screens. Your phone may still be the powerful little computer you carry around, but it may never have to leave your pocket.
Or you can do without the phone altogether. London digital art director Dhani Sutanto created an enamel ring with the electronics of a transit card implanted in it. One swipe of his ring and he can ride the London subway.
His goal, he says, is to design “interactions without buttons,” to link physical items–such as a ring–to your virtual identity and preferences.
“Imagine a blind person using an ATM and fumbling with the buttons or touch screen,” Sutanto recently told an interviewer. “If they had wearable technology in the form of a ring, for example, they could approach and just touch it. The ATM would say, ‘Welcome, Mr. Smith. Here’s your £20.’”
Turn me on
Google wasn’t alone in infusing tech in Fashion Week. Microsoft was there, too, presenting a dress that tweeted. Okay, the dress, made of paper, didn’t actually tweet, but the person wearing it could, using a keyboard on its bodice, decorate the bottom of the dress with Twitter banter.
My guess–and hope–is that this won’t catch on and we will never have to live in a world where people wear their tweets on their sleeves. But another breakthrough in wearable tech a few months ago could dramatically change what we expect our clothes to do for us.
Scientists at the University of Exeter in the U.K. have created a substance that can be woven into a fabric to produce the lightest, most transparent and flexible material ever made that conducts electricity. One day, they say, we could be walking around in clothing that carries a charge.
To me, this would not seem a good fashion choice if there’s even a chance of thunder and lightning. But the researchers at Exeter have happier thoughts. They talk of shirts that turn into MP3 players and of charging your phone with your pants.
Which could give new meaning to “wardrobe malfunction.”
Here are other recent developments in wearable tech:
You’ve got the power: A British professor is trying to produce clothing made with materials capable of generating electricity from either the warmth or movement of the human body.

If you must talk in public, do it with style: Nothing stylish about walking around wearing a Bluetooth headset. But now, at least for women, there are other options, such as a pendant that works like a headset, but looks like a necklace.

One device to rule them all: Scientists at Dartmouth are developing a device worn like a bracelet that would authenticate a user’s identity and connect any other medical devices he or she has had implanted or is wearing. ...
When doing certain types of welding, special helmets with dark lens shades should be used to protect the eyes from the extremely bright welding arc and sparks. The masks help filter out light, protecting your eyes, but at the same time make it hard to see the details in what you’re doing. In other words, the dynamic range is too high, and wearers are unable to see both the arc and the objects they’re welding.
A group of researchers in the EyeTap Personal Imaging Lab at the University of Toronto have a solution, and it involves cameras. They’ve created a “quantigraphic camera” that can give people enhanced vision. Instead of being tuned to one particular brightness, it attempts to make everything in front of the wearer visible by using ultra high dynamic range imaging.
For example, a welder using the helmet would be able to see both the details of the bright welding arc and the details on the metal he or she is working on.
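The idea behind this kind of fusion can be illustrated with a toy sketch (my own illustration, not the EyeTap lab's actual code): take a short and a long exposure of the same scene, weight each pixel by how well-exposed it is (closest to mid-gray), and blend. The bright arc keeps its detail from the short exposure, while the dark metal keeps its detail from the long one.

```python
import numpy as np

def fuse_exposures(images, sigma=0.2):
    """Toy exposure fusion: weight each pixel by how well-exposed it is
    (a Gaussian centered on mid-gray 0.5), then blend the stack.
    Expects grayscale images with values in [0, 1]."""
    stack = np.stack([img.astype(float) for img in images])
    # Well-exposedness weight: highest near 0.5, lowest near 0.0 or 1.0
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True)  # normalize per pixel
    return (weights * stack).sum(axis=0)

# A tiny 2x2 "scene": the arc is blown out in the long exposure but
# visible in the short one; the metal is visible only in the long one.
short = np.array([[0.55, 0.02],
                  [0.03, 0.01]])   # arc detail kept, metal too dark
long_ = np.array([[1.00, 0.45],
                  [0.50, 0.40]])   # arc clipped, metal detail kept
fused = fuse_exposures([short, long_])
```

In the fused result, the arc pixel stays near the short exposure's well-exposed value instead of clipping to white, and the dark pixels track the long exposure. Real HDR pipelines (and presumably the EyeTap system) are far more sophisticated, but the weighting principle is the same.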
Here’s a slightly technical video explaining the project:
Kintinuous: Kinect creates full 3D models (via iProgrammer). The system is being designed to allow robots to build models of their surroundings, a task known as Simultaneous Localization and Mapping, or SLAM.
Projected augmented-reality prototype by RTT AG (www.rtt.ag) and Extend3D (www.extend3d.de), shown at the "RTT Excite" conference in Detroit, MI, May 31st + June...
While there's a growing number of TV show discovery services using mobile devices (aka the second screen) to enhance the viewing experience, few have the name recognition of TV Guide.
Today, Google’s Search blog announced that the company has started implementing technology that will allow you to search for your photos based on what they contain visually, even if there’s not a tag in sight.
By David Pogue, New York Times, September 13, 2012
New gadgets — I mean whole new gadget categories — don’t come along very often. The iPhone was one recent example. You could argue that the iPad was another. But if there’s anything at all as different and bold on the horizon, surely it’s Google Glass.

That, of course, is Google’s prototype of a device you wear on your face. Google doesn’t like the term “glasses,” because there aren’t any lenses. (The Glass team, part of Google’s experimental labs, also doesn’t like terms like “augmented reality” or “wearable computer,” which both have certain baggage.)

Instead, Glass looks like only the headband of a pair of glasses — the part that hooks on your ears and lies along your eyebrow line — with a small, transparent block positioned above and to the right of your right eye. That, of course, is a screen, and the Google Glass is actually a fairly full-blown computer. Or maybe like a smartphone that you never have to take out of your pocket.

This idea got a lot of people excited when Nick Bilton wrote about the glasses in February in The New York Times. Google first demonstrated it in April in a video. In May, at Google’s I/O conference, Glass got some more play as attendees watched a live video feed from the Glass as a sky diver leapt from a plane and parachuted onto the roof of the conference building. But so far, very few non-Googlers have been allowed to try them on.

Last week, I got a chance to try on a pair. I’m hosting a PBS series called “Nova ScienceNow” (it premieres Oct. 10), and one of the episodes is about the future of tech. Of course, projecting what’s yet to come in consumer tech is nearly impossible, but Google Glass seemed like a perfect example of a breakthrough on the verge. So last week the Nova crew and I met with Babak Parviz, head of the Glass project, to discuss and try out the prototypes.

Now, Google emphasized — and so do I — that Google Glass is still at a very, very early stage.
Lots of factors still haven’t been finalized, including what Glass will do, what the interface will look like, how it will work, and so on. Google doesn’t want to get the public excited about some feature that may not materialize in the final version. (At the moment, Google is planning to offer the prototypes to developers next year — for $1,500 — in anticipation of selling Glass to the public in, perhaps, 2014.)

When you actually handle these things, you can’t believe how little they weigh. Less than a pair of sunglasses, in my estimation. Glass is an absolutely astonishing feat of miniaturization and integration.

Inside the right earpiece — that is, the horizontal support that goes over your ear — Google has packed memory, a processor, a camera, speaker and microphone, Bluetooth and Wi-Fi antennas, accelerometer, gyroscope, compass and a battery. All inside the earpiece. Google has said that eventually, Glass will have a cellular radio, so it can get online; at this point, it hooks up wirelessly with your phone for an online connection.

And the mind-blowing thing is, this slim thing is the prototype. It’s only going to get smaller in future generations. “This is the bulkiest version of Glass we’ll ever make,” Babak told me.

The biggest triumph — and to me, the biggest surprise — is that the tiny screen is completely invisible when you’re talking or driving or reading. You just forget about it completely. There’s nothing at all between your eyes and whatever, or whomever, you’re looking at.

And yet when you do focus on the screen, shifting your gaze up and to the right, that tiny half-inch display is surprisingly immersive. It’s as though you’re looking at a big laptop screen or something. (Even though I usually need reading glasses for close-up material, this very close-up display seemed to float far enough away that I didn’t need them. Because, yeah — wearing glasses under Glass might look weird.)

The hardware breakthrough, in other words, is there.
Google is proceeding carefully to make sure it gets the rest of it as right as possible on the first try. But the potential is already amazing.

Mr. Parviz stressed that Glass is designed for two primary purposes — sharing and instant access to information — hands-free, without having to pull anything out of your pocket.

You can control the software by swiping a finger on that right earpiece in different directions; it’s a touchpad. Your swipes could guide you through simple menus. In various presentations, Google has proposed icons for things like taking a picture, recording video, making a phone call, navigating on Google Maps, checking your calendar and so on. A tap selects the option you want. In recent demonstrations, Google has also shown that you can use speech recognition to control Glass. You say “O.K., Glass” to call up the menu.

To illustrate how Glass might change the game for sharing your life with others, I tried a demo in which a photo appeared — a jungly scene with a wooden footbridge just in front of me. The theme from “Jurassic Park” played crisply in my right ear. (Cute, real cute.) But as I looked left, right, up or down, my view changed accordingly, as though I were wearing one of those old virtual-reality headsets. The tracking of my head angle and the response to the immersive photo was incredibly crisp and accurate. By swiping my finger on the touchpad, I could change to other scenes.

Now, there’s a lot of road between today’s prototype and the day when Google Glass will be on everyone’s faces. Google will have to nail down the design — and hammer down the price. Issues of privacy and distraction will have to be ironed out (although I’m not nearly as worried about distraction as I was before I tried them on). Glasses wearers may have to wait until Glass can be incorporated into actual glasses. We may be waiting, too, for that one overwhelmingly compelling feature, something that you can’t do with your phone (beyond making it hands-free).
We’ve seen that the masses can’t even be bothered to put on special glasses to watch 3-D TV; it may take some unimagined killer app to convince them to wear Google Glass headsets all day.

But already, a few things are clear. The speed and power, the tiny size and weight, the clarity and effectiveness of the audio and video, are beyond anything I could have imagined. The company is expending a lot of effort on design — hardware and software — which is absolutely the right approach for something as personal as a wearable gadget. And even in this early prototype, you already sense that Google is sweating over the clarity and simplicity of the experience — also a smart approach.

In short, it’s much too soon to predict Google Glass’s success or failure. But it’s easy to see that it has potential no other machine has ever had before — and that Google is shepherding its development in exactly the right way.
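The swipe-and-tap menu interaction Pogue describes can be sketched as a tiny state machine. This is a toy illustration only; the menu items and gesture names are my assumptions, not Google's actual Glass software.

```python
# Illustrative menu items, based on the icons Google has proposed in demos
MENU = ["take picture", "record video", "make call", "navigate", "calendar"]

class TouchpadMenu:
    """Toy model of a gesture-driven menu: swipes move the highlight
    along the earpiece touchpad, a tap selects the highlighted item."""

    def __init__(self, items):
        self.items = items
        self.index = 0

    def swipe(self, direction):
        # Swiping forward or back moves through the menu, wrapping around.
        if direction == "forward":
            self.index = (self.index + 1) % len(self.items)
        elif direction == "back":
            self.index = (self.index - 1) % len(self.items)
        return self.items[self.index]

    def tap(self):
        # A tap selects the currently highlighted option.
        return f"selected: {self.items[self.index]}"

menu = TouchpadMenu(MENU)
menu.swipe("forward")   # highlight moves to "record video"
menu.swipe("forward")   # highlight moves to "make call"
print(menu.tap())       # prints: selected: make call
```

The point of the design, as the article notes, is that two primitive gestures (swipe and tap) are enough to drive the whole interface hands-free.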
Over the last decade, computers have become better at seeing faces. Software can tell if a camera has a face in its frame of vision, and law enforcement has been testing facial-recognition programs that can supposedly pick out suspects in a crowd. That's prompted an arms race between the people who build facial-recognition systems — and those seeking ways to defeat them.
Facial-recognition software is becoming a bigger issue for privacy advocates as well. Surveillance cameras are already ubiquitous in the U.K., are showing up in more places in the U.S. and may increasingly be connected to facial-recognition systems.
"I went to a Kinko's a while ago," said Alex Kilpatrick, chief technology officer and co-founder of Tactical Information Systems, a company in Austin, Texas, that sells facial-recognition software to law enforcement and the military. "I saw three cameras just while I was standing in line. You see them in all kinds of places now."
The American Civil Liberties Union (ACLU) has said it is deeply concerned with the way facial-recognition systems are used. Police use such systems to flag criminals in public places, the ACLU says, but it argues that the Transportation Security Administration's (TSA) use of the technology in Boston's Logan Airport and in T.F. Green Airport near Providence, R.I., doesn't seem to have helped catch any criminals or terrorists.
How do you get to know a protein? How about from the inside out? Ask chemistry professor James Hinton: "It’s really important that scientists as well as students are able to touch, feel, see … embrace, if you like, these protein structures." For decades, with funding from the National Science Foundation (NSF), Hinton has used nuclear magnetic resonance (NMR) to look at protein structure and function. But he wanted to find a way to educate and engage students about his discoveries.
The picture above shows an example of the interactive visualization of proteins from the Protein Data Bank (PDB), using PDB browser software on the C-Wall (virtual reality wall) at the California Institute for Telecommunications and Information Technology (Calit2) at the University of California, San Diego. The work was performed by Jürgen P. Schulze, project scientist, in collaboration with Jeff Milton, Philip Weber and Professor Philip Bourne of the University of California, San Diego. The software supports collaborative viewing of proteins at multiple sites on the Internet.
Since 2006, we've had textured 3D buildings in Google Earth, and we're excited to announce that we'll begin adding 3D models to entire metropolitan areas to ...
Perhaps the new Volvo has so much hidden technology, or maybe they just wanted to create some extra buzz at the show; either way, the Volvo X-Ray App released at the Geneva Auto Show is a pretty cool way for potential customers...