High street shops are well-established online these days and provide new opportunities for interaction between shop and shopper. Consumers have become accustomed to shopping using a range of devices, and the immense popularity of smartphones and mobile devices has led to the rise of mobile or m-retailing, with new communication and distribution channels created with these in mind. Perhaps this mix of the real and online worlds is a helpful precursor for what may be the “next big thing”: virtual reality shopping.
Virtual reality (VR) experiences are typically provided through wearable headgear or goggles that block out the real world and immerse the user in a virtual one. This is distinguished from augmented reality (AR), where layers of digital content are overlaid on the real world, providing access to both: the digital information displayed on the visor of Google Glass, for example.
Google’s big bet on technology for cities is finally starting to make sense.
Earlier this month, when Larry Page announced that Google was launching a new startup called Sidewalk Labs to develop and incubate technology for cities, many wondered what the company wanted with an industry that is so much less sexy than any of its other so-called “moonshot” projects, like developing the self-driving car or, you know, curing death.
Now, that fuzzy logic is coming into focus. Today, Sidewalk Labs announced it would be leading the acquisition of two companies behind New York City’s LinkNYC initiative, an ongoing plan to convert old pay phones into free public Wi-Fi hubs. Through the acquisition, Sidewalk Labs is merging the two companies—Control Group, which provides the interface for the new hubs, and Titan, which is overseeing the advertising that will pay for the project. The new venture, aptly named Intersection, will seek to bring free public Wi-Fi to cities around the world using different pieces of urban infrastructure, from pay phones to bus stops.
Jaguar has made a real effort to push the envelope recently, dropping its stuffy old English tweed jacket for a sharply cut suit to compete with Germany's finest offerings. A big part of this transformation has been a focus on innovative safety technologies, like windscreen map overlays and talking potholes. This time, Jaguar has turned to mind-reading tech to detect distracted or sleepy drivers.
After coordinating scientific research for the United States during World War II, including initiating the Manhattan Project, the engineer Vannevar Bush set his sights on a pacifist instrument for world knowledge.
In the July 1945 issue of The Atlantic, Bush outlined his vision for a head-mounted camera attached to “a pair of ordinary glasses” that would record comments, photographs, and data from scientific experiments: “One can now picture a future investigator in his laboratory. His hands are free, and he is not anchored.” His “camera … of the future,” no “larger than a walnut,” worn on “a pair of ordinary glasses … where it is out of the way of ordinary visions” was in many ways a forerunner of today’s augmented-reality devices.
For decades we’ve been inching closer to popular augmented-reality technologies to enhance the physical world—each new iteration promising to turn the entire world into a computing interface—but only in the past couple of years have headsets no longer needed to be enormous, bulky, and expensive, and superimposed images advanced beyond thin lines.
Scientists have created a synchronous computer that operates using the unique physics of moving water droplets.
The computer is nearly a decade in the making, incubated from an idea that struck Manu Prakash, an assistant professor of bioengineering at Stanford University, when he was a graduate student. The work combines his expertise in manipulating droplet fluid dynamics with a fundamental element of computer science—an operating clock.
“In this work, we finally demonstrate a synchronous, universal droplet logic and control,” Prakash says.
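The essential idea of a synchronous computer, whether its bits are electrons or droplets, is that every element advances in lockstep with a global clock. The toy sketch below is our own illustration of that principle, not the ferrofluid physics Prakash's team actually uses: droplet "bits" shift one pipeline stage per clock tick, so gate outputs stay aligned to the clock.

```python
# Toy sketch (not the actual ferrofluid physics): droplets as bits that
# advance one stage per clock tick, so every gate's output is aligned
# to the same global clock -- the key idea behind synchronous logic.

def clock_tick(stages, new_input):
    """Shift every droplet-bit one stage forward on a clock edge."""
    return [new_input] + stages[:-1]

def and_gate(a, b):
    # In the droplet computer, gates arise from droplet interactions;
    # here we just model the Boolean result.
    return a and b

# Feed two input streams through a 3-stage pipeline and AND the outputs.
pipe_a = [0, 0, 0]
pipe_b = [0, 0, 0]
outputs = []
for bit_a, bit_b in zip([1, 0, 1, 1], [1, 1, 0, 1]):
    pipe_a = clock_tick(pipe_a, bit_a)
    pipe_b = clock_tick(pipe_b, bit_b)
    outputs.append(and_gate(pipe_a[-1], pipe_b[-1]))
```

Because the pipeline is three stages deep, each AND result appears three ticks after its inputs enter, exactly the kind of predictable timing a clock buys you.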
Neuroscientists still do not understand how the activities of individual brain cells translate to higher cognitive powers such as perception and emotion. The problem has spurred a hunt for technologies that will allow scientists to study thousands, or ideally millions, of neurons at once, but the use of brain implants is currently limited by several disadvantages. So far, even the best technologies have been composed of relatively rigid electronics that act like sandpaper on delicate neurons. They also struggle to track the same neuron over a long period, because individual cells move when an animal breathes or its heart beats.
The need to mend broken hearts has never been greater. In the USA alone, around 610,000 people die of heart disease each year. A significant number of those deaths could potentially have been prevented with a heart transplant but, unfortunately, there are simply too few hearts available.
In 1967 the South African surgeon Christiaan Barnard performed the world’s first human heart transplant in Cape Town. It seemed like a starting gun had gone off; soon doctors all around the world were transplanting hearts.
The problem was that every single recipient died within a year of the operation. The patients’ immune systems were rejecting the foreign tissue. To overcome this, patients were given drugs to suppress their immune system. But, in a way, these early immunosuppressants were too effective: they weakened the immune system so much that the patients would eventually die of an infection. It seemed like medicine was back to square one.
The way your brain responds to certain words could be used to replace passwords, according to a study by researchers from Binghamton University, published in academic journal Neurocomputing.
The psychologists recorded EEG signals from volunteers reading a list of acronyms, focusing on the part of the brain associated with reading and recognizing words.
Participants’ “event-related potential” signals reacted differently to each acronym, enough that a computer system was able to identify each volunteer with 94 percent accuracy, using only three electrodes.
The results suggest that brainwaves could be used by security systems to verify a person’s identity.
Better than fingerprints or retinal patterns
According to Sarah Laszlo, assistant professor of psychology and linguistics at Binghamton University and co-author of the “Brainprint” paper, brain biometrics are appealing because they are cancellable (can be reset) and cannot be stolen by malicious means, such as copying a fingerprint.
“If someone’s fingerprint is stolen, that person can’t just grow a new finger to replace the compromised fingerprint — the fingerprint for that person is compromised forever. Fingerprints are ‘non-cancellable.’ Brainprints, on the other hand, are potentially cancellable.
So, in the unlikely event that attackers were actually able to steal a brainprint from an authorized user, the authorized user could then ‘reset’ their brainprint,” Laszlo said, meaning the user could simply record the EEG pattern associated with another word or phrase.
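The identification step in a system like this reduces to matching a fresh recording against profiles enrolled earlier. Below is a minimal sketch of that matching idea using a nearest-centroid rule; the three-value feature vectors stand in for the study's three electrodes, but the numbers and the classifier choice are illustrative assumptions, not the paper's actual method.

```python
# Toy sketch of biometric identification from ERP-like features.
# Each enrolled person is represented by the average of made-up
# 3-electrode feature vectors; a new recording is attributed to
# whichever enrolled user has the nearest centroid.

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance_sq(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def identify(sample, enrolled):
    """Return the enrolled user whose centroid is closest to the sample."""
    return min(enrolled, key=lambda user: distance_sq(sample, enrolled[user]))

# Enrollment: averaged responses per user (values are illustrative only).
enrolled = {
    "alice": centroid([[2.1, -0.5, 1.0], [1.9, -0.4, 1.2]]),
    "bob":   centroid([[-1.0, 0.8, 0.3], [-1.2, 0.9, 0.1]]),
}

match = identify([2.0, -0.6, 1.1], enrolled)
```

The "cancellable" property Laszlo describes maps onto this sketch naturally: resetting a brainprint just means re-enrolling the user with responses to a different word or phrase.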
What if your handheld tools knew what needs to be done and were even able to guide and help you complete jobs that require skills? University of Bristol researchers are finding out by building and testing intelligent handheld robots.
Think of them as smart power tools that “know” what they’re doing — and could even help you use them.
The robot tools would have three levels of autonomy, said Walterio Mayol-Cuevas, Reader in Robotics Computer Vision and Mobile Systems: no autonomy; semi-autonomous, where the robot advises the user but does not act; and fully autonomous, where the robot advises and acts, even correcting or refusing to perform incorrect user actions.
The Bristol team has experimented with tasks such as picking and dropping different objects to form tile patterns and aiming in 3D for simulated painting.
The robot designs are open source and available on the university’s HandheldRobotics page.
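The three autonomy levels Mayol-Cuevas describes could be gated in software roughly as follows; the enum names and the `handle_action` logic here are our own illustrative assumptions, not the project's actual API.

```python
# Hedged sketch of the three autonomy levels for a handheld robot tool.
from enum import Enum

class Autonomy(Enum):
    NONE = 0   # tool does nothing on its own
    SEMI = 1   # advises the user but does not act
    FULL = 2   # advises and acts, may correct or refuse bad actions

def handle_action(level, action_is_correct):
    """Decide what the handheld robot does with a proposed user action."""
    if level is Autonomy.NONE:
        return "execute"                # user is fully in charge
    if level is Autonomy.SEMI:
        return "execute with advice"    # warn, but never intervene
    # FULL autonomy: carry out correct actions, refuse incorrect ones
    return "execute" if action_is_correct else "refuse"
```

The interesting design point is the last branch: a fully autonomous tool is allowed to veto its user, which is what separates it from an ordinary power tool with sensors.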
Demonstrations of augmented-reality displays typically involve tricking you into seeing animated content such as monsters and robots that aren’t really there. Microsoft wants its forthcoming HoloLens headset to mess with reality more believably. It has developed a way to make you see photorealistic 3-D people that fit in with the real world.
With this technology, you could watch an acrobat tumble across your front room or witness your niece take some of her first steps. You could walk around the imaginary people just as if they were real, your viewpoint changing seamlessly as if they were actually there. A sense of touch is just about the only thing missing.
That experience is possible because Microsoft has built a kind of holographic TV studio at its headquarters in Redmond, Washington. Roughly 100 cameras capture a performance from many different angles. Software uses the different viewpoints to create a highly accurate 3-D model of the person performing, resulting in a photo-real appearance.
The more traditional approach of using computer animation can’t compare, according to Steve Sullivan, who works on the project at Microsoft. He demonstrated what Microsoft calls “video holograms” at the LDV Vision Summit, an event about image-processing technology, in New York on Tuesday. More details of the technology will be released this summer.
“There’s something magical about it being real people and motion,” he said. “If you have a HoloLens, you really feel these performances are in your world.”
Microsoft is working on making it practical and cheap enough for other companies to record content in this form. It might one day be possible to visit a local studio and record a 3-D snapshot of a child at a particular point in life, said Sullivan.
Biomedical engineering company Össur has announced the successful development of a thought-controlled bionic prosthetic leg. The new technology uses implanted sensors sending wireless signals to the artificial limb's built-in computer, enabling subconscious, real-time control and faster, more natural responses and movements.
Prosthetics controlled by muscle impulses have been around since the late 1960s, but the technology has severe limitations. It works by laying sensors on the skin of the vestigial limb, which pick up electrical impulses that control, for example, an artificial arm. The trouble is, these sensors pick up electric impulses from more than one muscle. This degrades performance, requires a lot of practice to operate properly, and makes the prosthesis slow, imprecise, and frustrating to use.
One answer to this is to use more precise sensor arrangements that make the limb, for all practical purposes, mind-controlled. The method is already used with great success on upper limbs and even artificial hands, but, paradoxically, it's been less successful with lower limbs.
The ancient skill of creating and performing spoken rhyme is thriving today because of the inexorable rise in the popularity of rapping. This art form is distinct from ordinary spoken poetry because it is performed to a beat, often with background music.
And the performers have excelled. Adam Bradley, a professor of English at the University of Colorado, has described it in glowing terms. Rapping, he says, crafts “intricate structures of sound and rhyme, creating some of the most scrupulously formal poetry composed today.”
The highly structured nature of rap makes it particularly amenable to computer analysis. And that raises an interesting question: if computers can analyze rap lyrics, can they also generate them?
Today, we get an affirmative answer thanks to the work of Eric Malmi at the University of Aalto in Finland and a few pals. These guys have trained a machine-learning algorithm to recognize the salient features of a few lines of rap and then choose another line that rhymes in the same way on the same topic. The result is an algorithm that produces rap lyrics that rival human-generated ones for their complexity of rhyme.
Various forms of rhyme crop up in rap but the most common, and the one that helps distinguish it from other forms of poetry, is called assonance rhyme. This is the repetition of similar vowel sounds, such as in the words “crazy” and “baby,” which share two similar vowel sounds. (That’s different from consonance, which uses similar consonant sounds, such as in “pitter patter,” and different from perfect rhyme, where words share the same ending sound, such as “slang” and “gang.”)
Because of its prevalence in rap, Malmi and co focus exclusively on the way assonance appears in rap lyrics. But they also assume a highly structured form of verse consisting of 16 lines, each of which equals one musical bar and so must be made up of four beats. The lines typically, but not necessarily, rhyme at the end.
To train their machine learning algorithm, they begin with a database of over 10,000 songs from more than 100 rap artists.
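Measuring assonance comes down to comparing the vowel sounds at the ends of two lines. Malmi and co work on phonetic transcriptions of the lyrics; the sketch below is a much cruder stand-in that treats vowel letters (including “y”) as vowel sounds and counts how many trailing vowels match.

```python
# Rough sketch of assonance scoring: extract the vowel-letter sequence
# of each line and count how many trailing vowels agree.  The actual
# system works on phonetic transcriptions; treating vowel letters as
# vowel sounds is a crude approximation for illustration.

VOWELS = set("aeiouy")

def vowel_seq(text):
    return [c for c in text.lower() if c in VOWELS]

def assonance_length(line_a, line_b):
    """Number of matching vowel sounds at the ends of the two lines."""
    a, b = vowel_seq(line_a), vowel_seq(line_b)
    count = 0
    while count < min(len(a), len(b)) and a[-1 - count] == b[-1 - count]:
        count += 1
    return count
```

On the article's own examples, “crazy” and “baby” score 2 matching trailing vowels, while the perfect rhyme “slang”/“gang” scores only 1, which is why assonance length is a useful signal for ranking candidate next lines.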
Over the course of 1967 and 1968, Argentine writer Jorge Luis Borges delivered a series of lectures at Harvard about the nature of human language. In one of these lectures, he spent a good deal of time ruminating on the importance of metaphor and its limitless possibilities in language. Borges theorized that despite these boundless possibilities for poetic language, there were nevertheless distinct patterns of metaphors that kept cropping up—a favorite example of his being the metaphorical equivalence of "stars" and "eyes."
It was this lecture series given by the surrealist writer that inspired Poetry for Robots, a project launched last week through a partnership between Neologic, Webvisions, and The Center for Science and the Imagination at Arizona State University. The project seeks to put Borges’ theory to the test, asking on their website whether it is possible to teach machines the poetic quality of human language.
Three-year clinical trial results of the Argus II retinal implant (“bionic eye”) have found that the device restored some visual function and quality of life for 30 people blinded by retinitis pigmentosa, a rare degenerative eye disease. The findings, published in an open-access paper in the journal Ophthalmology, also showed long-term efficacy, safety and reliability for the device.
Retinitis pigmentosa is an incurable disease that affects about 1 in 4,000 Americans and causes slow vision loss that eventually leads to blindness.
Using the Argus II, patients are able to see patterns of light that the brain learns to interpret as an image. The system uses a miniature video camera connected to the glasses to send visual information to a small computerized video processing unit and battery that can be stored in a pocket. This computer turns the image into electronic signals that are sent wirelessly to an electronic device surgically implanted on the retina in the eye.
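The core of that processing step is reducing a full camera frame to a coarse grid of stimulation intensities, one value per electrode. The sketch below shows the idea with simple block averaging; the 6×10 grid dimensions and the averaging scheme are our own simplification, not the device's actual algorithm.

```python
# Illustrative sketch: reduce a grayscale camera frame to a coarse
# grid of per-electrode stimulation intensities by averaging pixel
# blocks.  The 6x10 layout and block averaging are simplifications
# for illustration, not the Argus II's real processing pipeline.

def downsample(frame, rows=6, cols=10):
    """Average pixel blocks of a grayscale frame into a rows x cols grid."""
    h, w = len(frame), len(frame[0])
    bh, bw = h // rows, w // cols
    grid = []
    for r in range(rows):
        row = []
        for c in range(cols):
            block = [frame[y][x]
                     for y in range(r * bh, (r + 1) * bh)
                     for x in range(c * bw, (c + 1) * bw)]
            row.append(sum(block) / len(block))
        grid.append(row)
    return grid

# A 60x100 synthetic frame: bright on the left half, dark on the right.
frame = [[255 if x < 50 else 0 for x in range(100)] for y in range(60)]
levels = downsample(frame)
```

Even this toy version shows why patients see coarse patterns of light rather than sharp images: a whole block of pixels collapses into a single stimulation level.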
The Argus II received FDA approval as a Humanitarian Use Device (HUD) in 2013 and in Europe Argus II received the CE Mark in 2011 and was launched commercially in Italy, Germany, France, Spain, The Netherlands, Switzerland and England.
A micro-device lined with living human cells able to mimic the function of living organs has been declared the overall winner of the Design Museum's Design of the Year Award for 2015.
Something of a departure from last year's winner, the Heydar Aliyev Center, by Zaha Hadid, Human Organs-on-Chips is the competition's first winner from the field of medicine in its eight-year history. Designed by Donald Ingber and Dan Dongeun Huh at Harvard University’s Wyss Institute, the Human Organs-on-Chips project comprises a series of chips that mimic real human organs, including a lung-on-a-chip, and gut-on-a-chip.
As we previously reported, the research could prove beneficial in evaluating the safety and efficacy of potential medical treatments, in addition to lessening demands on animal testing, accelerating drug discovery, and decreasing development and treatment costs.
University of Cincinnati and university and industry partners have developed a technology for tunable window tinting that dynamically adapts for brightness, color temperatures (such as blueish or yellowish light), and opacity (to provide for privacy while allowing 90 percent or more of the light in), adjustable by the user.
According to the researchers, these “smart windows” would be simple to manufacture, making them affordable for business and home use. The materials can also be applied to already existing windows, using a roll-on coating consisting of a honeycomb of electrodes. The tinting can be adjusted simultaneously for brightness/opacity and light color and allows for blocking infrared radiation to reduce incoming heat in the summer.
Science fiction has a long tradition of pitting artificial intelligence against humanity in a struggle for dominance. Ray Kurzweil, a noted futurist and inventor, envisions a more co-operative future. He says the human brain will soon merge with computer networks to form a hybrid artificial intelligence.
MakerBot's 3-D printers will soon be able to produce items that look like bronze, limestone, and wood, thanks to a new line of plastic-based composite materials shipping later this year. But the launch may be too little, too late: Entrepreneurs and artists interested in working with metal and wood are already embracing desktop milling machines that can handle the real deal.
The calculation is simple: Buy a MakerBot Replicator, the leading desktop 3-D printer, for $2,889, and you can produce plastic prototypes or the kind of trinkets that you might find in a Happy Meal. Buy a small-scale milling machine like the Othermill, which retails for $2,199, and you can make jewelry and mechanical parts out of everything from aluminum to walnut.
"Once you can cut metal, you can make things that last," says Danielle Applestone, chief executive of Other Machine Co. "For the first couple of months that I was working here, I was scared of cutting with metal. It was louder, I was worried I was going to break the tool. But as soon as I jumped in, it quickly became like wax to me."
"Metal is power, it really is," she says. "You don’t go back."
Nothing kills the experience of a five-star dinner like the shrieks of a crying baby at a nearby table. But what if, rather than wincing in frustration, you could simply turn down the baby’s volume?
That’s the promise behind Doppler Labs’ latest invention: earbuds that serve as your personal audio mixer to control real-time sounds in your environment. Adjust the bass levels at a live concert. Turn your coworker’s voice up — or down. Add reverb in a small room to get a concert hall effect. Doppler’s Here buds, unveiled on Kickstarter this week, promise to allow you to choose what you hear, and how.

Filtering Sound
Here buds rely on signal processing algorithms that target certain sound frequencies as they enter the headphones. An internal microphone processes the sound waves, then a miniature speaker blasts additional waves to add, delete or alter the sound based on the chosen algorithm. Finally, a second microphone picks up the remixed sound wave and sends it into your ears.
This entire process happens in less than 30 microseconds.
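The "blast additional waves to delete sound" step is classic destructive interference: emit the inverse of the unwanted waveform and the two sum to silence. The sketch below demonstrates that principle on a fixed 200 Hz tone; real earbuds do this adaptively per frequency band with DSP filters, so the fixed tone and sample rate here are illustrative assumptions.

```python
# Toy illustration of cancel-by-adding-waves: the speaker emits the
# inverse of an unwanted sinusoid so incoming + anti-wave sum to
# (near) silence.  A fixed 200 Hz tone stands in for the adaptive,
# per-band filtering a real product would use.
import math

RATE = 8000          # samples per second
FREQ = 200.0         # unwanted tone, Hz

def tone(freq, n, rate=RATE, gain=1.0):
    return [gain * math.sin(2 * math.pi * freq * i / rate) for i in range(n)]

incoming = tone(FREQ, 400)              # sound entering the earbud
anti = tone(FREQ, 400, gain=-1.0)       # speaker's inverted copy
remixed = [a + b for a, b in zip(incoming, anti)]

peak = max(abs(s) for s in remixed)     # residual after cancellation
```

The hard part in practice isn't the arithmetic but the deadline: the anti-wave must be computed and emitted before the original sound reaches the eardrum, which is why the sub-30-microsecond latency figure matters.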
Her name is Amelia, and she is the complete package: smart, sophisticated, industrious and loyal. No wonder her boss, Chetan Dube, can’t get her out of his head.
“My wife is convinced I’m having an affair with Amelia,” Dube says, leaning forward conspiratorially. “I have a great deal of passion and infatuation with her.”
He’s not alone. Amelia beguiles everyone she meets, and those in the know can’t stop buzzing about her. The blue-eyed blonde’s star is rising so fast that if she were a Hollywood ingénue or fashion model, the tabloids would proclaim her an “It” girl, but the tag doesn’t really apply. Amelia is more of an IT girl, you see. In fact, she’s all IT.
Amelia is an artificial intelligence platform created by Dube’s managed IT services firm IPsoft, a virtual agent avatar poised to redefine how enterprises operate by automating and enhancing a wide range of business processes. The product of an obsessive and still-ongoing 16-year developmental cycle, she—yes, everyone at IPsoft speaks about Amelia using feminine pronouns—leverages cognitive technologies to interface with consumers and colleagues in astoundingly human terms, parsing questions, analyzing intent and even sensing emotions to resolve issues more efficiently and effectively than flesh-and-blood customer service representatives.
Since the early 20th century, an unheralded star of genetics research has been a small and essentially very annoying creature: the fruit fly.
Underlying every significant discovery from fruit fly research—and there have been many, relating to almost every aspect of our own biology—is daily, monotonous time spent by scientists toiling over plastic dishes of conked-out flies.
Now a team led by Mark Schnitzer, an associate professor of biology and of applied physics at Stanford University, has introduced a solution to the tedium: a robot that can visually inspect awake flies and, even better, carry out behavioral experiments that were impossible with anesthetized flies.
Have you ever wanted to upgrade your body? Make improvements to all those physical limitations? Build a better version of yourself?
How about augmenting your sex life?
Biohacking is a radical new scientific field that is as groundbreaking as it is notorious. Approaching the human body with what is described as a hacker ethic, it encompasses a wide variety of different practices, ranging from cybernetic augmentation to gene sequencing and biological manipulation.
Referring to themselves as Grinders, this community of fringe innovators aims to become the world’s first cyborgs, one implant at a time. They advocate an open-source upgrade culture, operating within a strange intersection of body modification, hardware fetishism, and home surgery.
And some of the advances they’re starting to make are truly incredible.
Basement enthusiasts are eagerly embracing the trend of magnetic implants. Eventually, we could all be seeing with expanded sensory devices, or streaming first-person POV footage straight to our computers.
Employing state-of-the-art materials and production techniques, Ocumetics™ Technology Corporation is pleased to announce the development of one of the world’s most advanced intraocular lenses, one that is capable of restoring quality vision at all distances, without glasses, contact lenses or corneal refractive procedures, and without the vision problems that have plagued current accommodative and multifocal intraocular lens designs.
Cataract surgery is the most common and successful procedure in medicine. It is a painless and gentle procedure. Utilizing standard surgical techniques, augmented by the accuracy of femtosecond laser incision technology, ophthalmic surgeons will be able to implant the Ocumetics™ Bionic Lens to enable patients to achieve their visual goals.
Brain-controlled prosthetics could be widely available in three years’ time. Iceland-based orthopaedics company Ossur made the announcement after publicly demonstrating the working technology, currently being trialled by two volunteers.
However, given WIRED's May issue featured the story of a tetraplegic woman who could control a robotic arm using only her thoughts -- thanks to a series of electrodes linked to her brain -- you'd be forgiven for thinking brain-controlled prostheses were already par for the course.
And yes, the tech, known as myoelectric prostheses, has been in development for years. It works by implanting tiny sensors into the muscle adjacent to the site of amputation, using salvaged nerves to send signals from the brain, via the sensor, to the prosthetic, where a receiver translates that message into movement. Ordinary electronic prostheses, including Ossur's original Proprio Foot, use algorithms to process data from sensors to predict a wearer's next movement. The company, which made Oscar Pistorius' Flex-Foot Cheetah blades, only delivered the upgraded version to two patients 14 months ago.