Scientists at the Massachusetts Institute of Technology are developing an audio reading device to be worn on the index finger of people whose vision is impaired, giving them affordable and immediate access to printed words.
The so-called FingerReader, a prototype produced by a 3-D printer, fits like a ring on the user’s finger and is equipped with a small camera that scans text. A synthesized voice reads words aloud, quickly translating books, restaurant menus and other needed materials for daily living, especially away from home or office.
Reading is as easy as pointing the finger at text. Special software tracks the finger movement, identifies words and processes the information. The device has vibration motors that alert readers when they stray from the script, said Roy Shilkrot, who is developing the device at the MIT Media Lab.
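The FingerReader’s tracking software has not been published, but the drift alert Shilkrot describes can be sketched in a few lines: compare the fingertip’s vertical position against the baseline of the text line being read, and fire a vibration motor when the finger strays beyond a tolerance. The function names and pixel values below are illustrative assumptions, not the actual MIT implementation.

```python
# Illustrative sketch only -- the real FingerReader software is not public.
# Compare the fingertip's vertical pixel position with the baseline of the
# text line; return which vibration motor (if any) should fire.

def drift_alert(line_baseline_y, fingertip_y, tolerance_px=12):
    """Return 'up', 'down', or None depending on where the finger strayed."""
    offset = fingertip_y - line_baseline_y
    if offset < -tolerance_px:
        return "up"      # finger drifted above the line: buzz the top motor
    if offset > tolerance_px:
        return "down"    # finger drifted below the line: buzz the bottom motor
    return None          # still on the line: no alert

# Simulated fingertip positions while scanning one line of text
positions = [100, 102, 105, 118, 99, 84]
alerts = [drift_alert(100, y) for y in positions]
# alerts -> [None, None, None, 'down', None, 'up']
```

In a real device the baseline would come from the camera’s text-detection step rather than a fixed constant.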
For Jerry Berrier, 62, who was born blind, the promise of the FingerReader is its portability and offer of real-time functionality at school, a doctor’s office and restaurants.
“When I go to the doctor’s office, there may be forms that I wanna read before I sign them,” Berrier said.
He said there are other optical character recognition devices on the market for those with vision impairments, but none that he knows of that will read in real time.
How can an ant lift objects many times heavier than its own body? Engineers at The Ohio State University combined computational modeling at the Ohio Supercomputer Center (OSC) and lab experiments to find out.
They focused on the ant’s neck — the single joint of soft tissue that bridges the stiff exoskeleton of the ant’s head and thorax. When an ant carries food or any other object, the neck joint supports the full weight of the load.
The researchers reverse-engineered the biomechanics of the neck by developing 3-D models of the ant’s internal and external anatomy from X-ray cross-section images (microCT) of ant specimens and loading the data into a modeling program (ScanIP+FE) that assembled the segments and converted them into a mesh frame model of more than 6.5 million elements.
The model then was loaded into a finite element analysis program (Abaqus), an application that creates accurate simulations of complex geometries and forces, and the data was processed on the powerful Oakley Cluster, an array of 8,300 processor cores (Intel Xeon) at the Ohio Supercomputer Center.
The experiments, published in the Journal of Biomechanics, revealed that the neck joints could withstand loads of about 5,000 times the ant’s body weight, and that the ant’s neck-joint structure produced the highest strength when its head was aligned straight, as opposed to turned to either side.
“Loads are lifted with the mouthparts, transferred through the neck joint to the thorax, and distributed over six legs and tarsi that anchor to the supporting surface,” explained Carlos Castro, assistant professor of mechanical and aerospace engineering at Ohio State.
“While previous research has explored attachment mechanisms of the tarsi (feet), little is known about the relation between the mechanical function and the structural design and material properties of the ant.”
“Our results accurately pinpoint the stress concentration that leads to neck failure and identify the soft-to-hard material interface at the neck-to-head transition as the location of failure,” said Castro.
“The design and structure of this interface is critical for the performance of the neck joint. The unique interface between hard and soft materials likely strengthens the adhesion and may be a key structural design feature that enables the large load capacity of the neck joint.”
The simulations confirmed the joint’s directional strength and, consistent with the experimental results, indicated that the critical point for failure of the neck joint is at the neck-to-head transition, where soft membrane meets the hard exoskeleton.
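The team’s 6.5-million-element Abaqus model is far beyond a sketch, but the core finite-element idea — assemble element stiffnesses, solve K·u = f for displacements, then recover deformations — fits in one dimension. The numbers below are invented for illustration: a compliant “membrane” element in series with two stiff elements stretches far more than its neighbors, loosely analogous to how deformation concentrates at the soft-to-hard neck interface.

```python
# Toy 1-D finite-element sketch (invented values, not the paper's model):
# three bar elements in series -- stiff "head", soft "neck membrane",
# stiff "thorax" -- fixed at one end, axial load at the other.

def solve(K, f):
    """Solve K u = f by Gaussian elimination (small dense systems only)."""
    n = len(f)
    A = [row[:] + [f[i]] for i, row in enumerate(K)]
    for col in range(n):                      # forward elimination
        for r in range(col + 1, n):
            m = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= m * A[col][c]
    u = [0.0] * n
    for r in range(n - 1, -1, -1):            # back substitution
        s = A[r][n] - sum(A[r][c] * u[c] for c in range(r + 1, n))
        u[r] = s / A[r][r]
    return u

# Element stiffness k = E*A/L for each segment (arbitrary consistent units)
stiff, soft = 100.0, 5.0
ks = [stiff, soft, stiff]                     # head | membrane | thorax

# Global stiffness matrix for free nodes 1..3 (node 0 is fixed)
K = [[ks[0] + ks[1], -ks[1],         0.0],
     [-ks[1],         ks[1] + ks[2], -ks[2]],
     [0.0,           -ks[2],          ks[2]]]
F = 1.0                                       # axial load at the free end
u = solve(K, [0.0, 0.0, F])

# Per-element stretch: the soft middle element deforms 20x more
elongations = [u[0], u[1] - u[0], u[2] - u[1]]
# elongations -> [0.01, 0.2, 0.01]
```

The same assemble-and-solve loop, scaled to millions of 3-D elements and run on hardware like the Oakley Cluster, is what packages such as Abaqus automate.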
University of Glasgow scientists studying the brain processes involved in sight have discovered that the visual cortex also uses information gleaned from the ears when viewing the world.
They suggest this auditory input enables the visual system to predict incoming information and could confer a survival advantage.
“Sounds create visual imagery, mental images, and automatic projections,” said Professor Lars Muckli, of the University of Glasgow’s Institute of Neuroscience and Psychology, who led the research. “For example, if you are in a street and you hear the sound of an approaching motorbike, you expect to see a motorbike coming around the corner.”
The study, published in the journal Current Biology (open access), involved conducting five different experiments using functional Magnetic Resonance Imaging (fMRI) to examine the activity in the early visual cortex in 10 volunteer subjects.
In one experiment they asked the blindfolded volunteers to listen to three different sounds: birdsong, traffic noise and a talking crowd. Using a special algorithm that can identify unique patterns in brain activity, the researchers were able to discriminate between the different sounds being processed in early visual cortex activity.
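The paper does not spell out its decoding algorithm here, but a common minimal stand-in for this kind of multivariate pattern analysis is a nearest-centroid classifier: average the activity pattern for each sound category across training trials, then label a new pattern by whichever centroid lies closest. The “voxel” values below are invented toy data, not fMRI measurements.

```python
# Hedged sketch of pattern decoding -- a nearest-centroid classifier on
# invented toy "voxel" activity patterns, standing in for the study's
# actual (unspecified here) multivariate analysis.

def centroid(patterns):
    """Average a list of equal-length activity patterns element-wise."""
    n = len(patterns)
    return [sum(p[i] for p in patterns) / n for i in range(len(patterns[0]))]

def nearest(pattern, centroids):
    """Label a pattern by its closest category centroid (squared distance)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(pattern, centroids[label]))

# Toy training patterns for the three sound categories in the experiment
train = {
    "birdsong": [[1.0, 0.2, 0.1], [0.9, 0.3, 0.0]],
    "traffic":  [[0.1, 1.0, 0.2], [0.2, 0.9, 0.1]],
    "crowd":    [[0.0, 0.2, 1.0], [0.1, 0.1, 0.9]],
}
centroids = {label: centroid(ps) for label, ps in train.items()}

prediction = nearest([0.95, 0.25, 0.05], centroids)
# prediction -> 'birdsong'
```

Real fMRI decoding adds cross-validation and many thousands of voxels, but the principle — telling categories apart by their spatial activity patterns — is the same.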
A second experiment revealed that even imagined images, in the absence of both sight and sound, evoked activity in the early visual cortex.
“This research enhances our basic understanding of how interconnected different regions of the brain are,” Muckli said. “The early visual cortex hasn’t previously been known to process auditory information, and while there is some anatomical evidence of interconnectedness in monkeys, our study is the first to clearly show a relationship in humans.
“This might provide insights into mental health conditions such as schizophrenia or autism and help us understand how sensory perceptions differ in these individuals.”