Andrea Censi, a research scientist in MIT’s Laboratory for Information and Decision Systems, has developed a new type of camera sensor called an event-based (or neuromorphic) sensor, which can take measurements a million times a second.
An autonomous vehicle using a standard camera to monitor its surroundings might take about a fifth of a second to update its location — not fast enough to handle the unexpected. With an event-based sensor, the vehicle could update its location every thousandth of a second or so, allowing it to perform much more nimble maneuvers.
“In a regular camera, you have an array of sensors, and then there is a clock,” Censi explains. “If you have a 30-frames-per-second camera, every 33 milliseconds the clock freezes all the values, and then the values are read in order.”
With an event-based sensor, by contrast, "each pixel acts as an independent sensor," Censi says. "When a change in luminance — in either the plus or minus direction — is larger than a threshold, the pixel says, 'I see something interesting' and communicates this information as an event. And then it waits until it sees another change."
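The per-pixel thresholding Censi describes can be illustrated with a toy simulation. The sketch below is an assumption-laden model, not the actual sensor hardware or the authors' algorithm: real event pixels fire asynchronously in analog circuitry, whereas here two frames are compared in software, and the log-luminance model and threshold value are illustrative choices.

```python
import numpy as np

def luminance_to_events(prev, curr, threshold=0.15):
    """Toy model of an event-based sensor: emit an event for every
    pixel whose log-luminance change exceeds the threshold, in
    either the plus or minus direction. (Illustrative only; real
    pixels fire asynchronously in hardware, not frame-to-frame.)"""
    diff = np.log(curr + 1e-6) - np.log(prev + 1e-6)
    ys, xs = np.nonzero(np.abs(diff) > threshold)
    # each event: (x, y, polarity) -- +1 brighter, -1 darker
    return [(x, y, 1 if diff[y, x] > 0 else -1) for y, x in zip(ys, xs)]

# toy 4x4 frames: one pixel brightens, one darkens, rest unchanged
prev = np.full((4, 4), 0.5)
curr = prev.copy()
curr[1, 2] = 0.9   # brightens -> positive event
curr[3, 0] = 0.2   # darkens  -> negative event
events = luminance_to_events(prev, curr)
print(events)  # [(2, 1, 1), (0, 3, -1)]
```

Only the two changed pixels report anything; the static background produces no data at all, which is why an event stream can be both sparse and extremely fast.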
Reference: Andrea Censi and Davide Scaramuzza, "Low-latency event-based visual odometry," Technical Report 2912, Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, 2013; to appear in ICRA 2014 (open access).
Via Dr. Stefan Gruenwald