In January, four MIT researchers showed off a replacement for one of the most important algorithms in computer science. Dina Katabi, Haitham Hassanieh, Piotr Indyk, and Eric Price have created a faster way to perform the Fourier transform on the sparse signals that dominate practical applications, a mathematical technique for processing streams of data that underlies the operation of things such as digital medical imaging, Wi-Fi routers, and 4G cellular networks.
The principle of the Fourier transform, which dates back to the 19th century, is that any signal, such as a sound recording, can be represented as the sum of a collection of sine and cosine waves with different frequencies and amplitudes. This collection of waves can then be manipulated with relative ease—for example, allowing a recording to be compressed or noise to be suppressed. In the mid-1960s, a computer-friendly algorithm called the fast Fourier transform (FFT) was developed. Anyone who's marveled at the tiny size of an MP3 file compared with the same recording in an uncompressed form has seen the power of the FFT at work.
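The decomposition described above can be sketched in a few lines of NumPy. This is an illustration of the general principle, not the researchers' algorithm; the 50 Hz and 120 Hz test signal, sample rate, and threshold are chosen here purely for demonstration:

```python
import numpy as np

# Sample a 1-second signal at 1 kHz: a 50 Hz sine plus a quieter 120 Hz sine.
fs = 1000
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

# The FFT rewrites the signal as a sum of sinusoids; here, almost every
# frequency bin is near zero, so the signal's spectrum is sparse.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
amplitudes = np.abs(spectrum) * 2 / len(signal)

# Only the two component frequencies carry significant energy.
peaks = freqs[amplitudes > 0.1]
print(peaks)  # [ 50. 120.]
```

That sparsity is exactly what compression schemes like MP3 exploit: instead of storing every sample, they store the handful of frequency components that matter and discard the rest.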