People can simultaneously identify the pitch and timing of a sound signal much more precisely than conventional linear analysis allows. That is the conclusion of a study of human subjects done by physicists in the US. The findings are not just of theoretical interest: they could lead to better software for speech recognition and sonar.
Human hearing is remarkably good at isolating sounds, allowing us to pick out individual voices in a crowded room, for example. However, the neural algorithms that our brains use to analyse sound are still not properly understood. Most researchers had assumed that the brain decomposes the signals and treats them as the sum of their parts – a process that can be likened to Fourier analysis, which decomposes an arbitrary waveform into pure sine waves.
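The Fourier analysis mentioned above can be illustrated with a minimal sketch (not taken from the study itself): a waveform built from two pure tones is passed through a discrete Fourier transform, which recovers the frequency and amplitude of each sine-wave component. The specific frequencies and amplitudes here are arbitrary choices for illustration.

```python
import numpy as np

fs = 1000                       # sampling rate in Hz (illustrative choice)
t = np.arange(0, 1, 1 / fs)     # one second of samples

# Build a signal as the sum of two pure tones: 50 Hz and 120 Hz
signal = 1.0 * np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

# The discrete Fourier transform decomposes the waveform into sine waves,
# recovering each component's frequency and amplitude
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
amplitudes = 2 * np.abs(spectrum) / len(signal)

peaks = freqs[amplitudes > 0.25]
print(peaks)   # the two constituent tones: [ 50. 120.]
```

The trade-off inherent in this kind of linear analysis is that sharpening an estimate of a component's frequency requires analysing a longer stretch of signal, which blurs the estimate of when the sound occurred; the study's subjects beat that trade-off.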