The human visual system consists of a hierarchically organized, highly interconnected network of several dozen distinct areas. Each area can be viewed as a computational module that represents different aspects of the visual scene. Some areas process simple structural features of a scene, such as edge orientation, local motion and texture. Others process complex semantic features, such as faces, animals and places. Much recent research has focused on discovering how each of these areas represents the visual world, and on how these multiple representations are modulated by attention, learning and memory. Because the human visual system is exquisitely adapted to process natural images and movies, we focus most of our effort on natural stimuli.
One way to think about visual processing is in terms of neural coding. Each visual area encodes certain information about a visual scene, and that information must be decoded by downstream areas. Both encoding and decoding can, in theory, be described by an appropriate computational model of the stimulus-response mapping function of each area. Therefore, our descriptions of visual function are posed in terms of quantitative computational encoding models. Conveniently, once an accurate encoding model has been developed, it is fairly straightforward to convert it into a decoding model that can be used to read out brain activity, in order to classify, identify or reconstruct mental events. In the popular press this is often called “brain reading”.
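To make the encoding-to-decoding conversion concrete, the sketch below shows one standard way it can work: fit a regularized linear encoding model that predicts each voxel's response from stimulus features, then decode by identification, matching a measured activity pattern to the candidate stimulus whose predicted pattern it most resembles. This is a simplified illustration on simulated data, not our actual analysis pipeline; the linear feature space, ridge penalty, and all dimensions are assumptions made for the example.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical dimensions: training/test stimuli, stimulus features, voxels.
n_train, n_test, n_feat, n_vox = 1000, 100, 50, 200

# Simulated stimulus features (e.g., outputs of a structural feature bank)
# and noisy voxel responses generated from an unknown linear mapping.
X_train = rng.standard_normal((n_train, n_feat))
X_test = rng.standard_normal((n_test, n_feat))
W_true = rng.standard_normal((n_feat, n_vox))
Y_train = X_train @ W_true + 0.5 * rng.standard_normal((n_train, n_vox))
Y_test = X_test @ W_true + 0.5 * rng.standard_normal((n_test, n_vox))

# Encoding model: ridge regression from features to every voxel's response.
enc = Ridge(alpha=1.0).fit(X_train, Y_train)

# Decoding by identification: predict the response pattern for each candidate
# stimulus, then assign each measured pattern to its most correlated prediction.
Y_pred = enc.predict(X_test)                        # (n_test, n_vox)
zscore = lambda a: (a - a.mean(1, keepdims=True)) / a.std(1, keepdims=True)
corr = zscore(Y_test) @ zscore(Y_pred).T / n_vox    # pairwise pattern correlations
identified = corr.argmax(axis=1)
accuracy = (identified == np.arange(n_test)).mean()
print(f"identification accuracy: {accuracy:.2f} (chance = {1/n_test:.2f})")
```

The key design point is that the decoder contains no new parameters: the fitted encoding model does all the work, and decoding reduces to comparing measured brain activity against the model's predictions for a set of candidate stimuli.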