With camera megapixel counts going through the roof, pixels have become incredibly small. Unfortunately, traditional sensor designs cover up the pixels with layers of wiring.
If you were asked to design a camera sensor, you’d naturally put the photoreceptors on top, closest to the light. Oddly enough, because of the way chips are fabricated, until recently most camera sensors have captured light at the bottom, underneath layers of interconnections. The recent introduction of back-illuminated (BI) sensor technology (also referred to as backside-illuminated or BSI) has changed all that. It is now possible to build sensors “the right way round,” with the photoreceptive layer facing the light. Back illumination has made headlines for enabling better low-light performance, but it’s worth diving into the technology, as it is going to be a lot more important than that.
Silicon is both the substrate on which chips are built and the material that performs the magic of turning photon energy into electrical energy that can be used to create images. The simplest solution is therefore to create the photosensitive areas in the substrate silicon and stack the electronics on top — leaving openings in the wiring over each photosite (pixel) to allow light to pass through. As camera resolutions have increased, pixel sizes have decreased, especially in smartphones with their tiny sensors. The result is that more and more of the surface area of the sensor is covered by wiring, so less and less light reaches the photosites. Hence the natural pressure to find a way to move the photosensitive region to the top of the chip, where it can gather more light.
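To get a feel for just how small these pixels are, here is a rough back-of-the-envelope sketch. The sensor dimensions below are an assumption for illustration (a typical 1/2.3-inch smartphone sensor, roughly 6.17 × 4.55 mm of active area); the point is simply that pixel pitch shrinks as the square root of the pixel count.

```python
import math

def pixel_pitch_um(sensor_w_mm, sensor_h_mm, megapixels):
    """Approximate pixel pitch in micrometers, assuming square pixels
    tiling the full active area of the sensor."""
    area_um2 = (sensor_w_mm * 1000) * (sensor_h_mm * 1000)
    return math.sqrt(area_um2 / (megapixels * 1e6))

# Assumed 1/2.3" smartphone sensor: ~6.17 mm x 4.55 mm active area.
for mp in (8, 12, 48):
    print(f"{mp} MP -> ~{pixel_pitch_um(6.17, 4.55, mp):.2f} um pitch")
```

At 48 megapixels the pitch drops below one micrometer, which is why every micron of wiring stacked over the photosite starts to matter.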
Curiously, the human eye and most animal eyes are also built with the photosensitive pigments on the side furthest from the light streaming through the eyeball. It isn’t known exactly why eyes evolved that way, but the arrangement definitely makes it easier to provide circulation to the energy-hungry rods and cones, and allows cellular debris to be whisked away without floating around inside the eyeball. Creatures like cephalopods, which rely on their eyes in the dark waters of the deep ocean, do indeed have their photoreceptors close to the lenses of their eyes, maximizing the sheer amount of light captured.
If a sensor were only a layer of photosensitive silicon, it wouldn’t matter much which side was up. A pixel is a lot more than just the photodiode, however. It typically includes transistors and wiring for amplifying the charge, transferring it to the signal-processing portion of the chip, and resetting itself between frames. Those electronics are placed on top of the silicon layer, partially obscuring it from the light and giving a typical pixel a well-like appearance.
As you’d expect, putting the photodiode at the bottom of a well reduces the amount of light that reaches it: some light bounces off the wiring above, and some arrives at too steep an angle to make it to the bottom of the well. Microlenses are used to reduce this problem (the human eye uses waveguides known as Müller cells), but a meaningful amount of light is still lost before it reaches the photodiode. Typical front-illuminated sensor fill factors — the portion of light successfully captured — range from 30% to 80%. By contrast, a back-illuminated sensor can have a fill factor of nearly 100%.
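Those fill-factor numbers translate directly into sensitivity. A toy calculation, assuming back illumination reaches roughly 95% fill factor (a stand-in figure, not a measured spec), shows the relative light gain over front-illuminated pixels at the low and high ends of the quoted range:

```python
def relative_sensitivity(fsi_fill_factor, bsi_fill_factor=0.95):
    """How much more light a back-illuminated pixel captures than a
    front-illuminated pixel with the given fill factor, all else equal."""
    return bsi_fill_factor / fsi_fill_factor

# Fill factors from the 30%-80% range quoted above.
for ff in (0.30, 0.50, 0.80):
    gain = relative_sensitivity(ff)
    print(f"FSI fill factor {ff:.0%}: BSI captures ~{gain:.1f}x the light")
```

For the tiniest, most wiring-choked pixels at the 30% end, that is roughly a threefold gain in captured light, which is why the low-light headlines came first.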