Hmm—I’d never heard this metaphor when discussing the quantity vs. quality of pixels on a sensor, but I like it. Here’s HTC’s Symon Whitehorn talking about their move from 8 to 4 megapixels:
This debunks the so-called “megapixel myth,” which says that more megapixels equals a better image. “The old analogy that the industry uses is called pixel rain, so you can imagine photons coming down as rain—with photon rain being collected in buckets with the buckets being the pixel,” says Whitehorn. “Now you could put a lot of little cups out and try to collect the same amount of rain and you wind up getting noise between the cups as opposed to it all falling into one big bucket.”
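The "pixel rain" idea is easy to sketch numerically: spread the same number of photons over a few big buckets or many small ones, and the small buckets come out proportionally noisier. This is a minimal simulation of that intuition, not anything from HTC; the photon count and bucket counts are made-up numbers for illustration.

```python
import random

random.seed(0)

MEAN_PHOTONS = 10_000  # photons falling on a fixed patch of sensor (assumed)

def collect(n_buckets: int) -> list[int]:
    """Split the same photon 'rain' across n_buckets equal pixels.

    Each photon lands in a random bucket, so per-bucket counts follow
    binomial (shot-noise-like) statistics.
    """
    buckets = [0] * n_buckets
    for _ in range(MEAN_PHOTONS):
        buckets[random.randrange(n_buckets)] += 1
    return buckets

def relative_noise(buckets: list[int]) -> float:
    """Standard deviation divided by mean of per-bucket counts."""
    mean = sum(buckets) / len(buckets)
    var = sum((b - mean) ** 2 for b in buckets) / len(buckets)
    return (var ** 0.5) / mean

big = collect(4)     # a few large pixels over the patch
small = collect(64)  # many small pixels over the same patch

print(relative_noise(big), relative_noise(small))
```

Because shot noise scales as the square root of the count, quartering the bucket size roughly doubles the relative noise per bucket, which is exactly the "noise between the cups" Whitehorn describes.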
Of course, now I kind of want to see some cheeky artist take this idea to its absurd extreme, producing a sensor that’s just 1 pixel in resolution—but oh man is that pixel’s quality ever high.
Nikon have already announced this sensor – in the Nikon D4.1
See:
http://www.earthboundlight.com/phototips/new-nikon-one-pixel-d41.html
🙂
I’ve used this analogy for several years while giving lectures on this topic. Students seem to like it and get it. It’s also good when explaining blooming.
Great. I tried imagining this in terms of the Foveon sensor, and now I’ve got a picture of a three-stage bong under a waterfall in my head.
There’s a deep flaw in that analogy, though: bigger cups catching the pixel rain may also have larger gaps between them.
Imagine filling a box with boulders. Once it is ‘full’, you can add gravel; once it is full again, you can add sand. And finally, once it is completely and utterly full, you can add water.
This also reminds me of some duff marketing among certain mountain bike manufacturers some years back, with bigger bearings being claimed to be better than smaller ones. Except they didn’t realise the contact patch on each bearing was basically the same as on the smaller bearings, so you had more load [and therefore friction] per bearing. Not good.
I’m not a “more megapixels automatically equals better images” believer, BTW, as that claim can be misleading in other ways.
Most analogies will eventually fail if you look at them with enough detail – they’re meant to communicate more abstract ideas in familiar ways.
Gaps between pixel sites (they’re usually square, not round) can indeed be a problem, but the spacing is typically uniform and based on charge isolation between the sites. If the sites are too close, the sensor is more prone to bloom, or spill from one site to another.
Larger sites are intended to allow better discrimination of light levels, but at the cost of detail resolution (Rayleigh criterion). Smaller sites tend to have better spatial resolution, but can’t distinguish as many levels of light.
So in the analogy above (one I’ve also used for years), we see that with a larger bucket, you can define more easily-measurable levels, or have more marks on the side that are legible. Larger buckets take longer to fill up, but ultimately allow you to define more precise changes. This is especially important considering charge is discretely quantized – like saying every drop that goes in the bucket is exactly the same size. Your boulder/rock/sand/water analogy completely misses in this respect.
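The "more legible marks on a bigger bucket" point can be put in rough numbers. In a shot-noise-limited pixel, the noise near full well is about the square root of the well capacity in electrons, so the number of genuinely distinguishable levels grows roughly as the square root of the well size. A quick sketch, with made-up full-well figures:

```python
import math

def usable_levels(full_well_electrons: int) -> int:
    """Rough count of distinguishable levels in a shot-noise-limited pixel.

    Near full well the shot noise is ~sqrt(full_well) electrons, so the
    level count is about full_well / sqrt(full_well) = sqrt(full_well).
    """
    return int(full_well_electrons / math.sqrt(full_well_electrons))

small_site = usable_levels(10_000)  # hypothetical small pixel well
large_site = usable_levels(40_000)  # 4x the well capacity

print(small_site, large_site)  # 4x the well gives only 2x the levels
```

So quadrupling the bucket only doubles the number of trustworthy marks on its side, which is why the gains from larger sites are real but not linear.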
Size also relates to quantum efficiency, or the ability to convert photons to charge. Again, size of the pixel matters if you hold the materials the same. As materials and processing get better, you can again shrink pixel size.
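Both effects, area and quantum efficiency, fold into one shot-noise-limited SNR estimate: signal is flux times area times QE, and noise is the square root of the signal. A minimal sketch with assumed numbers (the flux and QE values are illustrative, not real sensor specs):

```python
import math

FLUX = 1_000  # photons per square micron over the exposure (assumed)

def shot_noise_snr(pitch_um: float, qe: float) -> float:
    """Shot-noise-limited SNR: signal = flux * area * QE, noise = sqrt(signal)."""
    signal = FLUX * pitch_um ** 2 * qe
    return signal / math.sqrt(signal)  # equals sqrt(signal)

big_pixel = shot_noise_snr(2.0, qe=0.6)    # larger site
small_pixel = shot_noise_snr(1.0, qe=0.6)  # half the pitch, a quarter the area

print(big_pixel / small_pixel)  # → 2.0: halving the pitch halves the SNR
```

It also shows the point about materials: raising `qe` from 0.6 to 0.9 claws back part of what a smaller pitch loses, which is why process improvements let pixel sizes shrink without an equivalent noise penalty.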