> When a digital camera records an image, a gamma curve is applied to it before display, which makes up for our bias against the darker portions, a bias the digital equipment does not have.
Gamma correction makes up for a bias against darker portions in the display, not in our eyes. It's a holdover from the CRT days, when the change in brightness between pixel values of, say, 10 and 11 was far smaller than the change between 250 and 251. Human eyes have excellent low-light discernment, which is why 'black' doesn't really look black and why you can make out blocky shapes during dark scenes on some DVDs.
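To put rough numbers on that, here's a quick sketch (assuming a plain 2.2 power law rather than the exact sRGB curve, so treat the figures as ballpark):

```python
# How much the emitted linear light changes for a one-step change in an
# 8-bit code value, near black versus near white, on a gamma-2.2 display.

GAMMA = 2.2  # assumed nominal display gamma; real standards differ slightly

def to_linear(code_value: int) -> float:
    """Linear light (0.0 to 1.0) a display emits for an 8-bit code value."""
    return (code_value / 255.0) ** GAMMA

for low, high in [(10, 11), (250, 251)]:
    step = to_linear(high) - to_linear(low)
    print(f"{low} -> {high}: linear-light step = {step:.6f}")

# Prints roughly 0.000188 for 10 -> 11 and 0.008445 for 250 -> 251: near
# black, one code-value step moves the emitted light about 45x less than it
# does near white, which is the CRT behaviour gamma encoding was built around.
```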
Compressed video lacks information in the blacks, and that is why we see blocks. The blocks are not there before compression, so it’s not simply a matter of detecting them. While we are good at seeing objects in blacks, your explanation alone doesn’t account for why compression algorithms choose to remove so much of that data. Maybe we are saying the same thing. It’s hard to tell.
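Just to illustrate what I mean by blocks, here’s a toy sketch (not how any real codec actually works; it just collapses each 8-pixel block of a faint dark gradient to its average, which is loosely what heavy quantization leaves behind):

```python
# A faint dark ramp, 8 rows of 32 pixels, code values creeping from 4 to 11.
BLOCK = 8
WIDTH, HEIGHT = 32, 8
image = [[4 + (x * 8) // WIDTH for x in range(WIDTH)] for y in range(HEIGHT)]

def block_average(img, bx):
    """Average of the 8x8 block whose left edge is column bx."""
    vals = [img[y][x] for y in range(BLOCK) for x in range(bx, bx + BLOCK)]
    return round(sum(vals) / len(vals))

# "Compress": every pixel in a block collapses to that block's average.
compressed = [[block_average(image, (x // BLOCK) * BLOCK) for x in range(WIDTH)]
              for y in range(HEIGHT)]

print("original row:  ", image[0])
print("compressed row:", compressed[0])
# The smooth ramp becomes flat 8-pixel patches with hard edges between them;
# those edges are the blocky shapes you notice in dark scenes, where the eye
# is sensitive and there were few distinct code values to begin with.
```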
Your assertion about the origins, however, is at odds with what I have been taught, my understanding, and all the supporting info I am finding in a quick search. My understanding is that luminance values from a sensor have something of an empirical scale, but I’m sure that is not a complete explanation; I am speaking from my working knowledge. I can’t find anything supporting the idea that gamma is simply a fix for discrepancies between display types. Can you link to something or explain what I am missing?