
TV Backlight Compensation - zdw
http://www.lofibucket.com/articles/tv_backlight_compensation.html
======
th0ma5
Not nearly as cool, but about 12 years ago I did a poor man's idea of room
correction by recording my own tone sweeps and then creating an EQ profile by
hand for my ALSA soundcard via a LADSPA plugin on my HTPC. I couldn't believe
the difference, especially how much quieter I could listen to the system and
still hear everything. Nowadays this is of course built into most receivers.
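
Roughly, the idea in sketch form (the frequencies and measured levels here are made up for illustration; the real setup used recorded sweeps and a LADSPA EQ):

```python
import numpy as np

# Hypothetical levels (dB) measured at a few test-tone frequencies,
# standing in for what the recorded sweeps would show at the
# listening position.
freqs_hz = np.array([60, 125, 250, 500, 1000, 2000, 4000, 8000])
measured_db = np.array([-6.0, -2.5, 1.0, 3.5, 0.0, -1.5, 2.0, -4.0])

# Aiming for a flat response, the per-band EQ gain is simply the
# negative of the measured deviation from the target level.
target_db = 0.0
eq_gains_db = target_db - measured_db

for f, g in zip(freqs_hz, eq_gains_db):
    print(f"{f:>5} Hz: {g:+.1f} dB")
```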

~~~
AustinG08
Any chance you have a write up on this?

~~~
th0ma5
Sorry for the late reply, but here is a write up:
[https://news.ycombinator.com/item?id=22290466](https://news.ycombinator.com/item?id=22290466)

~~~
AustinG08
Thanks! Great little read, thinking about trying out something similar.

------
sorenjan
First off, really cool article. I love fun hacks like this.

There are a few things I don't get. The first is why he's using the same
global scalar values for gain and offset for the entire image, instead of
pixel-wise values. The pattern isn't uniform, so why is the correction?

The second is why the optimization just randomizes new values instead of
using a better algorithm like gradient descent. This kind of stochastic search
seems really wasteful.

The third is why it needs to be an optimization problem at all. Why not just
look at the blob image? Each pixel is supposed to be white, but we see (R_max,
G_max, B_max), where both G_max and B_max are < R_max, making the pixel red. So
just remap the green and blue channel for each pixel to [0, R_max] instead.
Then each pixel will have the same gray value of (R_max, R_max, R_max) when
displaying white. This should be really straightforward in a shader.
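
A minimal sketch of that per-pixel remap, with made-up blob values and assuming each channel's light output scales linearly with its drive value. (One tweak: it scales channels down to each pixel's weakest channel rather than up to R_max, since gains above 1.0 would clip.)

```python
import numpy as np

# Hypothetical camera capture of the screen showing full white
# (the "blob" image), shape (H, W, 3), values in [0, 1].
blob = np.array([[[0.90, 0.60, 0.75],
                  [0.80, 0.70, 0.65]]])

# Per-pixel, per-channel gain that maps every channel onto the range
# of that pixel's weakest channel, so no gain exceeds 1.0.
weakest = blob.min(axis=-1, keepdims=True)
gain = weakest / blob

def shade(frame):
    """Shader-style per-pixel correction applied to the video frame."""
    return frame * gain

# Driving the corrected screen with full white: assuming a linear
# display response, the camera now sees a neutral gray at every pixel.
seen = shade(np.ones_like(blob)) * blob
```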

~~~
dezgeg
The correction is not uniform, he passes the blob image to the shader as a
texture.

~~~
sorenjan
Yes, ok, it's not fully uniform, but gain and offset are the same for the whole
image.

------
anonsivalley652
Nice. Colorspace profile management across the distribution pipeline is still
a disaster. It would be nice if:

0\. The full stack, from the end-user display down through distribution,
editing, and the "film" camera, applies faithful color management.

1\. Every scene contained a hidden color calibration frame, recorded on-set
with a proper calibration sheet, to which the end-user's display could apply
dynamic color correction, producing color calibrated to the average lighting
conditions that existed on-set.

2\. The end-user can choose whether to apply just 0. or also 1.

There is another problem: in filming or editing, by applying color filter
"looks" there is currently little in the way of assurances that downstream
will faithfully reproduce what was intended. Colorspace profiles were and are
a great advance, but they need to be measured, calibrated and validated all
the way down so that they're useful.

PS: This article also reminded me of an attack that could reconstruct almost
an entire image from the diffuse reflections of old-style scanned CRTs/TVs
(that ghostly blue glow of TV watching visible from outdoors), and of how
imperfect reflections of CRT computer monitors could reveal their contents.

~~~
blevin
Did the diffuse reflection reconstruction rely on high speed photography? Or a
special geometric setup of the reflector? Trying to think of how this could
work.

~~~
pdkl95
It sounds like "Dual Photography".

[https://dl.acm.org/doi/10.1145/1186822.1073257](https://dl.acm.org/doi/10.1145/1186822.1073257)

"We present a novel photographic technique called dual photography, which
exploits Helmholtz reciprocity to interchange the lights and cameras in a
scene. With a video projector providing structured illumination, reciprocity
permits us to generate pictures from the viewpoint of the projector, even
though no camera was present at that location."

I recommend watching the video from the paper; it explains and demonstrates
the technique well. It's very impressive work.

[https://www.youtube.com/watch?v=p5_tpq5ejFQ](https://www.youtube.com/watch?v=p5_tpq5ejFQ)

~~~
VohuMana
Ok, that trick with the playing card was cool. I'm impressed by the math
behind this concept, and all I could think about while watching it was every
cheesy crime show calling out "enhance".

------
robomartin
BTW, I doubt very much that what you are seeing here is backlight uniformity
problems. This is actually an issue with the LCD element, the liquid crystal
itself, and, possibly, to some extent, the optics. We used to manufacture
advanced displays where uniformity and color accuracy were super important
(think being able to identify healthy vs. cancerous tissue during endoscopic
surgery).

As incredible as LCD manufacturing is, the eye is amazing at being able to
pick-out differences under the right conditions. All kinds of corrective work
had to be applied to the displays in order to achieve uniform, repeatable and
reliable color and image rendering performance.

------
mrtnmcc
Love that idea of just photoshopping for the ideal ground truth.

Instead of white target, could even map a desired digital video file into the
ideal target. For more general distortions you could obviate the 'blob' image
and instead just optimize a gain and offset for every pixel independently.
Seems ambitiously high dimensional but I was able to get this kind of thing to
work effectively using SPSA (
[https://www.jhuapl.edu/SPSA/](https://www.jhuapl.edu/SPSA/) ). Also basically
the algorithm of evolutionary strategies in AI:
[https://openai.com/blog/evolution-strategies/](https://openai.com/blog/evolution-strategies/)
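
A minimal SPSA sketch on a toy stand-in for the per-pixel loss (the dimensionality, step sizes, and loss are all made up; the point is that each iteration needs only two loss evaluations no matter how many parameters there are):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for "photograph the screen and compare to the target":
# squared distance of 50 per-pixel parameters from an unknown optimum.
optimum = rng.uniform(-1.0, 1.0, size=50)

def loss(theta):
    return float(np.sum((theta - optimum) ** 2))

# SPSA loop: perturb ALL parameters at once with a random +/-1 vector
# and estimate the gradient from just two loss evaluations per step.
theta = np.zeros_like(optimum)
a, c = 0.01, 0.1  # step size and perturbation size (hand-picked)
for _ in range(1000):
    delta = rng.choice([-1.0, 1.0], size=theta.shape)
    diff = loss(theta + c * delta) - loss(theta - c * delta)
    # Since delta_i is +/-1, dividing by delta_i equals multiplying.
    theta -= a * diff / (2.0 * c) * delta
```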

------
taneq
I've always assumed that LED backlight TVs have this kind of correction built
in and must be calibrated at the factory with a similar jig (calibrated camera
looks at screen, screen shows sweeps through black->(red|green|blue), inverse
mapping is recorded to correct colour and intensity).
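
The inverse-mapping step might look something like this (the measured response curve here is made up, standing in for what the calibrated camera would record during the sweep):

```python
import numpy as np

# Hypothetical display response from a factory sweep: drive levels
# 0..1 vs. the luminance the calibrated camera saw at each level.
drive = np.linspace(0.0, 1.0, 33)
measured = 0.05 + 0.95 * drive ** 2.4

def inverse_lut(target_luminance):
    """Drive level that produces the requested luminance, found by
    interpolating the measured sweep the other way around."""
    return np.interp(target_luminance, measured, drive)

# Round trip: asking for the luminance measured at drive 0.5 should
# give back (approximately) drive 0.5.
d = inverse_lut(0.05 + 0.95 * 0.5 ** 2.4)
```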

Nice hack to do it with a webcam. :)

~~~
nrp
OLED phones, TVs, and other OLED products do this kind of uniformity
correction with factory calibration. On some, the calibration data is stored
on Flash or EEPROM on the panel itself and applied in the Display Driver IC.
On others it's stored and applied at system level by a Display Controller or
SoC. I'm aware of some LCD-based products that do this as well, but I'm not
sure if TVs are among them. It's a great way to improve uniformity though, and
almost always cheaper than tightening manufacturing tolerances (i.e.
decreasing yield).

~~~
daniel_reetz
Exactly right. Apple does this for every LCD. In the LED display industry
calibration can be per LED, per PCB, per module, or per display. Some of them
store the calibration data on each individual LED PCB, and some store it in
the send box (Brompton).

------
IgorPartola
This reminds me that you should frequently calibrate your display’s color
profile if you ever work with photos or video. There are commercial products
that help you do this essentially by the process that the OP uses. They don’t
do a whole screen, just a section of it. And it seems that the new MacBooks
have some kind of calibration thing that responds to ambient light. But it
would be nice to be able to do this as a standard thing on all displays.

~~~
jiggawatts
This never made sense to me in the LCD era, and I suspect it's a rule-of-thumb
that has been inherited from older CRT technology.

CRTs use distinct phosphors for each color, which slowly fade over time, and
at different rates.

LCDs typically use color filters, which in most cases tend not to fade. In
fact, most LCDs are so consistent that you can use the calibration done by
someone else with the same panel model and use it yourself just fine. (The
rare exception to this would be LCDs exposed to _direct_ sunlight. Strong UV
light can make just about anything fade.)

OLEDs fade, much like CRTs, but are very rarely used as PC monitors.

This is why it annoys me that LCD panels don't simply report their ICC profile
to the operating system. It would be 99% accurate 99% of the time, a vast
improvement over the status quo, where color reproduction outside of premium
televisions is basically random.

~~~
jrockway
My experience here is that the OSes are the problem. They provide an API that
looks like "please display the following (r, g, b) tuple". Unfortunately, that
isn't enough information to accurately display a color. To turn an (r, g, b)
tuple into a color, you have to assign it a colorspace, and that's where
everything is broken.

For example, when I last punished myself by using a better-than-sRGB monitor,
I learned that browsers will properly color correct images that have a
colorspace tag, but they do not do it to CSS. So if you have an image with
color #abcdef and then you set a CSS color to #abcdef, they will be different
colors!

Applications that want to properly display color have to hack around things.
They need to ask the OS for the display's colorspace, then they have to figure
out the image's color space, then compute a transformation that will yield an
(r, g, b) tuple that when transformed by the OS (using the monitor's profile)
will display the right color. This is horrifying but does work; so at least
things like Photoshop can typically show you the right color.
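
A sketch of that dance for a single pixel, assuming the image is tagged sRGB and the display's native space is approximately Display P3 (the matrices are the commonly published approximate D65 values, and Display P3 shares the sRGB transfer curve):

```python
import numpy as np

# Approximate RGB-to-XYZ matrices (D65 white point).
SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                        [0.2126, 0.7152, 0.0722],
                        [0.0193, 0.1192, 0.9505]])
P3_TO_XYZ = np.array([[0.4866, 0.2657, 0.1982],
                      [0.2290, 0.6917, 0.0793],
                      [0.0000, 0.0451, 1.0439]])

def srgb_decode(c):
    """sRGB transfer function -> linear light."""
    c = np.asarray(c, dtype=float)
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def srgb_encode(c):
    return np.where(c <= 0.0031308, 12.92 * c, 1.055 * c ** (1 / 2.4) - 0.055)

def srgb_to_display(rgb):
    """Re-express an sRGB color in the display's native (P3-like) space."""
    xyz = SRGB_TO_XYZ @ srgb_decode(rgb)
    native = np.linalg.solve(P3_TO_XYZ, xyz)
    return srgb_encode(np.clip(native, 0.0, 1.0))

# The CSS-vs-image example color from above.
native = srgb_to_display(np.array([0xAB, 0xCD, 0xEF]) / 255.0)
```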

It would be nice if we could use more than 24 bits for color and just store
everything as a CIE XYZ color. People have known that that is the right
solution for decades but nothing has happened, so I'm not holding my breath.
Realistically, I think we have to agree on a new set of primaries and gamma,
and start assuming that a (r, g, b) tuple is in that colorspace. I guess this
is what DCI-P3 is. It is the same problem all over again, but might at least
get people better colors soon.

~~~
steerablesafe
Throw "rendering intent" into the mix then it all gets pretty messy.

------
juanuys
Nice. It was a bit simpler in the CRT days. I remember my first after-school
job when I was a wee teen, fixing TVs and degaussing them.

[https://en.m.wikipedia.org/wiki/Degaussing](https://en.m.wikipedia.org/wiki/Degaussing)

~~~
lostlogin
I miss degaussing screens. Ones with the button for that were great, and it
was very satisfying to do. I don’t miss the low resolution, the heat and the
feeling of sunburn after looking into one for too long.

~~~
nottorp
I don't know, I'm beginning to think modern LCDs are worse for the eyes
because in the quest for more contrast they have become too bright to be
healthy.

Maybe OLED will fix that. Whenever affordable OLED monitors show up...

~~~
lostlogin
I agree that screens are too bright. A part of my job is calibrating screens
and the brightness they can achieve is alarming. Regulation in radiology
requires a minimum luminance and the spec is vastly in excess of what I would
use and far above what anyone sets when given the chance.

Wickedly high contrast ratios seem to be encouraged too and keeping the ratio
down gets frowned upon. Some of the screens come out of the box at close to
1000:1, which is too high, but that comes with a high-luminance screen. The first
thing I usually do with a screen I use is dim it, and I long for iPad and
iPhone screens that are dimmer. The lowest settings are too high.

~~~
nottorp
High contrast is good; the problem is that on LCD displays you can only
achieve it with high overall brightness, because black isn't really black.

On CRTs you could set a black background, turn your brightness way down and
happily code with enough contrast. Even with black on white text you could
turn the brightness down somewhat because the black was... black.

Hopefully when OLED becomes usable for something between wall sized TVs and
watches, this problem will be gone because there is no backlight so black is
again black.

------
api
Bonus points if you put this on an FPGA that decodes, modifies, and re-encodes
HDMI.

------
rasz
Do you want MCAS?! Because that's how you get MCAS!

EE in me died a little reading this, while programmer loved the hack :).
Personally I would just replace the backlight; discoloration is most likely a
sign of impending failure anyway. Btw, we already do something similar to a
lot of electronics by design. For example, camera sensors have defect lists;
pristine ones are extremely expensive (think high-end microscopes and
satellites).

------
willis936
Oof this is a very interesting and underexplored topic. I haven’t heard of
calibrating display uniformity before. I hope something like this becomes
integrated into OS-level calibration at some point.

------
prashnts
Curious about the use of a three-picture average and then a Gaussian blur to
reduce the moiré effect, as the author did for the third picture. It clearly
worked, but I'm not sure how.

~~~
this_was_posted
I suspect he moved the camera slightly between taking pictures; this would
create differently shifted moiré patterns, which reduces the amplitude of the
distortion when you average them.
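
A toy demonstration of why averaging helps: ripples with evenly spread phases largely cancel (the ripple model here is entirely made up):

```python
import numpy as np

x = np.arange(256)
flat = np.full(256, 0.5)  # the "true" uniform image row

def capture(phase):
    # Made-up model: each photo picks up a periodic moire-like ripple
    # whose phase depends on the (slightly moved) camera position.
    return flat + 0.2 * np.sin(2 * np.pi * x / 7.0 + phase)

single = capture(0.0)
# Three shots with evenly spread ripple phases, then averaged.
averaged = np.mean(
    [capture(p) for p in (0.0, 2 * np.pi / 3, 4 * np.pi / 3)], axis=0)

ripple_single = np.ptp(single - flat)
ripple_averaged = np.ptp(averaged - flat)
```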

~~~
prashnts
Makes sense, thanks! I just tried doing it in Photoshop and observed that to
remove the moiré grid with a single picture I'd need a much higher blur
radius, which is not ideal for a ground truth image. (Note that it's a
far-reaching guess.)

I'm gonna add it to my fictional image analysis toolkit for the next time I
need it.

------
londons_explore
Looks like you made this correction in sRGB space rather than in physical
(un-gamma-corrected) colorspace, which is unlikely to be correct...

~~~
steerablesafe
Per-channel gain settings remain correct as long as the full workflow was in
sRGB and you approximate sRGB with c^2.2 (where c is the channel value). Most
other arithmetic is horribly incorrect in sRGB, I agree.
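
A quick check of that identity, since (g * c)^2.2 = g^2.2 * c^2.2 holds exactly for a pure power law:

```python
# A gain applied to the gamma-encoded value is still a plain gain in
# linear light, just raised to the power gamma.
gamma = 2.2

def to_linear(c):
    # Approximate sRGB decode as a pure power law, per the comment.
    return c ** gamma

gain = 0.8  # hypothetical per-channel correction gain
for c in (0.1, 0.35, 0.7, 1.0):
    before = to_linear(gain * c)          # gain applied in encoded space
    after = gain ** gamma * to_linear(c)  # equivalent linear-light gain
    assert abs(before - after) < 1e-12
```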

------
jeffrallen
In LED jumbotrons sometimes you see a bad pixel or group. I always thought
they should come with feedback cameras that could detect and correct for the
bad pixels the same way the author is doing.

Free idea, go do it!

------
ComputerGuru
This is awesome (both the idea and the writeup) but I missed the bit about
object recognition? It must have been necessary to detect and extract the
rectangle representing the TV screen from the captured webcam image (even if
the framing/positioning of the webcam were perfect, most webcams are 4:3
aspect ratio), but was a skew transform performed to correct for any
perspective issues? Assuming such a transform were written, wouldn’t it have
been trivial to throw up a white sheet of paper next to the tv and have that
represent your neutral as a white point reference?

~~~
montecarl
He showed how he did this. He said he took an image with the camera, and
edited the tv to be perfectly white and used that as the target: "After some
fruitless attempts at simple image statistics, I realized it's possible to
edit a camera picture by hand and use that as a ground truth."

~~~
michaelmior
This is true. So the answer is that there was no object recognition since it
wasn't necessary.

------
PeterisP
This is really cool, nifty hack - kudos!

------
1970-01-01
What make and model TV?

~~~
taneq
Make: Panasonic

Model: Clunker :P

------
agumonkey
Superb

------
cosmotic
This is quite the technical accomplishment but I can't stop feeling like
buying a new TV would have cost less than the value of the time spent on this
impressive solution.

~~~
sudhirj
This would make sense if this were an office and the TV were tied to
productivity and lost revenue. For a curiosity project, the time spent
experimenting, learning, and documenting it to teach us is priceless.

