
All image-editing software gets scaling wrong - joshwa
http://www.4p8.com/eric.brasseur/gamma.html
======
dgreensp
It's not that being gamma-aware is "right" and most software does it "wrong"
for speed.

In many contexts, RGB values don't have gamma associated with them, just as
they don't have ColorSync profiles or whatnot, or if they have one, it was
just invented at some stage of processing. (Using the current screen gamma
would be pretty arbitrary for an image editor.) Trying to "convert" between
different gamma values during processing likely will do more harm than good if
your entire pipeline doesn't have this philosophy, or worse, if only one
program (e.g. Photoshop) or format (e.g. PNG) does.

I suppose PNG files win a prize for carrying gamma information, so that web
browsers and image viewers can apply a curve in software that throws off their
brightness and contrast relative to everything around them? And when Photoshop
decides the colors in my GIF file need to be "converted" to sRGB from god-
knows-what it thinks they were before?

If you want every last image operation to be done with perceptual accuracy,
convert your images to some fancy color space like CIELAB, after careful
consideration of their intended source gamma and color space, and do your
calculations there; then somehow be sure they are accurately converted for
display on the user's monitor.

~~~
barrkel
What you write doesn't change the fact that nobody in practice uses a monitor
with a 1.0 gamma. The (integer) average of 0 and 255 is 127, but the color
(127,127,127) isn't 50% grey. That's the problem, and it isn't dependent on
your image format, assuming it stores colors in RGB space.

Even assuming a gamma of 1.8 would mean less loss of detail when scaling down,
even when the eventual viewer is using, e.g., 2.2.

~~~
baddox
That's interesting. I've always heard of and seen "gamma curves," but I'm not
a graphics guy and never gave it much thought. Is (127,127,127) lighter or
darker than 50% grey? Do you have a link to an introduction on this topic and
why I should care?

~~~
barrkel
127,127,127 feels dark. The explanation is in the article:

<http://www.4p8.com/eric.brasseur/gamma.html#explanation>

Basically, if you have a test image which consists of alternating black and
white, it should have the same perceived brightness when you scale it down -
but when you scale it down, the software needs to average out the black and
white. If it does a simple arithmetic average, it'll probably choose
127,127,127 once it's downscaled enough to become uniform in color. But if you
compare that uniform color with the original black and white lines, it looks a
different shade - a darker shade. That's because 127,127,127 isn't 50% grey,
it's not in the middle between black and white.

The article has further explanation.
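
That averaging can be sketched in a few lines of Python. This is a toy
illustration only, assuming a plain 2.2 power law in place of the real
(piecewise) sRGB transfer curve:

```python
GAMMA = 2.2  # assumption: simple power law standing in for the sRGB curve

def to_linear(v):
    """8-bit gamma-encoded value -> linear light in [0, 1]."""
    return (v / 255.0) ** GAMMA

def to_encoded(x):
    """Linear light in [0, 1] -> 8-bit gamma-encoded value."""
    return round(255.0 * x ** (1.0 / GAMMA))

stripes = [0, 255] * 4  # alternating black and white pixels

# Naive downscale: average the encoded bytes directly.
naive = sum(stripes) // len(stripes)  # 127 -- looks too dark

# Gamma-aware downscale: average in linear light, then re-encode.
mean_light = sum(to_linear(v) for v in stripes) / len(stripes)
aware = to_encoded(mean_light)  # 186 -- matches the stripes' perceived brightness
```

Under this curve the perceptual midpoint of black and white encodes as 186,
not 127, which is why the naively downscaled image looks darker.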

------
RiderOfGiraffes
Reasonably extensive discussion from the last time this was submitted:

<http://news.ycombinator.com/item?id=1141971>

~~~
joshwa
huh. not sure why the submit form didn't notice the older discussion?

~~~
ralph
They are indeed the same. How did it pass?

        $ foo()
        > {
        >     wget -qO- "${1?}" |
        >     sed -n '/.*title"/!d; s///; s/">.*//; s/.*"//p; q'
        > }
        $ cat <(foo 'http://news.ycombinator.com/item?id=1141971') \
        >     <(foo 'http://news.ycombinator.com/item?id=1523991') |
        > uniq -c
              2 http://www.4p8.com/eric.brasseur/gamma.html
        $

~~~
RiderOfGiraffes
Based on observations, but not an examination of the current code base, I
would say that matching URLs are only flagged for a limited length of time, so
repeats are allowed after enough time has elapsed.

------
rflrob
I think the claim that this will affect scientific data is a little alarmist.
Most of the data I work with comes from a CCD, which I'm almost certain has a
linear scale between input brightness and pixel value. But more importantly, I
wouldn't ever scale my images down, except possibly for publication, and by
that point I've usually tweaked the brightness and contrast anyway.

~~~
jacquesm
> Most of the data I work with comes from a CCD, which I'm almost certain has a
> linear scale between input brightness and pixel value.

CCDs are definitely less than ideal as a sensor; unless you've calibrated the
specific unit you are using and applied a correction curve, you can definitely
expect some non-linearity.

Some professional grade CCDs apply a correction internally before passing the
data to the consumer.

Sources of non-linearity include blooming (the leakage of current from
brightly lit cells to nearby dark cells), A/D converter non-linearity and
temperature effects.

I'm assuming you are talking about a professional grade CCD with an internal
correction, but the blooming issues are part and parcel of the medium and when
you have high contrast images as input you really have to be aware of that.

------
augustl
The article is almost impossible to read on an iPad, since Safari scales all
web pages by default. The very first example is just a gray blur, which is
ironic, since the article explains exactly how that image becomes a gray blur
when scaled incorrectly.

------
Entlin
The latest Photoshop (CS5) gets it wrong.

For giggles I fired up pro compositing tool Nuke (www.thefoundry.co.uk), which
has a choice of 8 resize algorithms, and 7 of them get it right.

What does that mean? I guess that Photoshop isn't really a professional app.

~~~
dtf
Bill Spitzak - original developer of Nuke (and FLTK) - wrote the following
pages on this very subject back in 2002:

<http://mysite.verizon.net/spitzak/conversion/index.html>

------
bryanh
This type of article is definitely interesting, but it strikes me as similar
to the comparison of audiophile and consumer cables, or lossless vs. properly
encoded high quality lossy formats. 99.9% of the population can't tell the
difference and don't care except in obvious edge cases, so how much time
should you waste on that 0.1%?

~~~
jrockway
The real question is: how much time should you waste writing an article when
you can just send a patch to ImageMagick instead?

Talking about a problem is nice, but fixing it is even better.

~~~
TGJ
If no one understands why the problem existed in the first place, it will be
repeated.... Sound familiar?

------
AndrewHampton
Interesting that Google's Chrome 5.0 doesn't display the browser examples
correctly, but the browser on my Android phone does.

Link to examples: <http://www.4p8.com/eric.brasseur/gamma_dalai_lama.html>

~~~
jarin
They work fine in Chrome 5 for me; are you on a Mac or Windows?

~~~
eru
My Linux Chrome (6.0.453.1 dev) fails, too.

------
yason
I don't know this area intimately, but it seems to me that the
ever-problematic gamma is one of those problems whose solution should lie in
the realm of hardware instead of software.

Why couldn't we have graphics cards, monitors, printers, cameras, camcorders
and other devices that perform the mapping from linear values to
gamma-encoded values (or vice versa)? Then a linear scale would be the
standard and all
software could continue to live on the nice and comfortable illusion that 127
is half the brightness of 255 and everyone would be happy. It's just that when
a pixel of value 127 was shown on the screen it would actually show up as what
we currently get from 180 or so.

~~~
jacobolus
In the not impossibly distant future (that is, we could do it now, but it
might be slow enough that a few people would complain) we’ll be able to do all
intermediate processing with floating point, and it won’t be an issue.

Using 127.5 to represent half and 255 to represent 1 is completely
unintuitive.

~~~
yason
How would using floating point avoid the conversion from linear to exponential
range? You will still have to convert between brightness and voltage. And you
would like to do gamma conversion in floating point anyway.

For the second point, programmers are familiar with base-2. For decimal
values, both end-users _and_ programmers confuse 127/128 vs. 255/256 for being
half the latter. Besides, end-users already see a [0.0, 1.0] floating point
range for color values in graphical user interfaces, and would expect 0.5 to
give half the intensity that 1.0 does.

~~~
jacobolus
Obviously you take integer gamma-compressed values, convert to floats, and
then un-gamma-compress those, doing intermediate math on the linearized
floats, and then gamma compressing and converting back to integers when you
need to save out a file of some sort.

For showing the amount in each component to users, it's possible to use either
linear or gamma-compressed values, depending on the goal, but either way
showing a decimal makes it a lot easier to understand than showing a fraction
of 255.
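
A minimal sketch of that round trip in Python (assuming, for illustration, a
plain 2.2 power curve rather than the exact piecewise sRGB function):

```python
GAMMA = 2.2  # assumption: simple power law, not the exact sRGB curve

def decode(v):
    """Gamma-encoded byte -> linear-light float in [0, 1]."""
    return (v / 255.0) ** GAMMA

def encode(x):
    """Linear-light float, clamped to [0, 1] -> gamma-encoded byte."""
    return round(255.0 * min(max(x, 0.0), 1.0) ** (1.0 / GAMMA))

def process(pixels, op):
    """Decode to linear floats, run `op` on them, re-encode to bytes."""
    return [encode(x) for x in op([decode(v) for v in pixels])]

# Example: halve the physical light emitted by each pixel.
halved = process([255, 186, 127], lambda lin: [x / 2 for x in lin])
```

Note that halving the emitted light turns 255 into 186, not 127: the
intermediate math happens on linear floats, and only the saved file sees
gamma-encoded bytes.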

------
virtualritz
The title is slightly exaggerated, as pretty much all top-grade image editing
software uses linear space internally, and hence gets this right. Examples are
DigitalFusion, The Foundry's Nuke and Apple's discontinued Shake.

When working in a space that has any profile burned in, all non-floating point
data must also be promoted to a wider bit depth than the input or else the
linearization will introduce banding.
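
The banding is easy to demonstrate by counting how many distinct codes survive
linearization at different bit depths (again with a 2.2 power law standing in
for a real profile curve; the exact counts depend on the curve used):

```python
GAMMA = 2.2  # assumption: plain power-law stand-in for a real profile curve

# Linearize every 8-bit code and re-quantize at the same 8-bit depth:
codes_8bit = {round(255 * (v / 255) ** GAMMA) for v in range(256)}

# Linearize, but promote to 16 bits first:
codes_16bit = {round(65535 * (v / 255) ** GAMMA) for v in range(256)}

# Many dark input codes collapse onto the same 8-bit value, so smooth
# gradients band; at 16 bits nearly every input code stays distinct.
print(len(codes_8bit), len(codes_16bit))
```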

Memory (RAM) used to be scarce when people started writing such programs. The
reason this problem still exists nowadays is, IMHO, twofold:

(1) lack of understanding of the problem per se, and (2) the [hardware &
processing] constraints of the systems such software ran on 15 years ago.

Neither is an excuse, of course.

------
Geee
I wonder if GPU hardware does it right or wrong on texture rendering. Someone
care to test?

~~~
jacquesm
GPU hardware runs software designed for it, so in the end if it doesn't work
properly it's not the hardware that's at fault.

~~~
jckarter
Texture sampling is still performed with specialized hardware. It does
bilinear or trilinear sampling, which would be "wrong" by the original
article's definition.

~~~
miloshh
Yes, but I think the GPU designers did the right thing here by not injecting
"magic" gamma-correction functionality into the texture sampler. There are
many operations done on textures before they become pixel colors, plus they
are often used to store non-image information (positions, normals, CT
densities) where gamma correction would be incorrect. If you care about correct
tone-mapping (most games probably don't), you can correct your textures
beforehand.

~~~
jckarter
Oh, I agree entirely. I just called it "wrong" in response to Geee's original
question. Should have clarified.

------
ck2
So is there a simple solution for Windows based web designers?

I'll write to the Irfanview author but I don't know if there would be any
change soon.

------
mikecane
I use the free Photo Toolkit (it's good enough for basic blogging: scale,
rotate, crop) and tried this test. It has this flaw too.

------
nailer
I've read this on HN before, but while we're on the topic (hopefully there'll
be a few smart graphics people here), I noticed that the thumbnails my app
generates via PIL have subtly different colors than the original. I'm not
doing anything other than resampling. Anyone know why this occurs?

------
ovi256
Photoshop CS 4 on Mac OS X gets it wrong, aargh ...

~~~
ovi256
But Preview doesn't, because the Core Image library uses the correct
algorithm! Ps CS 4 implements its own. Reading further, you can do it
correctly in Photoshop if you use the right options.

~~~
pohl
I wonder if this means that Safari does not use Core Image for this.

------
hackermom
Safari 5.0 on OS X gets it wrong
(<http://www.4p8.com/eric.brasseur/gamma_dalai_lama.html>); just a gray image
with subtle, barely visible pink and green contours.

~~~
wzdd
Preview.app, however, appears to get it right -- just tried it on the test
image using tools->adjust size.

