
IPhone 4’s ‘Retina’ Display Claims Are False Marketing - yumraj
http://www.wired.com/gadgetlab/2010/06/iphone-4-retina/
======
teilo
Near the end of the article:

"Sharp Quattron’s fourth primary color is yellow, and there is nothing for it
to do because yellow is already reproduced with mixtures of the red and green
primaries, he said."

This 20-year display "expert" has never heard of hi-fidelity color?
Inexcusable. It is accurate to say that yellow is not a primary color of
light, so it is true enough that no one should refer to the yellow pixels as a
"fourth" primary color. However, it is completely false to say that "there is
nothing for it [the yellow pixels] to do". The yellow pixels are used to
extend the gamut of the device to cover portions of the visible spectrum that
the primary LEDs, in combination, cannot.
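
To make that concrete: a highly saturated spectral yellow falls outside the triangle spanned by idealized sRGB primaries on the CIE 1931 chromaticity diagram. Here's a rough sketch of the check (the chromaticity values are textbook approximations, not measurements of any actual panel):

```python
# Rough sketch: does a near-spectral yellow fall inside the triangle spanned
# by the sRGB primaries on the CIE 1931 xy chromaticity diagram?
# All chromaticity values below are approximate textbook figures.

def sign(p, a, b):
    """Signed area telling which side of edge a->b the point p lies on."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def inside_triangle(p, a, b, c):
    s1, s2, s3 = sign(p, a, b), sign(p, b, c), sign(p, c, a)
    return (s1 >= 0 and s2 >= 0 and s3 >= 0) or (s1 <= 0 and s2 <= 0 and s3 <= 0)

R, G, B = (0.64, 0.33), (0.30, 0.60), (0.15, 0.06)  # sRGB primaries (xy)
yellow_580nm = (0.5125, 0.4866)                      # spectral yellow, approx.

print(inside_triangle(yellow_580nm, R, G, B))  # False: outside the sRGB gamut
```

So even before you get to real-world LED impurities, a pure spectral yellow already lies outside the idealized RGB triangle.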

We do this all the time with Hi-Fi printers. We add RGB or Orange, Green, and
Violet to the traditional CMYK primaries to expand the gamut of the device far
beyond what traditional process color can accomplish.

~~~
evgen
> The yellow pixels are used to extend the gamut of the device to cover
> portions of the visible spectrum that the primary LEDs, in combination,
> cannot.

What color in the visible spectrum cannot be expressed as a combination of
red, green, and blue? Octarine perhaps? Adding additional colors only helps if
your pixels are large and putting a red and a blue dot next to each other when
aiming for purple ends up just looking like two dots of different colors
instead of a combined dot.

~~~
teilo
Your sarcasm does not become you. Neither does your ignorance. If you actually
worked in the field of color management, like I do, you would know that, as
with subtractive pigments, so with LEDs: There is no such thing as a pure
primary in the real world. It is impossible to create an LED or a color of ink
that is mathematically perfect pure Red, Green, or Blue. Because of this it is
impossible for an RGB-only display to display all colors that are visible to
the human eye. (Let's exclude UV-fluorescence from the equation for the sake
of simplicity.)

RGB displays use color correction curves and various profiling tricks to
correct for the difference between the pure primaries and the actual light
being output by the LEDs.
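
At their simplest, those correction curves are per-channel tone curves, i.e. 1-D lookup tables. A toy sketch (the gamma value and LUT size here are illustrative, not any real device's profile):

```python
# Minimal sketch of a per-channel correction curve as a 1-D lookup table,
# the simplest of the "profiling tricks" a display pipeline applies.
# gamma=2.2 and a 256-entry table are illustrative choices only.

def make_gamma_lut(gamma=2.2, size=256):
    """Build a LUT mapping linear 8-bit values to gamma-corrected ones."""
    return [round(255 * (i / (size - 1)) ** (1 / gamma)) for i in range(size)]

lut = make_gamma_lut()
r, g, b = 64, 128, 192
print(lut[r], lut[g], lut[b])  # 136 186 224
```

A real ICC profile adds a full 3-D transform on top of this, but the idea is the same: bend the values the panel receives so the light it emits lands where the standard says it should.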

Yellow LEDs can extend the color range of an RGB display into areas of the
gamut that the display could not otherwise "hit". The same could be said for
(presumably theoretical) Cyan or Magenta LEDs.

~~~
ugh
How does the technology used to capture the video factor into that?

I would assume that with capturing only RGB you already limited the color
space – the information is lost – and you cannot extend it, no matter what you
try.

I know that, when printing, additional colors can help because the RGB space
doesn’t map exactly to the CMYK space. There are RGB colors you just cannot
get with CMYK.

Are the RGB(capture) and RGB(display) color spaces so different that
additional colors can help? I would assume they would have to be, wouldn’t
they?

~~~
dedward
You cannot get back information - no, although you can artificially guess and
make something that looks better in some situations.

But just because you recorded in one RGB space doesn't mean the particular
display you are using can reproduce that exact RGB space. Take any dozen RGB
flat-panel displays and see if their color gamuts match up exactly; they
probably don't. So while I'm ignorant of whatever the yellow sub-pixel is
trying to achieve here, whether it's just fluff or actually filling out a
deficiency in their LCD's gamut, either way their display with the added
yellow pixels can _produce_ colours that their previous displays could not.

~~~
ugh
I have no doubts about its ability to display a wider range of colors. It’s
just that after attending a few lectures about data transmission I rather got
the feeling that those responsible for designing the compression and coding
would rather quit and sail the South Pacific than transmit stuff that’s not
actually used.

That might just have been a by-product of all the compression and coding I
learned about being really old (PAL, NTSC), newer stuff might be a lot more
forward looking (as teilo suggested).

------
amayne
Wired's headline is false journalism...

Phil Plait, a Hubble astronomer politely debunks the Wired article here:
[http://blogs.discovermagazine.com/badastronomy/2010/06/10/re...](http://blogs.discovermagazine.com/badastronomy/2010/06/10/resolving-the-iphone-resolution/)

~~~
amirmc
Phil didn't point out that pupil dilation is also a factor (presumably for the
sake of simplicity). You might be able to resolve two points in one lighting
condition but not in another.

------
tomerico
_the eye actually has an angular resolution of 50 cycles per degree._

My calculations:

degrees in one inch - arcsin(1/12) = 4.78

Pixels in eye for one inch, 12" away - 4.78 * 50 = 240

Same calculation for 10" - arcsin(1/10) * 50 = 287

I think something is wrong with the numbers in the article.

~~~
gjm11
50 _cycles_ per degree is not the same as 50 _pixels_ per degree.

If your eye can resolve 50 cycles per degree, that means it can tell the
difference between a uniform grey and something that alternates between black
and white 50 times per degree. To display such a pattern, you'd need 100
pixels per degree (black, white, black, white, ..., with 50 pairs).
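
Plugging in the iPhone 4's numbers makes the gap concrete. A back-of-envelope sketch (326 ppi and the 12-inch viewing distance are the figures from the article; 50 cycles/degree is Soneira's acuity figure):

```python
import math

PPI = 326             # iPhone 4 pixel density
DIST_IN = 12.0        # viewing distance in inches
CYCLES_PER_DEG = 50   # Soneira's figure for the eye's acuity

# Inches subtended by one degree of visual angle at the viewing distance
inch_per_degree = DIST_IN * math.tan(math.radians(1))

display_ppd = PPI * inch_per_degree  # pixels per degree the display delivers
needed_ppd = 2 * CYCLES_PER_DEG      # one black + one white pixel per cycle

print(f"display: {display_ppd:.1f} px/deg, needed: {needed_ppd} px/deg")
```

So by the strict cycles-per-degree reading, the display comes in at roughly 68 pixels per degree against the 100 that 50 cycles/degree would demand, which is exactly the gap the article is arguing about.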

~~~
abeppu
I don't think it's entirely clear from the article what 'cycle' should mean in
this context. But 1/50 of a degree matches relatively closely the traditional
Snellen (as in the guy who made the eye charts) definition of normal vision
(20/20, or 6/6 in metric countries
<http://en.wikipedia.org/wiki/Visual_acuity#Normal_vision>) as being able to
discern letters whose features subtend 1 minute of arc (i.e. 1/60 of a
degree). At 12 inches, the angle subtended by a pixel (which is, I think, the
corresponding minimum feature size) is arcsin((1/326 in)/12 in), about 0.88
minutes (that is, less than the 1-minute Snellen definition of normal vision,
which is in turn smaller than the 1/50-degree figure that this guy gives).
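
A quick back-of-envelope check of that 0.88 figure (assuming the same 326 ppi and 12-inch numbers as above):

```python
import math

PPI = 326       # iPhone 4 pixel density
DIST_IN = 12.0  # viewing distance in inches

# Angle subtended by one pixel at the viewing distance, in arc-minutes
pixel_arcmin = math.degrees(math.asin((1 / PPI) / DIST_IN)) * 60
print(f"{pixel_arcmin:.2f} arc-minutes per pixel")  # ~0.88
```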

So I think Soneira is off base. But I think the much bigger problem is
that both Soneira and the 1 minute definition are talking about the acuity of
a 'normal' person, and seem to be largely ignoring the issue of a significant
variation in the population. My question is, for what portion of the
population does this display exceed the limits of their visual acuity?

~~~
gjm11
The definition in that Wikipedia article says that 20/20 vision means being
able to resolve two points separated by one arc-minute. Again, on the output
side that would mean being able to display two points at that separation with
something contrasting in between.

On the other hand, elsewhere Wikipedia says that it means being able to
distinguish Snellen optotypes whose total size is one arc-minute. On the face
of it, that implies more resolution than two pixels per arc-minute. (Hand-
wavily, maybe it's about the same: if you can resolve 2x2 pixels in a square
of side 1 arcminute, then for high-contrast Snellen-type images that maybe
suggests that you ought to be able to distinguish about 2^(2x2) = 16 different
such images, and a Snellen chart actually uses 10 or 12 different optotypes,
depending on whether it's one of Snellen's original ones or a modern variant.
That's pretty close to 16.)

Phil Plait's blog entry that someone else linked to talks about the difference
between "normal" and "ideal", and concludes that an average person looking at
a new iPhone at 1' distance will indeed not be able to resolve the pixels
(though not by much).

~~~
abeppu
Actually, wrt the Snellen optotypes, the entire optotype subtends 5 minutes (I
couldn't find the 'elsewhere' you're talking about), but distinguishing them
requires that you be able to resolve features 1 minute in size. In fact, on
Snellen charts, the letters are carefully designed to reflect this. For
instance, on the 'E', the width of each bar of the 'E' is equal to the width
of the white space between bars.
[http://en.wikipedia.org/wiki/Snellen_chart#.2220.2F20.22_.28...](http://en.wikipedia.org/wiki/Snellen_chart#.2220.2F20.22_.28or_.226.2F6.22.29_vision)

So yeah, if you drew out a tiny "E" (or "P" or "F", I don't think it matters)
on a 326 ppi screen a bit more than 12 inches from the viewer's eyes, where
the width of each bar was 1 pixel and the space between bars was 1 pixel, then
I think that would match up closely with the standard for normal vision, at
least in terms of visual acuity for one eye.

~~~
gjm11
Actually, now I can't find the "elsewhere" either and I wonder whether I
misread. Having looked again, I agree with you: standard-according-to-Snellen
vision means being able to resolve features corresponding to (e.g.) an "E" on
a 5x5 pixel grid with pixels of size 1 arc-minute.

Matching that up with the resolution of a _display_ device is still a bit
subtle. For instance, suppose you're trying to display an "E" of that size on
the display, but it's offset by half a pixel vertically. Result: you get a
grey rectangle that's a bit darker along the left edge. :-)

(I think my conclusion from all this is: what Apple are claiming about the
iPhone 4 display is about as close to the truth as it's reasonable to expect
in marketing materials. That is: everything they've said is at least
defensible, but they've put a very positive spin on everything. Seems fair
enough to me. And as a pixel-freak who isn't currently a smartphone user, I'm
awfully tempted by the new iPhone...)

------
URSpider94
This article is a mess.

First of all, iPhone 4's 300 "pixels" per inch actually comprise three color
sub-pixels each. Most display drivers today can use the color sub-pixels to
carry spatial as well as color information, so that would bump the performance
of this display comfortably over the detection threshold of the human eye.

Second, the throw-away comment that "magazines are printed at 300 dots per
inch" is incorrect and misleading. Dots are not the same as pixels: a printed
dot is either "on" or "off". A magazine printed at 300 dpi has a substantially
lower actual resolution than a 300 pixel-per-inch display that can render
18-24 bits of color at each pixel.
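
To put rough numbers on that trade-off: a halftone screen built from on/off dots spends spatial resolution to buy gray levels. A sketch (the cell sizes are illustrative):

```python
# Rough sketch of why 300 binary dots/inch carries far less tonal detail than
# 300 eight-bit pixels/inch: a halftone screen trades spatial resolution for
# gray levels. An n x n cell of on/off dots can render n*n + 1 distinct levels.
DPI = 300

def halftone(n, dpi=DPI):
    """Gray levels and effective screen ruling for an n x n halftone cell."""
    return n * n + 1, dpi / n

for n in (4, 8, 16):
    levels, lpi = halftone(n)
    print(f"{n}x{n} cell: {levels} gray levels at {lpi:.1f} lines/inch")
```

At an 8x8 cell, a 300 dpi printer manages only 65 gray levels at an effective 37.5 lines per inch, nowhere near 256 levels per channel at 300 pixels per inch.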

Finally, as others have mentioned, the slamming of Sharp's Quattron technology
is wrong-headed. Adding a yellow pixel can significantly increase the color
gamut of an LCD display, since in many cases the color resists used to make
the RGB color filters do a particularly poor job of producing yellows.

------
pjdavis
I think "marketing puffery" is an apt description for this article.

~~~
Tamerlin
It's not like Apple is a stranger to marketing puffery... remember the SPEC
CPU claims for the G5?

------
someone_here
Is this important?

~~~
gte910h
It will likely cause purchases of the device that would not otherwise happen.

In the US, your advertising is required to be truthful or, at worst, to make
only vague claims. Those images are neither.

Now, I honestly believe Jobs misunderstood the science, as he's a suit, not
an eye doctor. But now that they've been corrected, they shouldn't repeat the
claim until they can show it's true.

------
zyb09
Well, it's a 960x640 display, which is pretty cool, no? I don't know why
they've come up with this whole "retina" BS, but that's just Apple, I guess.

------
latch
A CEO "pushed it a little too far" ? SHOCKED!

