
A Proposal for a High Resolution Display (2009) - jevinskie
https://bernd-paysan.de/hires.html
======
jacobolus
The ideal would be to use hexagonal pixels for camera sensors, displays, and
printer dither/halftone patterns, the same way human retinas arrange their
cone cells. :-) Cf.
[https://scholar.google.com/scholar?q=hexagonal+image+process...](https://scholar.google.com/scholar?q=hexagonal+image+processing)

Since we have so much variety in screen shapes and resolutions today, most of
the time we’re resampling images on the fly to rotate/scale them into their
position on screen anyway, so the pixel data representing an image in a file
isn’t getting directly sent to the same on-screen grid. Thus for most of what
we display on screen, there’s no inherent need to stick to the same grid shape
used for storage, and a hexagonal pixel grid would in practice look better for
the same number of pixels (hexagonal grids are particularly good for
representing curved shapes and reducing moiré artifacts). For that matter,
there’s no inherent need to store square grids of pixels in our image formats;
we could use hexagonal-pixel files and easily resample them for display on
arbitrary display grids. The only thing stopping us is technical fluency with
rectangular grids, and cultural/historical inertia.

(Think of the iPhone 6+, where software can’t even address the native screen
pixels, and literally everything gets rendered to one size grid and then
resampled for a different display grid.)

Beyond that, people really need to stop using the extremely misleading CIE
1931 (x, y) chromaticity diagram when comparing color gamuts. The CIE 1976
(u', v') UCS diagram is much more uniform, and just as easy to plot (it’s just
a simple linear transformation of the (x, y) diagram). We’ve known the
problems with the (x, y) diagram for >50 years now, there’s really no excuse
for its continuing ubiquity.
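
For reference, the conversion is simple enough to sketch in a few lines of Python (the D65 and sRGB-red coordinates below are the standard published values, used here only as a sanity check):

```python
def xy_to_uv(x, y):
    """CIE 1931 (x, y) chromaticity -> CIE 1976 (u', v') chromaticity."""
    d = -2 * x + 12 * y + 3
    return 4 * x / d, 9 * y / d

# Sanity checks against standard values:
print(xy_to_uv(0.3127, 0.3290))  # D65 white point, ~(0.1978, 0.4683)
print(xy_to_uv(0.64, 0.33))      # sRGB red primary, ~(0.4507, 0.5229)
```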

For making diagrams of pixel grids, it would also be better to show less
intensely colored RGB pixels, so that the G’ pixel doesn’t look so obviously
less intense.

~~~
dietrichepp
On the iPhone 6+, software can, in fact, address native screen pixels. This is
even quite easy. However, it is not the default API behavior: the application
must specifically request device-native pixels by setting a certain flag.
General GUI apps typically don't bother because it's easier that way; games
would flip the switch for performance, and more sophisticated apps (photo
apps, perhaps) would flip it as well.

~~~
panic
Here's more info about how the iPhone 6+ scales pixels in various situations
if anyone's curious: [http://oleb.net/blog/2014/11/iphone-6-plus-
screen/](http://oleb.net/blog/2014/11/iphone-6-plus-screen/)
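
A quick sketch of the arithmetic involved (the 414×736-point logical size and 3× scale factor are the publicly documented figures for the 6 Plus):

```python
# iPhone 6 Plus: UIKit renders to a 3x version of the 414x736-point
# logical canvas, then the result is downsampled to the physical panel.
logical_w, logical_h, scale = 414, 736, 3
render_w, render_h = logical_w * scale, logical_h * scale  # 1242 x 2208
panel_w, panel_h = 1080, 1920
downsample = panel_w / render_w  # ~0.87, same factor in both axes
print(render_w, render_h, round(downsample, 4))  # 1242 2208 0.8696
```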

------
userbinator
Digital cameras have used LCDs with non-square pixels for years, but the
problem with such an arrangement is that you can't get sharp text (or any
other fine horizontal/vertical line detail): things end up looking blurry or
unusually jagged vertically/horizontally. Photography is (mostly) fine because
the content is largely smooth gradients.

 _The red and blue pixels can be used for sub-pixel anti-aliasing, which can
improve perceived resolution and rendering accuracy further._

Subpixel AA just makes my eyes water and I feel dizzy after a few minutes,
probably because there are no real edges to focus on. I'm probably in the
minority, but not the only one who prefers pixel-sharp text.
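
For reference, the mechanism being described, in a minimal sketch (this deliberately omits the per-channel low-pass filtering that real rasterizers such as ClearType apply to tame color fringing):

```python
def subpixel_pack(coverage_3x):
    """Map glyph coverage rasterized at 3x horizontal resolution onto
    R, G, B subpixels: each consecutive triple becomes one pixel."""
    assert len(coverage_3x) % 3 == 0
    return [tuple(coverage_3x[i:i + 3]) for i in range(0, len(coverage_3x), 3)]

# A vertical stem one subpixel wide lights up a single color channel --
# without further filtering, that's exactly the fringing some viewers notice.
print(subpixel_pack([0.0, 1.0, 0.0, 0.0, 0.0, 0.0]))
# [(0.0, 1.0, 0.0), (0.0, 0.0, 0.0)]
```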

~~~
jacobolus
If your antialiasing is making things look fuzzy, then you are either sitting
too close to the display, or your display has too low a resolution.

When you’re talking about digital camera LCDs, do you mean the display on the
back of the camera? Those are mostly awful, I just ignore them. :-)

~~~
frankling_
Isn't your first statement tautological? I'm also in the apparent minority to
whom font antialiasing (subpixel or not) on low-ppi screens, e.g., 22" at
1920x1080, seems a bit blurry, even when leaning back. Certainly though, there
are implementations of different quality: I find ClearType the most
acceptable.

As far as I am aware, I have yet to see non-antialiased fonts rendered on a
high-ppi screen, but it would be interesting to see to what degree
antialiasing is still necessary on such screens.

~~~
jacobolus
Luckily there are now displays with ~22″ diagonal and 3840×2160 pixels.

These look much better than the 1920×1080 ones.

------
Someone
I think this would mostly be a marginal gain (except for the wider color
gamut), especially given how much catching up there would be to do on the
manufacturing side, but it is interesting to think about possible improvements.
Here are some ideas:

\- _"Note that red and blue pixels need to be twice as intensive as green"_
Is there a necessity for pixels to be rectangular? If not, I would go and shop
in
[https://en.m.wikipedia.org/wiki/List_of_convex_uniform_tilin...](https://en.m.wikipedia.org/wiki/List_of_convex_uniform_tilings)
to look for a way to have fewer, but larger red and blue pixels.

[https://en.m.wikipedia.org/wiki/Snub_square_tiling](https://en.m.wikipedia.org/wiki/Snub_square_tiling)
using green triangles looks like a decent candidate to me.
[https://en.m.wikipedia.org/wiki/Elongated_triangular_tiling](https://en.m.wikipedia.org/wiki/Elongated_triangular_tiling)
also seems reasonable. In practice, the ideal pattern would depend on the
relative brightness of the colours in the light source used.

(I guess it would be easier to just make the familiar rectangular pattern with
4 stripes RGBG and/or with variable widths of the color bands at a slightly
higher resolution than to experiment with these)

\- If you are making a display where each pixel directly emits light (as
opposed to an LCD display, where a pixel can be controlled to let through light
from the backlight) and pixels can be made transparent for colors they do not
emit, layer the pixels.

~~~
jacobolus
There’s no inherent reason display pixels (or sensor pixels) must be any
particular shape.

For an amusing idea w/r/t sensor pixel shapes, check out this paper:
[http://www.cis.pku.edu.cn/faculty/vision/zlin/Publications/2...](http://www.cis.pku.edu.cn/faculty/vision/zlin/Publications/2015-TIP-
Penrose.pdf)

------
zbuf
A nice idea.

The trouble is that we rely on displays to reproduce frequencies (i.e.
crispness) beyond the Nyquist frequency of the display.

So with the proposed Bayer pattern, we can no longer represent horizontal and
vertical lines, which are common in text and widgets, with the sharpness we
expect (unlike photographs, where we don't expect these).

We should be looking to satisfy Nyquist with a high enough resolution display
and an optical low-pass filter between the display and the eye. Then things
will all fall into place...
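
A rough sketch of the gap, in Python (the 24-inch viewing distance and the ~50-60 cycles/degree acuity ceiling are assumed rule-of-thumb figures, not measurements):

```python
import math

def display_nyquist_cpd(ppi, viewing_distance_in):
    """Highest spatial frequency a display can represent, in cycles per
    degree of visual angle, at a given viewing distance."""
    px_per_degree = 2 * viewing_distance_in * math.tan(math.radians(0.5)) * ppi
    return px_per_degree / 2  # Nyquist: at most half a cycle per pixel

ppi = math.hypot(1920, 1080) / 22    # ~100 ppi for a 22" 1080p panel
print(display_nyquist_cpd(ppi, 24))  # ~21 cpd, well below ~50-60 cpd acuity
```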

------
mikejmoffitt
This kind of display is poor at representing hard, orthogonal edges, which
make up much of our user interfaces, lots of text rendering, and pixel art.

~~~
Retra
That depends on the real resolution, doesn't it?

~~~
mikejmoffitt
Increasing resolution makes it _better_, but the same resolution with square
RGB arrangements will do a better job for the aforementioned contexts.

------
grabcocque
Isn't that Pentile?

~~~
justinsaccount
Without the rotation, yeah. PenTile is basically

green red green blue green red green blue green

What I don't understand about this proposal is why the rotated grid has 4
different colored squares.

~~~
patrickyeon
The author talks about using two different greens in the 5th paragraph. It
creates a wider colour gamut.

~~~
mark-r
The real question is whether the second green is physically realizable with
cost-effective chemistry. Given that all real-world sources start with 3-color
RGB, it's questionable whether the improvement would be worth the cost. The
Sony F828 camera with its RGBE sensor had the same problem in reverse: there
was no advantage to having a 4th primary when everything had to be down-
converted to the lowest common denominator.

~~~
jacobolus
Sony sensor press release:
[http://www.sony.net/SonyInfo/News/Press_Archive/200307/03-02...](http://www.sony.net/SonyInfo/News/Press_Archive/200307/03-029E/)

I’d love to see some better multispectral cameras and displays. If you can get
the content and hardware working together, you can get substantially better
results than current tech. Of course, as you say, there’s a big chicken–egg
problem.

------
sethish
OLPC did the color swizzling back in 2006
[http://wiki.laptop.org/go/File:PixelLayoutDiagonal.png](http://wiki.laptop.org/go/File:PixelLayoutDiagonal.png)

~~~
geon
So they throw away 2/3 of the graphics memory?

------
trevyn
(2009)

