
I really don't like the approach of hiding the display's real pixel size. By now, we should have solved the problem of displaying GUIs with correctly sized elements at various resolutions. This approach only makes it more complicated to do it right in the future.


Both the Mac and Windows have always hardcoded their PPI values to 72 and 96, respectively. I've never quite understood this.


Apple's is based on the typographic convention of 72 points per inch, thus text on a Mac screen would be drawn the same size as it would print.

According to Wikipedia, Microsoft's 96 is apparently based on screen text being viewed from a different distance than printed text, so they use 96 ppi to account for the difference, which also gave them a few more pixels with which to draw characters. Apple had to draw a 10 point character 10 pixels high, but Microsoft could draw a 10 point character 13 pixels high, giving them more to work with.
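A quick back-of-the-envelope check of those numbers (a sketch, not anything from either OS's actual code):

```python
# Converting a type size in points to pixels at a given screen PPI.
# 72 points per inch is the typographic convention mentioned above.
def points_to_pixels(points, ppi):
    return points * ppi / 72

print(points_to_pixels(10, 72))  # 10.0 px for a 10 pt character on the Mac
print(points_to_pixels(10, 96))  # ~13.3 px on Windows, i.e. 13 whole pixels
```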


> Apple's is based on the typographic convention of 72 points per inch, thus text on a Mac screen would be drawn the same size as it would print.

This would be true if the displays were also 72 dpi, which they are not. DPI in Windows can be set manually. Mine is set to 84, based on the handy on-screen ruler in the settings page. If I view a document in Word's print preview at 100% zoom, and stick a printed page on my display (via static electricity, try it) next to it, they match exactly. Very handy for printing over a page with existing content (paper with preprinted corporate borders, headers).
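The arithmetic behind why a manually measured DPI makes print preview match the paper, sketched below. The 84 DPI figure is from this comment; the 8.5 in US Letter page width is an assumption for illustration:

```python
# If the OS knows the display's true DPI, it can render a document at its
# real physical size: a length in inches just becomes inches * DPI pixels.
def pixels_for_inches(inches, dpi):
    return inches * dpi

# Drawn 714 px wide on an 84 DPI display, an 8.5 in page occupies exactly
# 8.5 physical inches, so it lines up with a sheet held against the screen.
print(pixels_for_inches(8.5, 84))
```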


I think the Mac displays used to be, at least at the maximum size. I vaguely recall Mac monitors being somewhat "lower-resolution" than same-size Windows monitors, just because of this. Maybe it was Mac laptops? Not sure. Certainly the original Mac 128 was fixed resolution.

Nowadays, of course, it's a free-for-all.


http://en.wikipedia.org/wiki/Macintosh_128K#Peripherals:

"built-in display was a one-bit black-and-white, 9 in (23 cm) CRT with a resolution of 512×342 pixels, establishing the desktop publishing standard of 72 PPI"

ImageWriter prints were 144 dpi, and you could verify that the system was WYSIWYG by holding a print in front of the screen. Also, Mac OS graphics used 1/72" pixels for years (with various hacks added soon after in order to support the LaserWriter's 300 dpi).
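The figures quoted above are easy to sanity-check (an illustrative calculation, not from the thread):

```python
import math

# Original Mac: 512x342 pixels at the 72 PPI desktop-publishing standard.
px_w, px_h, ppi = 512, 342, 72
width_in = px_w / ppi    # ~7.1 in
height_in = px_h / ppi   # ~4.75 in
diag_in = math.hypot(width_in, height_in)
print(round(diag_in, 2))  # ~8.55 in, consistent with a nominal 9 in CRT

# ImageWriter at 144 dpi: exactly twice the screen density, so each
# screen pixel maps cleanly to a 2x2 block of printer dots.
print(144 / ppi)  # 2.0
```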

Windows (initially?) had separate notions of device pixels and logical pixels (dialog units?) that allowed for some resolution independence. Its most obvious disadvantage was that it was not simple (nigh impossible?) to know whether two parallel lines you drew looked equally wide.


ISTR that displays also squawk their DPI over EDID. I say that based on experience fiddling with point sizes to get Glass Tty VT220 working right on several different displays; but of course you can also change the dpi with a magic xorg.conf setting, command line option, or xrandr.

Does Windows use the EDID-reported DPI at all?
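For reference, EDID reports the panel's physical dimensions in millimetres, and the DPI follows from that plus the native resolution. The numbers below are illustrative (roughly a 23 in 1080p monitor), not from the thread:

```python
MM_PER_INCH = 25.4

def dpi_from_edid(pixels, size_mm):
    """DPI along one axis, given resolution and EDID physical size in mm."""
    return pixels / (size_mm / MM_PER_INCH)

# 1920 px across a 509 mm wide panel:
print(round(dpi_from_edid(1920, 509), 1))  # ~95.8 DPI
```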


In Windows you can change it to 120 in the display settings. Many UI elements start to look like shit though, because the text doesn't fit, etc.


The same reason why Apple has not changed iOS devices' aspect ratios and "virtual pixel count" since day one. Android did allow arbitrary resolutions and screen aspects, and look at the mess it has caused.


Not much of one? I'm curious to see how the new iPhone fares if it follows the rumors and has an elongated screen.


What would be coming in the future, though, that we'd have to do it "right"? If this is good enough that people can't see pixels, do we ever need to go higher? Resolution seems like one of the few features that has a limit, so this "hack" may in fact be the most elegant and simplest solution for now and in the future.


Hopefully, it's the first step towards truly resolution-independent interfaces. Imagine differences in screen resolution being differences in quality, as opposed to scale. Of course there will need to be a lot of research into UI/UX to make the most of the opportunities.

In a way, when referring to monitors, the word 'resolution' is in itself a misnomer, as we're not actually talking about density but instead about absolute pixel measurements.


Imagine differences in screen resolution being differences in quality, as opposed to scale.

That's exactly what this is, though — the window decorations, for example, in OS X on the new MBP are identical in size to the ones on the "old" MBP, they're just more detailed.

And since they're so detailed (hence the "retina display" moniker) that most people won't be able to see any pixels at all, and since in just a few years all displays will obviously be "retina displays", I think there's a strong argument to be made that resolution-independent interfaces aren't ever gonna happen, and that that's not a bad thing at all. (They've been tried for years and they always end up a burden for the developer and/or not working well for the user.) The limitations of human biology make it so that this simple "hack" is actually the easiest for developers, works the best for users, and won't ever need to be replaced. … OK, yeah, one day we'll all have bionic eyes and we'll have to do the same doubling trick again, but presumably "Bionic-Retina Displays" will be cheap to produce by then. :)
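The "doubling trick" in a nutshell: UI geometry stays in logical points, and only rasterization multiplies by an integer scale factor. A minimal sketch, where `scale_factor` stands in for what Cocoa exposes as the backing scale factor, and the 22 pt title-bar height is assumed for illustration:

```python
# HiDPI rendering: same logical size everywhere, more device pixels
# where the display can show them.
def to_device_pixels(points, scale_factor):
    return points * scale_factor

title_bar_pt = 22                       # logical height, identical on both MBPs
print(to_device_pixels(title_bar_pt, 1))  # "old" MBP: 22 device pixels
print(to_device_pixels(title_bar_pt, 2))  # Retina MBP: 44 device pixels,
                                          # same physical size, more detail
```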


It's still an ugly hack.

While I agree screen densities won't increase anytime soon, it's still an unsolved problem.

IIRC, NeXT did solve it, as did Sun's NeWS.


Without completely eliminating all parameters to Cocoa and CoreFoundation drawing functions that take measurements in pixels and instead using pixel-less measures (sort of like how we specify font size in pts), this is always going to be a "hack".

Maybe one day...?


Huh? All Cocoa drawing takes place in points (they're floats in most cases), not pixels.



