A flat screen is best for general use.
> It is recommended that the reference pixel be the visual angle of one pixel on a device with a pixel density of 96dpi and a distance from the reader of an arm's length. For a nominal arm's length of 28 inches...
It would seem that the reference pixel is simply 1/2688 of the typical distance between your eyes and the device. If a device is meant to be used at half the "arm's length" distance (14 in), the reference pixel on that device would be only half as large. If a device is meant to be used at three times the distance (84 in), the reference pixel would be three times larger. Much easier than angular diameters.
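For what it's worth, that proportionality is easy to sanity-check in a few lines (a sketch; the 28-inch arm's length and 96dpi anchor are the spec's nominal figures):

```python
# The reference pixel works out to 1/2688 of the viewing distance:
# at 96dpi one pixel is 1/96 inch, and (1/96) / 28 inches = 1/2688.

def reference_pixel_size(viewing_distance_in):
    """Physical size, in inches, of one reference pixel at this distance."""
    return viewing_distance_in / 2688

print(reference_pixel_size(28))  # arm's length: 1/96 in, i.e. ~0.0104
print(reference_pixel_size(14))  # handheld at half the distance: half as big
print(reference_pixel_size(84))  # screen at 3x the distance: 3x as big
```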
I suspect I know why I was downvoted; I used unnecessary emotive language ("dumb") and didn't explain my point clearly. Most of the rest of the commenters were focused on one part of the article's point, which is very relevant -- the idea that a pixel is no longer a pixel, but a particular fraction of an inch of screen space. I was complaining about a different part, which is the article author's claim that the function mapping real pixels to CSS pixels is nonlinear (which I think is just a misreading of what the spec intended.)
I worked out a while ago that the comments I expended least effort on were the ones that were most likely to get significantly upvoted (and also downvoted) - longer comments get way fewer votes. My theory is that a single, almost throwaway sentence is easier to agree (or disagree) with, and hence earn a reflexive vote-click. When I write a few paragraphs (or more) as a comment, particularly with researched links and/or data, and thoughts/commentary on those links/data, I get way fewer votes, either up or down.
My initial reaction to this "discovery" was to decide to post shorter, more concise comments. But a few moments of reflection revealed that for me that's a pointless change of behavior, for two reasons: 1) I don't comment with the aim of getting voted up, and shouldn't change my commenting just because there's a metric to be gamed, and 2) at least for me, the bulk of my karma has come from fortuitously being first to submit a popular link (mostly assisted by my non-West-Coast GMT+10:00 timezone) - a submission with several hundred upvotes earns _way_ more karma for far less effort than writing thoughtful comments - and it's clear people are gaming that to pump karma (who was it that posted a while back about seeing bots stalking their RSS feeds to auto-submit new posts? patio11 maybe?)
As you can see, I'm rambling all over the place with this comment - almost certainly in a way that makes it more difficult for readers to choose whether to up or down vote, and I'll guess resulting in neither.
Possibly stupid idea floating around my head right now: what if the voting system allowed you to not just up or down vote a comment, but to selectively up or down vote paragraphs, sentences, or sentence fragments? Maybe I could choose to upvote your "the article author's claim that the function mapping real pixels to CSS pixels is nonlinear (which I think is just a misreading of what the spec intended.)" and possibly downvote another bit (there's not actually any of your post I'd choose to downvote, but maybe, for example, the "I suspect I know why I was downvoted"). Then I'd choose how to split my 1 unit of vote between the bits I want to vote up and the ones I want to vote down - say 2/3rds for the upvote and 1/3rd for the downvote, giving you a total of +1/3rd of a unit of karma - and, more importantly, giving you feedback on why you're seeing the voting numbers you are…
Take this as a sign that HN is going the way of Digg and Reddit, may they rest in peace.
As for the mechanism of action, I think it has to do with people having the need to feel special. The crowd does A, a lonely voice suggests B and people jump on the bandwagon to be different. In other words, it's a mild and misplaced rebellion against the status quo. I've seen this happen countless times on social news sites and it seems to be one of the glitches of the human brain. Follow the white rabbit...
Of course, when you complain about being downvoted it shames people into upvoting you (sort of like when adults bully children into fake-apologizing for something they're not sorry about).
Personally, I'm disgusted at the W3C standard. It's a great idea to have an angular measure (really great) but to call it a "pixel" is horrible. A pixel is the smallest controllable dot on a physical display, and nothing else. Call it an "aixle" abbreviated "ax" and short for "angular pixel" but don't overload the term "pixel".
Personally I've made peace with it. I say either "CSS pixels" or "device pixels", depending on what I want to express.
And although high-resolution screens have only made the difference between the two more visible, it has been there since Opera introduced full-page zoom many years ago, and since Mobile Safari introduced the "viewport" meta tag in 2007.
I mean, think about it. Say you have a 600dpi printer. Take a typical web page that sets its body element to be 1000px wide, because the person writing it was using a 96dpi display. If "px" really meant "smallest controllable dot", that web page would print about 1.66 inches wide. Which is obviously undesirable. On the other hand, if "px" means "the length that looks about as long as one pixel on a 96dpi display", then the same web page would print about 10.4 inches wide (scaled to fit the paper in practice), which is probably much closer to what both author and user wanted.
This is also exactly why Apple did the "pixel doubling" thing on iPhone 4 and iPad 3: it was done to prevent existing content that made certain assumptions about the visible size of "px" from breaking.
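A quick back-of-the-envelope comparison of the two readings of "px" (a sketch; the 96-per-inch anchor is the one CSS uses for print media):

```python
# Printed width of a 1000px-wide body under two interpretations of "px",
# on a 600dpi printer.

CSS_PX_PER_INCH = 96  # the CSS anchor for "px" on paper

def printed_width_in(css_px, printer_dpi, px_is_device_dot):
    if px_is_device_dot:
        return css_px / printer_dpi      # one px = one printer dot
    return css_px / CSS_PX_PER_INCH      # one px = one reference pixel

print(printed_width_in(1000, 600, True))   # ~1.67in: a postage-stamp page
print(printed_width_in(1000, 600, False))  # ~10.42in: roughly as designed
```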
Screens are more accurately measured in PPI (pixels per inch), while the smallest elements a printer can produce (more akin to the 8-bit sub-pixels on a screen) are measured in DPI (dots per inch). Since ink is effectively 1-bit, many smaller elements (dots) arranged in some sort of dithered pattern are needed to represent grays and colors.
With halftone screening, the image elements are called lines, so a 600dpi printer is capable of producing 85–105 LPI (lines per inch).
The lines per inch of print are more analogous to the pixels per inch of a screen than dots per inch are.
So, that 96ppi LCD and the 600dpi printer have around the same information density for practical purposes.
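The usual rule of thumb behind those numbers: a halftone cell of n×n printer dots gives n²+1 tone levels at a screen ruling of dpi/n. A sketch of that tradeoff, not a press calibration:

```python
def halftone(printer_dpi, cell_size):
    """Screen ruling (lpi) and tone levels for an n x n halftone cell."""
    lpi = printer_dpi / cell_size
    levels = cell_size ** 2 + 1
    return lpi, levels

print(halftone(600, 6))  # (100.0, 37): ~100 lpi, in the ballpark of 96ppi
print(halftone(600, 8))  # (75.0, 65): more tone levels, less spatial detail
```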
Sometimes pure black is printed incorrectly and looks "fuzzy" with dots around the edges - in that case it is rendered with a halftone screen.
What would you have used the "device pixel" unit for, exactly?
And that's because we've had "retina" devices for many years now, called "printers" and CSS was designed to deal with that situation from the start.
To my knowledge, all desktop browsers ignore this spec and treat each pixel as a pixel. (This will likely change with the upcoming Retina MacBook Pros.)
For a while, all mobile devices treated each pixel as a pixel too. But then iOS and Android devices began to dramatically increase their DPI. In the case of iOS the math is easy: everything gets multiplied by 2 (though chasing pixel precision in a browser does still require hacks).
Android is much more fragmented (go figure). System-wide, there is a DPI setting that influences the viewport pixel-size that the browser claims. For an 800x480 screen, a 1.5x multiplier is used, so the browser advertises the mobile-standard 320px viewport width.
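The viewport math is just device pixels divided by the density multiplier (what browsers expose as window.devicePixelRatio):

```python
def css_viewport_width(device_px_width, density_multiplier):
    """CSS-pixel viewport width the browser advertises to pages."""
    return device_px_width / density_multiplier

print(css_viewport_width(480, 1.5))  # 800x480 Android phone -> 320.0
print(css_viewport_width(640, 2.0))  # iPhone 4 (960x640)    -> 320.0
```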
For the most part, this is good because websites are easier to design for and look roughly as designed on more devices. On ultra-high DPI devices, they even appear pixel-precise.
The problem is on the very common mid-dpi devices like the millions of 4" 800x480 devices out there. Pixel-level control is lost, and the pixels are large enough for this to be visible. Some people don't care about pixel-level design precision, some people do. Most people, though, will recognize that a webpage looks not-quite-perfect even if they can't put a finger on it.
We're almost out of the woods on phones, as DPI is quickly approaching the upper 200s across the board. Unfortunately we're just entering them for non-iOS tablets.
That's not ignoring the spec, though, it's following it. Where the device pixel is close to the reference pixel (as it is, on desktop browsers), the px measurement is supposed to represent one device pixel. See the CSS 2.1 spec: http://www.w3.org/TR/CSS2/syndata.html#length-units
Not really. Most desktop browsers support pixel scaling with Ctrl+Plus and Ctrl+Minus or user preferences, and they may even remember this setting for each domain.
So not only is a CSS pixel not always a device pixel, but it may be a different (fractional) number of device pixels on different websites.
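Put another way, the mapping is page zoom times devicePixelRatio, and that product is frequently fractional (the zoom steps below are typical browser values, not a standard):

```python
def device_px_per_css_px(device_pixel_ratio, page_zoom):
    """How many device pixels one CSS pixel spans under zoom."""
    return device_pixel_ratio * page_zoom

print(device_px_per_css_px(1.0, 1.1))   # one Ctrl+Plus step -> 1.1
print(device_px_per_css_px(2.0, 0.75))  # hidpi screen, zoomed out -> 1.5
print(device_px_per_css_px(1.5, 1.25))  # mid-dpi device -> 1.875
```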
Gecko has supported fractional pixel values for years in general, though for borders in particular the width is clamped to integer _device_ (not CSS) pixels.
WebKit has been rounding them at parse time (even in cases when 1 CSS px is multiple device pixels) for a while, but they're about to fix that.
I believe that IE also supports subpixel layout. Not sure about Opera, offhand.
As others have mentioned, if a user prints a web page on a 1200dpi printer, your 960px fixed layout is 4/5ths of an inch wide. Your users would not be pleased. Under the device-centric interpretation of px, you'd have to provide a medium-specific style sheet for every possible type of output device. Designers would not be pleased.
Maybe you're trying to use HTML+CSS as a rendering target for a general purpose widget library like Gtk+. In that case, I can understand your frustration.
Using absolute measurements to size things on a web page that is then viewed on a TV (viewing distance in the 5-15ft range), a tablet (viewing distance in the 1-2ft range), and an eyeglass HUD (viewing distance in the 1-3in range) would be a disaster.
The fact that people _were_ using inches and millimeters on the web and expecting them to somehow work across all these devices is why they're all now defined in terms of CSS reference pixels...
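The underlying geometry: to subtend the same visual angle, a length must grow linearly with viewing distance. A small sketch using the spec's nominal 96dpi / 28-inch anchor:

```python
import math

# Visual angle (radians) subtended by one 96dpi pixel at 28 inches.
REF_ANGLE = 2 * math.atan((1 / 96) / 2 / 28)

def size_for_same_angle(distance_in):
    """Physical length (inches) subtending REF_ANGLE at this distance."""
    return 2 * distance_in * math.tan(REF_ANGLE / 2)

print(size_for_same_angle(18))   # tablet in the lap
print(size_for_same_angle(28))   # desktop monitor: back to 1/96 in
print(size_for_same_angle(120))  # TV across the room
```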