CSS px is an Angular Measurement (inamidst.com)
129 points by ned on Mar 12, 2012 | 60 comments



This is dumb. The fact that one pixel subtends a particular visual angle does not imply that N pixels subtend N times that angle. It should be very plain to anybody implementing or using a rendering engine that pixels are intended to be linearly additive. This implies that N pixels will subtend less than N times the angle one pixel subtends. On a flat screen, this is normal and expected, because a pixel further from the eye will subtend less angle than a pixel closer to the eye.


The fact that this measurement is an approximation (and a very good one in fact) doesn't make it "dumb". In fact, it's flat displays that are "dumb". An optimal display would be curved so that each pixel subtends a roughly equal visual angle, and it's only the fact that most displays subtend a relatively small visual angle that allows us to approximate this with flat displays.


A curved display (say, spherical) centered approximately on the "primary" observer's eyeballs, whose pixel elements all subtend equal solid angles, would (a) lead to weird, unintuitive, hard-to-program-for pixel locations the further you wandered from the center-horizontal or center-vertical row/column of pixels; and (b) make it very hard for anyone whose eyeballs aren't smack-dab in the center of the sphere to form an intuitive mapping from their distorted view of the screen to something that makes sense.

A flat screen is best for general use.


I agree that a spherical display the size of a monitor or TV would be unwieldy; however, a head-mounted display with a large field of view would work best if it were curved.


Well, if you want to program for a spherical display there's the 30 foot diameter AlloSphere: http://www.allosphere.ucsb.edu/


You're right, and the article linked here is wrong. The CSS spec doesn't say that px is an angular measurement, and the formula given that is supposed to convert between px and radians has no basis in the spec. The spec specifically defines px as a length; angular measurements only become relevant in calculating the reference pixel, which is a length used to define the size of 1px in circumstances where physical pixels are significantly different in size from the physical pixels of a computer monitor.


OK, so based on the following spec:

> It is recommended that the reference pixel be the visual angle of one pixel on a device with a pixel density of 96dpi and a distance from the reader of an arm's length. For a nominal arm's length of 28 inches...

It would seem that the reference pixel is simply 1/2688 of the typical distance between your eyes and the device. If a device is meant to be used at half the "arm's length" distance (14 in), the reference pixel on that device would be only half as large. If a device is meant to be used at 3x the distance (84 in), the reference pixel would be 3x larger. Much easier than angular diameters.
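To make the arithmetic concrete, here's a quick sketch (the 96dpi and 28in figures are from the spec quote above; the function name is mine):

```python
# Sketch: the reference pixel as a fixed fraction of viewing distance.
# Assumes the spec's nominal values: 96dpi and a 28-inch arm's length,
# so one reference pixel spans 1/(96 * 28) = 1/2688 of the distance.

REFERENCE_RATIO = 1 / (96 * 28)  # ~1/2688 of the viewing distance

def reference_px_inches(viewing_distance_in):
    """Physical size (inches) of one reference pixel at a given distance."""
    return viewing_distance_in * REFERENCE_RATIO

arm = reference_px_inches(28)    # 1/96 in: the familiar desktop pixel
phone = reference_px_inches(14)  # half as large, held at half the distance
tv = reference_px_inches(84)     # 3x as large, viewed from 3x the distance
```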


That's actually a very good point, don't know why you're being downvoted.


I find it interesting that I was downvoted to the negatives, then you posted this reply, and then I got upvoted to the top. I wonder what the causation is there.

I suspect I know why I was downvoted; I used unnecessary emotive language ("dumb") and didn't explain my point clearly. Most of the rest of the commenters were focused on one part of the article's point, which is very relevant -- the idea that a pixel is no longer a pixel, but a particular fraction of an inch of screen space. I was complaining about a different part, which is the article author's claim that the function mapping real pixels to CSS pixels is nonlinear (which I think is just a misreading of what the spec intended.)


<meta discussion about voting>

I worked out a while ago that the comments I expended least effort on were the ones that were most likely to get significantly upvoted (and also downvoted) - longer comments get way fewer votes. My theory is that a single, almost throwaway sentence is easier to agree (or disagree) with, and hence earn a reflexive vote-click. When I write a few paragraphs (or more) as a comment, particularly with researched links and/or data, and thoughts/commentary on those links/data, I get way fewer votes, either up or down.

My initial reaction to this "discovery" was to decide to post shorter, more concise comments. But a few moments of reflection revealed that for me that's a pointless change of behavior, for two reasons: 1) I don't comment with the aim of getting voted up, and shouldn't change my comment behavior just because there's a metric to be gamed, and 2) at least for me, the bulk of my karma has come from fortuitously being first to submit a popular link (mostly assisted by my non-West-Coast GMT+10:00 timezone) - since gaining karma from a several-hundred-upvote submission takes _way_ less effort than writing thoughtful comments - and it's clear people are gaming that to pump karma (who was it that posted a while back about seeing bots stalking their RSS feeds to auto-submit new posts? patio11 maybe?)

As you can see, I'm rambling all over the place with this comment - almost certainly in a way that makes it more difficult for readers to choose whether to up or down vote, and I'll guess resulting in neither.

Possibly stupid idea floating around my head right now - what if the voting system allowed you to not just up or down vote a comment, but to selectively up or down vote paragraphs or sentences or sentence fragments? Maybe I could choose to upvote your "the article author's claim that the function mapping real pixels to CSS pixels is nonlinear (which I think is just a misreading of what the spec intended.)" and possibly downvote another bit (there's not actually any of your post I'd choose to downvote, but maybe for example the "I suspect I know why I was downvoted"), then choose how to split my 1 unit of vote between the bits I want to vote up and the ones I want to vote down, so I could say 2/3rds for the up vote and 1/3rd for the down vote giving you a total of +1/3rd of a unit of karma - and more importantly, giving you feedback on why you're seeing the voting numbers you are…


>I find it interesting that I was downvoted to the negatives, then you posted this reply, and then I got upvoted to the top. I wonder what the causation is there.

Take this as a sign that HN is going the way of Digg and Reddit, may they rest in peace.

As for the mechanism of action, I think it has to do with people having the need to feel special. The crowd does A, a lonely voice suggests B and people jump on the bandwagon to be different. In other words, it's a mild and misplaced rebellion against the status quo. I've seen this happen countless times on social news sites and it seems to be one of the glitches of the human brain. Follow the white rabbit...


Sorry, I mistakenly downvoted you. I only skimmed the article since I already knew the history of px, and then I misread your comment; I thought you were arguing the opposite position (which is indeed dumb and pedantic).

Of course, when you complain about being downvoted it shames people into upvoting you (sort of like when adults bully children into fake-apologizing for something they're not sorry about).


You're right: this implies that a browser would need to render pixels at the edges of the screen differently from the ones in the middle to fully conform to the spec, particularly on very wide (e.g. 30") displays and/or when your face is very close to the screen.


Is anybody else having problems with notifo-based comment notifications? I am receiving them, but the links stopped working sometime between Jan. 22 and March 8.



Oh dear. Thanks for the heads-up; this is sort of unfortunate since (as far as I know) HN doesn't provide any other way to be notified of comment replies.


Try http://hnnotify.com/ ; it works perfectly.


This is one of the more important Hacker News submissions I've seen in a long time. It seems that there is, potentially, a fundamental disconnect between what "px" is supposed to mean and what it means in practice. Given the incredible importance of the web, particularly the front-end of the web, and the extraordinary increases in screen resolution (today the iPhone 4 and iPad 3, tomorrow most computers), it's very important to resolve this discrepancy.

Personally, I'm disgusted at the W3C standard. It's a great idea to have an angular measure (really great) but to call it a "pixel" is horrible. A pixel is the smallest controllable dot on a physical display, and nothing else. Call it an "aixle" abbreviated "ax" and short for "angular pixel" but don't overload the term "pixel".


I'm afraid this boat has long sailed. The "pixel" term is in the CSS 2.1 Spec. It's set in stone.

Personally I've made peace with it. I say either "CSS pixels" or "device pixels", depending on what I want to express.

And although the high resolution screens have only made the difference between the two more visible, it was there since Opera featured full pages zoom many years ago, and when Mobile Safari introduced the "viewport" meta-tag in 2007.


Unfortunately, redefining "px" from its original meaning as "device pixels" to a new meaning of "probably one 96th of an inch except on mobile browsers where ..." means that CSS no longer has any way to express "device pixels".


What are the use cases for expressing "device pixels" in a world of widely varying (both across devices and in time) device pixel densities?


High-resolution devices (that includes iPhone 4 and iPad 3, but also every single printer anyone's used in the last 5+ years) are exactly why the spec says what it says. The only other option was to not have a "px" unit in the spec at all.

I mean, think about it. Say you have a 600dpi printer. Take a typical web page that sets its body element to be 1000px wide, because the person writing it was using a 96dpi display. If "px" really meant "smallest controllable dot", that web page would print about 1.66 inches wide. Which is obviously undesirable. On the other hand, if "px" means "the length that looks about as long as one pixel on a 96dpi display", then the same web page would print about 10.4 inches wide, which is probably much closer to what both author and user wanted.
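A quick sketch of the two interpretations (numbers from the example above; the function is illustrative, not anything in a spec):

```python
# Sketch: printed width of a 1000px body on a 600dpi printer under two
# readings of "px". Hypothetical helper, not any browser's actual logic.

def printed_width_in(css_px, device_dpi, px_is_device_dot):
    if px_is_device_dot:
        return css_px / device_dpi   # 1px = one printer dot
    return css_px / 96               # 1px = the 1/96in reference pixel

naive = printed_width_in(1000, 600, True)    # ~1.67in: unreadably small
spec = printed_width_in(1000, 600, False)    # ~10.4in: roughly screen-sized
```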

This is also exactly why Apple did the "pixel doubling" thing on iPhone 4 and iPad 3: it was done to prevent existing content that made certain assumptions about the visible size of "px" from breaking.


Confusion between px and dpi are common for people who haven't done work for both screen and print:

Screens are more accurately measured in PPI (pixels per inch), while the smallest elements a printer can produce (more akin to each of the 8-bit sub-pixels on a screen) are measured in DPI (dots per inch). Since ink is 1-bit, more and smaller elements (dots) are needed in some sort of dithered pattern to represent grays and colors.

Using halftone screening [1] the image elements are called lines and so a 600dpi printer is capable of producing 85–105 LPI (lines per inch)[2].

The lines per inch of print are more analogous to the pixels per inch of a screen than dots per inch are.

So, that 96ppi LCD and the 600dpi printer have around the same information density for practical purposes.

[1] http://en.wikipedia.org/wiki/Halftone [2] http://en.wikipedia.org/wiki/Halftone#Resolution_of_halftone...
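For anyone who wants to check the arithmetic behind those LPI figures: the classic rule of thumb for AM halftone screening is that a halftone cell is (dpi/lpi) printer dots on a side, giving (dpi/lpi)^2 + 1 renderable gray levels (from all dots off to all dots on). A sketch, with illustrative names:

```python
# Sketch: gray levels available per halftone cell (AM screening rule
# of thumb). Function name is mine; real RIPs are more sophisticated.

def gray_levels(printer_dpi, screen_lpi):
    cell = printer_dpi // screen_lpi   # printer dots per halftone-cell side
    return cell ** 2 + 1

levels_100lpi = gray_levels(600, 100)  # 6x6 cell -> 37 gray levels
levels_85lpi = gray_levels(600, 85)    # ~7x7 cell -> 50 gray levels
```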


Interesting. Do printers use halftone screens for pure black-and-white content as well?


If it's pure crisp black like text, then no.

Sometimes pure black is printed incorrectly and looks "fuzzy" with dots around the edges - in that case it is rendered with a halftone screen.


A pixel is the smallest controllable dot on a screen. The spec shouldn't have changed the definition of this, but introduced a new unit instead. Personally, I'd have both!


Everyone was already using the existing "px" unit because UAs had shipped it. Would you have preferred that every single web page broke on high-res devices (so that no manufacturer would ever introduce any high-res devices to the market) to the status quo?

What would you have used the "device pixel" unit for, exactly?


So basically use em, like we should have all along?


Use em, yes, but meanwhile "px" should have continued to mean "pixel", not "96th of an inch except on mobile where ...". Occasionally you really do need to talk about pixels in CSS, and redefining "px" makes that impossible.


I was at the same time telling myself that we really need to start using em more often. I have used px so many times at this point because it's more convenient for precise aligning. This is actually going to be a problem considering retina display macbooks might be coming soon.


It's not going to be a problem, precisely because a "px" in CSS doesn't mean an actual device pixel.

And that's because we've had "retina" devices for many years now, called "printers" and CSS was designed to deal with that situation from the start.


Interesting. The angular resolution of the human eye is about 0.02° according to Wikipedia (http://en.wikipedia.org/wiki/Naked_eye#Basic_accuracies), which, using the provided calculator on this page, corresponds very nearly to one pixel (0.938 to be exact). Pretty sweet.
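The arithmetic, for anyone who wants to verify it (1/96in pixel at the spec's nominal 28in arm's length; the function name is mine):

```python
import math

# Sketch: visual angle subtended by one reference pixel, compared with
# the ~0.02 degree acuity figure from Wikipedia cited above.

def px_visual_angle_deg(px_size_in=1/96, distance_in=28):
    return math.degrees(math.atan(px_size_in / distance_in))

angle = px_visual_angle_deg()   # ~0.0213 degrees per reference pixel
acuity_in_px = 0.02 / angle     # ~0.94px, matching the comment above
```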


Not only is the definition of pixel unexpected, but 1in is now defined to be 96px in CSS 2.1, because this was the default in Windows for so many years. In retrospect, given whole page zoom from Opera and high-resolution displays from Apple, probably CSS shouldn't have had units named "in" and "px" at all but instead should have had a single unit like SVG.


Ok, but does it matter?


Increasingly.

To my knowledge, all desktop browsers ignore this spec and treat each pixel as a pixel. (This will likely change with the upcoming Retina Macbook Pros)

For a while all mobile devices treated all pixels as a pixel. But then iOS and Android devices began to dramatically increase their DPI. In the case of iOS, the math is easy, everything gets multiplied by 2 (though chasing pixel precision in a browser does still require hacks [1]).

Android is much more fragmented (go figure). System-wide, there is a DPI setting that influences the viewport pixel-size that the browser claims. For a 800x480 screen, a 1.5x multiplier is used. The browser advertises the mobile-standard 320px viewport width.

For the most part, this is good because websites are easier to design for and look roughly as designed on more devices. On ultra-high DPI devices, they even appear pixel-precise.

The problem is on the very common mid-dpi devices like the millions of 4" 800x480 devices out there. Pixel-level control is lost, and the pixels are large enough for this to be visible. Some people don't care about pixel-level design precision, some people do. Most people, though, will recognize that a webpage looks not-quite-perfect even if they can't put a finger on it.

We're almost out of the woods on phones as DPI is quickly approaching the upper 200's across the board. Unfortunately we're just entering it for non iOS tablets.

[1] http://bradbirdsall.com/mobile-web-in-high-resolution


"all desktop browsers ignore this spec and treat each pixel as a pixel"

That's not ignoring the spec, though, it's following it. Where the device pixel is close to the reference pixel (as it is, on desktop browsers), the px measurement is supposed to represent one device pixel. See the CSS 2.1 spec: http://www.w3.org/TR/CSS2/syndata.html#length-units


> To my knowledge, all desktop browsers ignore this spec and treat each pixel as a pixel.

Not really. Most desktop browsers support pixel scaling with Ctrl+Plus and Ctrl+Minus or user preferences, and they may even remember this setting for each domain.

So not only is a CSS pixel not always a device pixel, but it may be a different (fractional) number of device pixels on different websites.
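A sketch of that mapping (the snap-to-whole-device-pixels step is just one possible strategy; the function and its defaults are illustrative, not any browser's actual behavior):

```python
# Sketch: one CSS px maps to (zoom * devicePixelRatio) device pixels,
# optionally snapped to a whole number. Hypothetical helper.

def device_px(css_px, zoom=1.0, dpr=1.0, snap=True):
    raw = css_px * zoom * dpr
    return round(raw) if snap else raw

device_px(100)                  # 100: the classic 1:1 desktop case
device_px(100, zoom=1.1)        # 110 device pixels after Ctrl+Plus
device_px(1, zoom=1.5, dpr=2)   # a single CSS px backed by 3 device pixels
```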


Only to people who freak out when they realize that 1 px = 2 pixels on Retina displays.


That includes me. Making a pixel not be a pixel is surely one of the great pooch-screws of modern standards. 'pt' existed to be resolution-independent.


I'm not a web designer. Can you explain to me why it's important to be able to position/size something to exactly N pixels, rather than to exactly kN pixels (where k is some integer decided by the browser implementor)?


The biggest reason is that most css properties ignore fractional px values—you can't draw a .5px border, for instance.


Which is absurd, because antialiasing exists for that very reason. I should be able to render a single-pixel-thick vertical line between two columns of display pixels, and get two columns of display pixels at 50% (apparent) brightness. Subpixel rendering would be even better.


This is only true in some UAs.

Gecko has supported fractional pixel values for years in general, though for borders in particular the width is clamped to integer _device_ (not CSS) pixels.

WebKit has been rounding them at parse time (even in cases when 1 CSS px is multiple device pixels) for a while, but they're about to fix that.

I believe that IE also supports subpixel layout. Not sure about Opera, offhand.


Images and <canvas> and <embed> use actual pixels. It's hard to make things look right when you don't have the same unit in CSS.


Maybe I'm misunderstanding, but I don't think this is correct; the browser should scale everything. See http://joubert.posterous.com/crisp-html-5-canvas-text-on-mob... for example.


If they had set 1 px = 1 pixel, almost all Web pages would be unreadably small. AFAIK, Web designers don't use pt.


And people would have quickly learned to not use "pixel" unless they really mean "pixel".


You never really mean pixel, so I guess you're arguing for the removal of the px unit completely. (bzbarsky explained this better: http://news.ycombinator.com/item?id=3697227 ) Then designers would start asking for a unit that is resolution-independent but always an integer multiple of pixels — essentially px under a different name. The px unit is really useful, but maybe it should have had a different name; it's too late to argue about it now.


On the rare occasions where I say "px" rather than some scalable unit, I really do mean pixel. For example, I might use px to specify the width of an element that needs to have exactly the same width as an img tag.


On devices where 1px != 1 device pixel, images should be scaled by the same proportion as the px unit. So, if you set an img to 150px wide, and a div below it to 150px wide, they will be the same width, even if 150px == 300 pixels. I'd be happy to learn of any browsers where that is not the case.


If I wanted an image scaled, I'd give it a size in em or similar, not in px.


HTML is a text markup language, not a device interface language. CSS is a page layout language. HTML and CSS are designed for optimum user control as well as designer flexibility. You assume you're designing for one type of device. Designers before you have assumed 96dpi for web pages. The W3C realize these assumptions, realize that the user may want to use a different type of device, and thus define px in terms of DPI.

As others have mentioned, if a user prints a web page on a 1200dpi printer, your 960px fixed layout is 4/5ths of an inch wide. Your users would not be pleased. Under the device-centric interpretation of px, you'd have to provide a medium-specific style sheet for every possible type of output device. Designers would not be pleased.

Maybe you're trying to use HTML+CSS as a rendering target for a general purpose widget library like Gtk+. In that case, I can understand your frustration.


I think the important part is that setting something to be 1000px wide does not mean it will take up 1000 pixels on a 1000 pixel wide screen.


I've mentioned this before and I'm not the first to say it, but it's time to stop using px and start using absolute measurements like inches or millimeters in its place. It's the only sane approach to supporting different resolution displays.


Absolute measurements (which no longer exist in CSS) are a sane approach to support different resolution displays that are all used at the same viewing distance.

Using absolute measurements to size things on a web page that is then viewed on a TV (viewing distance in the 5-15ft range), a tablet (viewing distance in the 1-2ft range), and an eyeglass HUD (viewing distance in the 1-3in range) would be a disaster.

The fact that people _were_ using inches and millimeters on the web and expecting them to somehow work across all these devices is why they're all now defined in terms of CSS reference pixels...


Unfortunately, devices stopped actually making "mm" a millimeter long a while ago; for instance, various mobile devices scale "mm" by a factor of 2 because they assume that sites don't actually want a physical unit of measure.


If you don't mind my asking: which devices?


Take a look at http://robert.ocallahan.org/2010/08/css-units-changes-landed... for some of the details.


Example Moon in px: https://gist.github.com/2025080


This matters more still for a display with head-tracking. As I read this, the apparent angular size of 1px within a human’s FOV ought to remain constant, regardless of viewer position or physical pixels.



