Pixel-fitting – how antialiasing can ruin your logos and icons (dcurt.is)
291 points by danielzarick on May 9, 2012 | 75 comments



One problem with 'pixel-fitting' is that certain letterforms will appear sharper or crisper than others. For example, in the article's PLUS (from Hulu Plus), the P, L, and U are sharper than the S because more of their outer surfaces conform to square pixels, while the S has continuous curves that have to be antialiased all the way around. So you end up with some characters looking overly crisp while the S looks soft. I prefer not to draw attention to specific characters and to have consistent aliasing across all the characters of a logotype. Photoshop has several font rendering modes that let designers choose how aliasing is rendered (Strong, Crisp, etc.) to achieve a desirable result, depending on the typeface and font size chosen.


This is equivalent to TrueType hinting, right? It's funny that hinting went out of fashion over the years, mainly due to a fairly sensible argument from Apple that displays would get better - which they undoubtedly have if you look at an iPad 3. Linux users, who had a choice, started switching from patent-encumbered pixel-fitted full hinting to the softer anti-aliased look, perhaps in part because many of their cool Apple-owning friends told them that it "looked right".

So in a way, because of all this history, the pixel-fitted logo actually looks too sharp, and hence a little uncool.


I don't know, my "cool Apple owning friends" told me for years that unhinted font rendering was better, and I always thought they could pry my Bitstream Vera from my cold dead hands. Then I used OS X full-time for a while. Upon switching back to Linux, I immediately turned hinting off. It just... looked "wrong".


The difference is that with no hinting you optimize for maintaining the shape of the characters, while with hinting you optimize for having the shapes falling on pixels.

“cool Apple friends” have often been graphic-oriented people, so they preferred seeing their fonts right. Windows and Linux people only saw blur because they were less trained to look at fonts.

(I know, this is a generalization, I disabled hinting myself on my Linux machine years ago, and I’m no graphic person at all)


I've actually found that I like neither no hinting (OS X) nor full hinting (Windows). What I've found I like is slight hinting. I get mostly the shape of the letters like you see in OS X, with semi-decent (probably not for an artist) rendering at the font sizes I use. It also gets the letters fit to my pixels, which makes them look nice (I don't have a 300dpi screen where you can't tell the difference).


This sort of thing is a tough call. I'd say the pixel fitted logo looks better if each is considered in isolation, but viewed as a set the anti-aliased ones look better as they look more consistent.

Fonts have a similar issue. The TrueType font hinting made fonts more readable but sacrificed correct horizontal spacing and often made them less aesthetically pleasing IMO. If I were considering a font purely for on screen work without scaling then I think a carefully designed and hinted font looks best, but if there is going to be any scaling (through different font sizes or zooming in or out) then I'll go with the Apple style every time.


I've always felt that the strong hinting used by Windows optimizes for individual glyph legibility by sacrificing kerning and overall readability. I've found that I can't read a whole ebook on a Windows system, but on a Mac I can tolerate it (if I use a large enough font size).


This reads exactly like Microsoft's ClearType patent where they shift the 'rendered' glyph around to maximize the number of 'whole pixels' it uses. Sounds like 'pixel fitting' is the same process (albeit done manually it seems).

Waaaay back when I was at USC, the Image Processing Institute there did some interesting image filter analysis of high frequency images. Basically, when you take a horizontal line and plot it x/y, with X being the horizontal position and Y being the value of the pixel there, you can treat that as a signal and measure its frequency content. If you look at B&W text you will see a series of very sharp transitions from background level to foreground level and back again. These are square waves, which have well defined harmonics. So taking the image and increasing the rate of change between high to low or low to high 'sharpens' it; similarly, if you are rendering an image and you have choices about the function which determines which pixel represents which part of the rendered image, you can do several renders with pixel offsets and maximize the harmonics for maximum sharpness.
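
A minimal sketch of that scoring idea, assuming numpy and grayscale renders (the function names and the cutoff fraction are illustrative, not from the original work):

    import numpy as np

    def high_freq_energy(row, cutoff_fraction=0.25):
        """Treat one horizontal scanline as a 1-D signal and sum the
        spectral energy above a cutoff; sharper black/white transitions
        push more energy into the high harmonics."""
        spectrum = np.abs(np.fft.rfft(row.astype(float)))
        cutoff = int(len(spectrum) * cutoff_fraction)
        return spectrum[cutoff:].sum()

    def pick_sharpest(renders):
        """Given several grayscale renders of the same art at different
        sub-pixel offsets, keep the one with the most high-frequency energy."""
        scores = [sum(high_freq_energy(row) for row in img) for img in renders]
        return renders[int(np.argmax(scores))]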


I actually find the "sharper" Hulu icon to look worse... because the curved parts of the letters stay blurry, while the straight parts are razor-sharp. The "L" and "S" in the "pixel-fitted" version look totally different, like they don't go together at all.

I don't mind the "extra blurriness" all over everything on my screen at all, because the mind gets used to it quite quickly, and it actually allows for more detail, because our brain automatically interprets sub-pixel width changes, curvatures, etc. This is why fonts on Macs have more accuracy and detail in their letterforms: because they aren't hinted, i.e. "pixel-fitted".


DC is starting with a bitmap, not a vector source. He can't move the corner points in a bitmap, just chop off or duplicate rows of pixels.

You're right though; The P and S look too plump, and the L looks too skinny. It's not a good advertisement for the procedure.


He did say that he "probably overcompensated, which resulted in some lost detail in the P and S"


I don't mind the "extra blurriness" all over everything on my screen at all

Add that many of us roll at a zoom level != 100%, immediately undoing all of this pixel-perfection work. Only when you're targeting the iOS devices do you have any realistic control over pixel to pixel mapping.


And retina has made pixel matching obsolete; the screen resolution is higher than the eye can resolve.


When using the vector tools in photoshop, it is important to make sure "snap to pixels" is selected. It fixes this issue when creating things like boxes and buttons.


In Inkscape I think you have to apply a pixel grid (which can be hidden) and turn on grid snapping.


As someone who has been creating digital graphics for broadcast for over 20 years, typically at resolutions that are far lower than anything we have to deal with now, it's interesting to see all this activity lately on HN.

There are times and places where "hand fitting" your pixels is useful, but they are few and far between. Good filtering will almost always get you a better looking result. The constraints on old NTSC monitors are/were horrendous, so you simply have to filter things or you will end up with text and designs that are not only "crunchy" looking, as my assistant used to say, but with images that vibrate, strobe, comb, and burn. Burn like a soul in hell.

The first thing to learn is to almost never work with full-on black or white (or fully saturated color). In broadcast video there are legal reasons for this. But even there, you should always work at least 5% inside your "legal" gamut, or color space. There's plenty of contrast in the remaining space, and it gives you some "headroom" for your filtering to work.

For "white" I like to work down about 15%. The bottom around 10%. This is a starting point, and different designs may make me change things. Anything above or below this should be the result of filtering or as a very, very light application of color, usually feathered and not noticeable. You should be able to "feel" the effect, but not see it.

There's really no getting around the fact that whoever is designing your stuff should know their way around a color wheel. You can use complementary colors to achieve useful contrast. Select your colors to work well together. It's not just a matter of looking "pretty", but a matter of controlling the viewer's eye. It's a form of engineering. And yes, tons of designers don't really get this. The better ones do and are worth the extra money.

Filtering is a matter of making the highly synthetic image you are creating look like something your optical viewing system would find in the real world. All that laborious pixel fitting is simply going to make your image "ring" like a bell if you don't do the other stuff. And if you do the other stuff, correctly, and away from pathological situations, you probably don't need to do the pixel fitting.

If an image is too sharp, it looks fake. Phony. Synthetic. If you are creating a design that is supposed to look like a 15 year old video game graphic, then go for it. (But those were actually pretty blurry and were typically viewed on crummy NTSC monitors.)

So a controlled amount of blur, and I'm not talking about anti-aliasing, can actually help you here. It does help if the image is anti-aliased, but not if the structure of your element is close to the sample size of your image. You shouldn't even be doing that kind of thing. Unless you are creating "small type" that you don't want anyone to read, type should be large enough to filter. If this doesn't fit with your design spec, you've got a design spec done by an amateur.

Now, once you've blurred your image, just a bit, and at the right scale(!), then you can sharpen it a bit. What!?!? But you just blurred it! Yep. Don't go overboard. And don't try it on small details (they probably don't need sharpening anyway. They are small, right?) You can run into trouble on corners, particularly concave ones. Here you might want to do a bit of pixel fiddling, but it should be subtle, and filtered. Don't just go and plunk down a single pixel color change, or a flat color. If you do, that defect will ring like a bell. I used to keep a whole library of Photoshop layers with filtery, blurry corners and edges for dropping into tight spots.
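
A minimal sketch of that blur-then-sharpen pass, assuming Pillow; the radii and amounts are illustrative starting points, not the commenter's numbers, and should be tuned by eye:

    from PIL import Image, ImageFilter

    def soften_then_sharpen(path, blur_radius=0.6, sharpen_percent=60):
        """Blur just a bit at a small scale, then apply a gentle unsharp
        mask on top. Don't go overboard with either step."""
        img = Image.open(path).convert("RGB")
        img = img.filter(ImageFilter.GaussianBlur(radius=blur_radius))
        img = img.filter(ImageFilter.UnsharpMask(radius=1.5,
                                                 percent=sharpen_percent,
                                                 threshold=2))
        return img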

Control your environment. Drop shadows are not a decorative item. Well, they can be, but in this case the thing they do is to create a sense of environment. Soft and fuzzy. Extended, but subtle. You are creating your own fake radiosity space here. And yes, a little creative use of color bleed and fake reflection can work wonders here. But don't overdo it. If the viewer can notice it, explicitly, you've probably overdone it. The window drop shadows on OS X are probably just a touch too strong for my taste. YMMV.

The drop shadow doesn't actually have to be a shadow. It can be thought of as merely a filtered edge.

If you really want to get into it, find a good book on oil painting. These tricks have been around for a while. I can recommend the Andrew Loomis classic from the 1930s-40s, Creative Illustration. It's full of all kinds of filtering tricks. For a more modern book, James Gurney's Color and Light and Imaginative Realism are both quite good. You can also pick up a copy of one of his Dinotopia books, it won't help designing graphics, but, hey, dinosaurs. Right? (He also has an excellent website: http://gurneyjourney.blogspot.com/)

Gradients and ramps are useful, both to break up large flat areas of color and to add depth and a sense of space. But they are also useful on flat, graphic (old sense of the word) designs. A gentle use can place it in the viewer's world space without disrupting the, well, flatness. Just apply the effects uniformly, always being aware of color. Use of these within color fields can be useful and add visual interest and verisimilitude as well. A lighter hand is usually better than screaming "HEY, COLOR GRADIENTS!!!!"

Noise and dithering are also very useful, but can be very dangerous. Less is usually more. Noise is particularly useful on large areas, especially those with gradients. 24 bit color space is actually pretty limited when you start throwing gradients across all your colors.
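
For example, a sketch of adding a whisper of noise to a ramp before quantizing it to 8 bits (assuming numpy; the amplitude is purely illustrative):

    import numpy as np

    def dithered_gradient(width, height, start=40, end=200, noise_amp=0.5):
        """Horizontal gradient with a small amount of noise added before
        quantizing to 8 bits; this breaks up the banding that a clean ramp
        shows in 24-bit color. Less is usually more: noise_amp is in
        8-bit steps."""
        ramp = np.linspace(start, end, width)
        img = np.tile(ramp, (height, 1))
        img += np.random.uniform(-noise_amp, noise_amp, img.shape)
        return np.clip(np.round(img), 0, 255).astype(np.uint8)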

I grey smallish elements slightly, unless I need them to pop (but you probably shouldn't be needing smallish elements to pop!) Over a uniform or low detail background, a bit of transparency can help, but there are technical limitations here on the web (see below).

When I need the user to look at different areas, instead of "glows" or movement in rollovers, I use shifts in saturation instead. High saturation should be used only for directing attention. These days I discourage interactive things like rollovers because they are a crutch, and mobile devices don't support them. (I have been playing around with using saturation with drag and drop, but it's more for fun and to see what happens. If you try this, be careful.)

Finally, you can use all these effects over the entire visual field. This is more useful inside illustrations or in environments where the content is contained. Vignettes and hot spots, if subtly applied can control the eye, filter the entire scene, and control perceived depth. This is harder in typographic environments, like the main areas of web pages and such. You can use these techniques, but you run the risk of looking less like a web page and more like a "Crazy Eddie" TV ad.

This is all mostly for display areas, illustrations and display type. For the reading typographical areas, most of the filtering will need to be done by the OS and you don't have much control over that. You don't want to be screwing around with ham-handed filtering on body type. You can still control color and contrast here, but remember old folks' eyes, and don't unnecessarily grey the type.

In the new world where we can use Canvas and 8 bit alpha transparency, I've been known to slip in a few vignettes and hot spots as environmental effects. But think of the poor user's CPU, and only do this when it either pays off big time or is in a "display" area like a landing page or interstitial page of some type. Even then, try and talk yourself out of it.

For what it's worth, the Hulu logo example in the article looks fine to me. It looks to me like it's been filtered, not just anti-aliased. The "improved" one rings, to my eyes. Particularly in the corners.

YMMV.

Edit: Hinting mostly works best with very small elements, like body type, and even then in high contrast situations at high resolutions. Like laser printers. I've always found it annoying on CRTs and flat panels. I used to keep an old pair of reading glasses around to "filter" my screen. These days, with larger monitors (I use a 24 inch, at an appropriate distance), I don't need any more filtering than my own aging eyes provide. The body type looks just great. Younger, sharper eyed readers may not agree, but just wait a few years ...


Do you have a blog? I found this really interesting and would like to read more.

Also, I'm curious - what are the legal reasons for not working with full black or white in broadcast video?


No blog. I've thought about doing that, though mostly in the context of covering technical aspects for artists and designers. I've often found it easier to teach designers to deal with technical issues than to teach technical people to deal with design issues. I was actually expecting a lot more argument from people about my comment. Every good artist learns early that they can't trust their own eyes ...

The legal brightness and color usage has to do with the fact that certain levels can cause signal distortion. The details are different, but think of overpowering an audio amp and the distortion you can get. Analog video is a real nightmare of limitations and artifacts. Most facilities I've worked at had signal clippers at some point in their signal chain, but those can cause their own problems. Best to stay away from the cliff's edges. These days HD and digital are much nicer, but there are still a lot of NTSC TVs out there. The situation is a lot like continuing to support older versions of IE.


It's not "legal" as in "a matter of law". It's about the technical specifications of television and how television signals are processed and transmitted. In analog TV the transmission uses AM (Amplitude Modulation). You could create signals that over-modulate the transmitter if not careful. Hence the use of the terms "legal" to refer to signals that would pass through the transmitter without causing any trouble. Most stations would use boxes called "legalizers" to prevent bad signals from reaching the transmitter. However, broadcasters generally have specification documents that content providers must follow in order to submit material that will not be rejected by their QC department.


No, it actually was "legal" as in FCC regulations; and you could get fines for broadcasting signal out of spec. (Though I've never known of it happening.)

I'm not sure what the situation is now with digital TV. My last years in the industry were before the mandatory switch, and the places I was working were simulcasting. So we were creating stuff that was NTSC safe. By then most of the software we were using took care of keeping things in spec, but some of the older equipment we were using, mostly character generators and few really cranky old switchers and effects boxes, still required that we be careful about our input levels.


OK, you are probably right. It's been a while. I played in broadcast some time ago.

I think you jogged my memory further. I think the way it worked is that the transmitter over-modulation due to illegal levels would/could interfere with adjacent channels and that's why the FCC made the rules.


Very good comment, though broadcast and video are much harsher environments for graphics than a computer monitor. The resolution is lower as you point out, especially chroma resolution, and there may be interlacing and/or compression going on too. Each of these makes heavier blurring more desirable for high-frequency images such as text.

Regarding this:

So a controlled amount of blur, and I'm not talking about anti-aliasing,

it's worth pointing out that anti-aliasing is a form of blur, but most software usually uses a box filter, which is really a poor filter. I have some demonstration images here:

http://www.freedesktop.org/~sandmann/tigers

Compare these two especially:

http://people.freedesktop.org/~sandmann/tigers/tiger-box.png

http://people.freedesktop.org/~sandmann/tigers/tiger-jinc-ga...

The first one is what cairo produces; the second is improved in three ways: (1) The antialiasing filter is based on the Jinc function rather than a box, (2) white and black are offset slightly from the top and bottom, and (3) the compositing is done in linear light, not sRGB.

The strokes in the second image are also somewhat wider in order to compensate for the lighter appearance that gamma-aware compositing produces.
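
A minimal sketch of point (3), compositing in linear light rather than directly on sRGB values (assuming numpy; this is not the actual cairo code, just the idea):

    import numpy as np

    def srgb_to_linear(c):
        c = np.asarray(c, dtype=float)
        return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

    def linear_to_srgb(c):
        c = np.asarray(c, dtype=float)
        return np.where(c <= 0.0031308, c * 12.92, 1.055 * c ** (1 / 2.4) - 0.055)

    def composite_linear(fg, bg, coverage):
        """Blend foreground over background by antialiasing coverage (0..1),
        doing the mix in linear light and converting back to sRGB."""
        lin = srgb_to_linear(fg) * coverage + srgb_to_linear(bg) * (1 - coverage)
        return linear_to_srgb(lin)

    # Black ink (0.0) over white (1.0) at 50% coverage comes out around 0.74
    # in sRGB, noticeably lighter than the naive 0.5 -- which is why the
    # strokes in the gamma-aware image were widened to compensate.
    print(composite_linear(0.0, 1.0, 0.5))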


I'm using the terminology pretty loosely, trying to match the tone of the discussion here. This is a casual comment. And really too long a one at that.

When I'm saying anti-aliasing I'm really meaning super sampling and weighted assignment of brightness values. There's a lot of ways to do that, all with their own artifacts and advantages.

Filtering is used as a catch all for operations that reduce image sampling and display artifacts.

Most website designers will be using Photoshop and its blur and Gaussian blur filters. Those work just fine. If you need anything fancier, or can make use of it, you probably don't need to have read my comment.

For this type of work I've found that techniques developed by oil painters over the centuries work just fine. A little math goes a long way.

My point is that if you are fiddling with pixels, you probably aren't looking at the whole image, and probably should be. Counter intuitive as it may seem, sharpness is not always your friend. You've got bigger things to worry about.

In the "real world" nearly everything we look at is seen by reflected light, with pigments subtracting frequencies and value from the available light. Edges come and go due to characteristics of our visual system. Brightness levels are incredibly wide compared to what we get on the screen. On a computer screen we are looking at emitted light, with a very strange, and constrained gamut. Edges and areas are defined by additively mixed samples and other artifacts. Go outside some evening, just as the sun is setting, and walk through a residential neighborhood with a lot of trees, houses and such. Very peaceful. Except for the neighbor who has his living room window open and lets his TV shine like a demon in the night, destroying the ambience and mood. (And don't get me started talking about audio!)

A good designer knows how to use those characteristics, the limitations of his/her medium, to recreate the experience of the world, or to control them to create something different that serves their purposes. It's why a painting can look better than even a very good photograph. Or why a highly manipulated photograph can look better than an un-staged one. Portrait studios are not the site of a documentary :)


Thanks to commieneko and ssp for a couple of good posts.

Followup questions:

What do you think about the effect that pixel-alignment produces specifically in the context of this comparison?

There seem to be a couple of basic problems with the "clarity" of the pixel-aligned images (please correct):

- As lines diverge from rectilinear, aliasing is inevitable. Either the aliasing or the anti-aliasing will produce discontinuity.

- High-contrast neighbors on pixel boundaries are more likely to highlight perceptual problems related to frequency.

The part that interests me here is the presentation of this comparison on Dustin Curtis' site. His site seems to be pursuing visual impact as an ultimate goal, and contrast is a big part of that: http://i.imgur.com/UC8ZX.png (OP with histogram overlay)

Does the context minimize the negative effects you've described? Do the filtered images look out of place in such a stark environment?


I'm of the design philosophy that's best exemplified by this old vaudeville joke:

Patient: Doctor, Doctor! It hurts when I do this!

Doctor: Well don't do that!

Thank you, thank you. I'll be here all week ...

Seriously ...

With sampled images displayed on a rectilinear grid, you are going to get into pathological situations that hurt. So don't do that.

One trick you can sometimes get away with is to rotate the work so that it isn't orthogonal to the display grid. Of course then, parts that were okay before may become problems. And the client, bless them, may not like it. Nobody said life was going to be easy.

Contrast at the edges is where you have the most hurt in this case. Drop shadows, very subtle please, can help, as can vignetted edges. I've been known to do unclean things like create a 2 or 3 pixel rule the shape of the edge, blur it, and multiply or screen it on top of the offending parts. You can even dodge and burn it more or less in the nasty bits. (Blending modes are a big topic!) Worst case scenario, try and use the problem as a design element. Once I took a particularly truculent logo and grunged it up with some high frequency noise that was applied with a transparency just so.

Now, one thing I used to do all the time for animation was to use temporal sampling to smooth over the rough edges. Even for seemingly static elements, a little bit of focus wiggle, or even a slow, smooth slide, barely perceptible, will often cover a world of sins. We probably aren't to the point where that kind of thing is going to be useful on web pages for "static" elements, but the day is coming. There are other advantages, as this allows the graphics to "breathe" and seem part of an environment. Of course you may very well not want that effect. But even so, higher resolution displays, faster processors, and more resources will mean that such things, subtly applied, can give us more tools to work with in troublesome situations.

But the best advice, for nearly every case, is "Then don't do that!"


I'm curious how you came to be on hackernews. Usually it's either business or coding type people, with almost disdain for visual design. A recent article had the headline that visual design won't fix your broken business.


I'm a hacker and a coder from way back. When I first started creating digital imagery, back in the late 1970s, there was barely any commercial hardware, in the modern sense, and essentially no commercial software. We dug out the math and physics books and rolled our own.

My background is in fine arts; my degree is in painting, drawing and print making. Image making is a very old technology. Leonardo's notebooks have a lot of good advice for modern web designers. So do the 19th century impressionists. Add in the fact that I'm an amateur astronomer with an interest in optics and the human visual system, and I've got a pretty useful toolbox for making things that people need to look at.

I read the article you mention, and thought it was cute. What a lot of web developers don't realize is that if your visual design doesn't work, then that's one thing that's broken about your project. Visual design doesn't necessarily mean "pretty" or "pleasing to the eye", though that never hurts. Visual design means that your work is communicating correctly and efficiently.

Think about designing a billboard or a 10 second TV commercial. You've got a very narrow window of opportunity to grab someone's attention, amidst all the other distractions, get your message across, and hopefully initiate some kind of action or memory response.

The biggest problem with some designers is that in order to perform in a highly competitive environment they have to become very specialized and attuned to a particular medium and a particular set of requirements. If they've never worked in other mediums they may think that the specialized rules and needs of their medium are general rules. I've seen print designers create billboards that would look beautiful in a magazine spread, but would be a messy blur on the side of the highway as you went zooming past.

I've worked in print, broadcast, movie special effects, animation, multi-media, display, web, charcoal, oil and acrylic paint, silkscreen, wood engraving, copper plate etching, and other even weirder mediums. I once had to design an ad that was to be silk screened onto walnuts. Now there's a tricky venue to master ...

Back when I was teaching design and web development, one thing I would stress was that anyone working in a visual medium ought to learn to draw. Get Betty Edwards' book Drawing on the Right Side of the Brain and go to town. Learning to draw observationally is learning to see; and to see the whole picture, not just the part that happens to be under your pencil at the moment. If you can't draw it, that means you aren't really seeing it; you are merely recognizing it.


Plenty of us care about visual design. Typography posts, in particular, routinely get upvoted. Participants of Hacker News (just like Reddit, or Twitter, or comedians) are not a monolith.


I also care. But this is the guy's (or gal's) profession. I'm curious how they heard about hackernews or came to join and participate.


Hacker News used to have a job board for designers. They seem to have stopped that, more's the pity.

But I've run a couple of start ups in the past, and I've been involved with web design and web applications since the early '90s. So Hacker News is of general interest to me. I'm a geek and a hacker too.


Have you considered that you may just have a different taste?

To me, the fitted hulu logo looks so much better. I like the crispness.


Sure; that's what YMMV means. It's not just a matter of taste either. People have widely varying degrees of visual acuity.

But emitted imagery is going to ring in areas of detail and high contrast. Reflective imagery, not so much. Sharpness is going to create detail and contrast.

If you are going to do this kind of thing, you will have to develop an eye looking at stuff. And then be ready to lose that eye and look at it like a street person again.

My point is look at the whole thing, not the pixels. Those corners ring, I want to look at them. We aren't having a beauty contest here. That logo is a single element on a page full of a lot of other stuff. Do we want it ringing? Hell, do we even want the viewer looking at the damn thing. If I ran that circus I'd probably fade the thing to about 50%. Yes it's got to be there. They should notice it, recognize it, but we probably want them looking at something else.

It's a rule, a law really, that you never, never, NEVER look at a logo or design in isolation when you are making a judgement. You look at it in its venue. If it's a critical use, you will be fiddling with the design, the whole thing, not just the logo, for each use. This isn't an aim, shoot and forget situation.

Edit: I should also point out that when looking critically at a design I use an old visual astronomer's trick: Averted vision. That is to say, I don't look directly at the thing I'm looking at, I look beside it or even a ways off. Your eye isn't a camera. It's a complicated flying spot scanner with a really complicated, variable sensor. Different parts of your eyes "notice" different things.

The center is good for detail and color, but the edges are good for contrast and small variations in same. When a viewer looks at a page, his center of vision is looking at those groovy sharp pixels, but the edges are noticing stuff too, like contrast and movement. Just like the caveman who is looking eagerly at the bright red berries on the bush, while the edges of his eyes are looking for that tell-tale flicker in the underbrush that means a big mean cat is about to jump on him and eat him. You have to be cognizant of what is going on all over the field of view in order to control, or organize, the viewer's experience.


I find it interesting that everything you say applies very well to music (I was a sound engineer before): there is no pure, crisp sound, so add a bit of reverberation (i.e. blur). Usually just turn the knob until you hear it, then turn it back halfway. Defocus your ears to make sure the compressor is not pumping. What you didn't say is that sometimes you want an effect, and then you should turn the knob to its full position.


The real world is noisy and messy. Our senses are evolved to deal with that. When we sense something not noisy or messy, it really stands out. When I first started working with digital imagery, back in the 1970s, it looked very alien and harsh. Of course a lot of it still does to a certain extent, but we've learned to accept it as a kind of visual language. I would expect that most readers of Hacker News can't remember a time when they weren't surrounded by digital images and sounds.


Manual hinting may be fine for simple vector shapes like that, but in most cases it does more harm than good. It's the same debate of windows vs OSX anti-aliasing: windows tries to fit shapes in to the pixels for a sharper look, while OSX favors preserving the exact shapes resulting in more blur.

In the Hulu pixel-fitted example, the typeface is mutilated beyond recognition. Yes, it's sharper, but it doesn't look right, the 'u's even appear higher than the x-height.

In a screen where everything is anti-aliased, images like that stick out like 8-bit art. And just like in HD vs FullHD, most people can't even tell the difference. Be patient and we'll be over this in a couple years :)


As a programmer, I switched from Windows 7 to Mac (Lion) since everyone at my new medical-related job uses Macs. I must say that I much prefer the W7 anti-aliasing, because I care little about how close my characters are to their true glyph shapes and care much about how crisp a page of code looks on my monitor. I've gotten used to the blur now, but whenever I go back and look at my old W7 machine the crisp nature of the text really jumps out at me.


I hope I'm not naively over-generalizing a complex process, but is 80% of what he's doing just not anti-aliasing straight lines? If that is most of it, couldn't pixel-fitting be improved by just doing that?


It's slightly more complicated than that.

For example, in his "Markdown Mark" example, he's careful to make sure that the rectangular border is exactly three pixels wide on every side. Just rounding straight lines to the nearest pixel boundary wouldn't guarantee that (it's pretty easy to wind up with shapes that should look even, but are 2px on one side, and 4px on the other).

TrueType fonts face exactly the same challenges as described in the article, and they include bytecode for a specialised VM to describe how to nudge control points to pixel boundaries. The open-source FreeType rasteriser has code that's pretty good at doing this automatically, but it's very domain-specific to font-rendering; it would be no use to a general SVG renderer, for example.


Great post. I would have kudo'd it were I not so annoyed with having been tricked by the kudo button's hover-for-irrevocable-action on another article in the past.


I went back to see what you were talking about and now I'm in the same boat as you. I think it's a good opportunity for another cool effect though. I imagine you could hover (after "kudoing") and the animation would happen in reverse resulting in an "un-kudoing".


Meh, the effect is trivial and the behavior is bad. The real opportunity is to just fix it.


Isn't it a fairly meaningless metric, though? I don't give much credence to the fact that n people gave the article 'kudos' - Dustin is already a trusted reader for most of HN.


Ugh, that widget really stresses me out!


"Until we have Retina displays everywhere, we're going to have to live with antialiasing techniques"

I don't think Apple needs to come in and rescue every display maker in the world with their branding. Any sufficiently high resolution display will do.


I see your point, but "retina" is just a simple stand-in for "pixels so dense individual pixels are no longer distinguishable". It doesn't seem like the author was saying 'just apple devices' -- it's simply a convenient way to reference that class of displays.

Once more displays come out with extremely high pixel density, we'll probably have an industry wide name for it.


I agree. Why use the term Retina? That term means nothing to many consumers.

Why not use the company agnostic term HD? You can be even more technical and say displays with 100+ DPI.


Perhaps because HD got railroaded by the TV people and now it's commonly taken to mean 1920x1080 rather than a DPI measurement?

Also, your "more technical" definition is off by a long shot; people have been using 100-125 DPI screens for years, that's 100+ and it's not what would pass for a retina display unless it was 40" across and you sat at TV viewing distances away. Retina is being used in the generic sense to say "2X the DPI you're used too", or 250+ DPI for the current examples available. But even that DPI measurement is flexible because it's related to viewing distance.

So rather than stretching for generic terms that don't actually make it any clearer what you're describing, let's just stick with "retina" until we really have a better term :)


HD means high definition. The TV people have adopted that term for HDTV, so be it.

I threw a number out there, but should have known better, since someone was bound to nitpick it.

The correct term is PPI (screens) over DPI (printers). Also, PPI is pixels per inch. PPI is not relative to viewing distance. DPI is not relative to viewing distance.[1]

An iPhone 4S has 326 PPI. We already have a technically accurate method to measure pixel density, so let's use it. Use bytes to describe data, not Libraries of Congress.

Retina is a marketing term. Does phone A have better PPI than phone B? I don't know, is Retina better than Super AMOLED++? Why not just compare technical details instead of resorting to fuzzy marketing word arguments?

[1]: DPI does not change with viewing distance, but the closer the viewing distance the higher the DPI needs to be to achieve the same visual effect. A billboard can look the same as a postcard at a much lower DPI because at that distance the human eye can't discern the individual dots.
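
A small sketch of both points: PPI depends only on the panel, while perceived pixel density depends on viewing distance (the iPhone 4S figures are the well-known 960x640 panel at 3.5"; the viewing distances are illustrative):

    import math

    def ppi(width_px, height_px, diagonal_in):
        """Pixels per inch from resolution and diagonal size; a property
        of the panel, independent of viewing distance."""
        return math.hypot(width_px, height_px) / diagonal_in

    def pixels_per_degree(ppi_value, viewing_distance_in):
        """How many pixels fall within one degree of visual angle at a
        given viewing distance; this, not PPI alone, is what decides
        whether the eye can pick out individual pixels."""
        return ppi_value * viewing_distance_in * math.tan(math.radians(1))

    print(round(ppi(960, 640, 3.5)))          # ~330, close to the quoted 326 PPI
    print(round(pixels_per_degree(326, 12)))  # ~68 px/degree held at 12 inches
    print(round(pixels_per_degree(326, 120))) # ~683 px/degree for the same panel at 10 feet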


What the author misses is that with colour LCD displays you can do sub-pixel anti-aliasing with the 3 RGB sub-pixels, if you start from a vector, which is why HTML type looks better than Photoshop rendered type.


I purposefully chose to ignore subpixel antialiasing because it's complicated and you have little control over how it works. It's done at a layer further abstracted from the source file, so you can't accurately pixel-fit anything to a subpixel. It's also just another hack, using a smaller unit than a half-pixel, and, like most hacks, it has some serious negative side-effects.


He's not suggesting that you subpixel antialias manually. He's pointing out a benefit of leaving your images as vectors: the browser knows the subpixel order of the display, and can make the image prettier than you can without that information.


It can make it effectively higher-resolution than you can, but that doesn't mean it can make it prettier. It could still end up looking worse (from the color bleeding effect, for instance).
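
For readers unfamiliar with the mechanism being debated here, a minimal sketch of RGB-stripe subpixel rendering, assuming numpy and a grayscale coverage image rendered at 3x horizontal resolution (real renderers also filter across neighbouring subpixels to tame the color fringing mentioned above, which is omitted here):

    import numpy as np

    def subpixel_downsample(coverage_3x):
        """Map glyph coverage rendered at 3x horizontal resolution onto the
        R, G and B subpixels of a left-to-right RGB stripe panel: each group
        of three horizontal samples drives one pixel's three channels,
        roughly tripling effective horizontal resolution."""
        h, w3 = coverage_3x.shape
        w = w3 // 3
        rgb = coverage_3x[:, :w * 3].reshape(h, w, 3)
        return (255.0 * (1.0 - rgb)).astype(np.uint8)  # dark glyph on white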


You can't even reliably fit raster images to screen pixels on the web. Images can easily end up at non-integer screen pixel locations which requires resampling (and ruining your careful alignment) to render.

Logo design should be done in-context, not in a dark room at 400% zoom.
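
A one-dimensional illustration of that resampling problem (a sketch, assuming numpy and simple linear resampling): a crisp 1px feature that lands on a half-pixel boundary smears across two pixels.

    import numpy as np

    def shift_linear_1d(src, offset):
        """Place a 1-D row of pixel values at a fractional offset using
        linear resampling, as a browser must when an image ends up at a
        non-integer screen position."""
        out = np.zeros(len(src) + 1)
        frac = offset % 1.0
        out[:-1] += src * (1.0 - frac)
        out[1:] += src * frac
        return out

    line = np.array([0, 0, 1, 0, 0], dtype=float)   # a carefully fitted 1px line
    print(shift_linear_1d(line, 0.0))   # [0, 0, 1, 0, 0, 0]   -- stays crisp
    print(shift_linear_1d(line, 0.5))   # [0, 0, 0.5, 0.5, 0, 0] -- alignment ruined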


I've always found that subpixel antialiasing looks horrible.


Which has nothing to do with logos, which are raster images 99% of the time and don't use subpixel rendering for obvious reasons.


A small tip here: instead of using 2D vectors like Illustrator, use 3D vectors like Blender for your logos and icons. The smooth shading of the 3D objects (versus the flat shading 2D styles usually go for) makes these manual hinting problems meaningless, so 3D vectors are much easier to scale automatically than 2D vectors, without worrying about quality loss.

Many choose 2D logos and icons instead of 3D simply as a matter of style. But this little advantage is certainly something to at least consider.



Fireworks has had a nice little feature for a while:

Apple+k and it finds the nearest pixel.

It doesn't always do a good job, but sometimes it's just what the doctor ordered.


Vector icons tend to look horrible at small sizes. A post from a month ago:

http://www.pushing-pixels.org/2011/11/04/about-those-vector-...

http://news.ycombinator.com/item?id=3720363


Also note that SVG logos are rendered far better than a scaled-down pixel version. I tried a lot of different reduction methods but the SVG version was always "sharper". Just include it in your image tag: <img src="logo.svg" alt="my logo">

I think you can use a background image as fallback.


I find it is easiest to use javascript to set fallbacks.


Does anyone feel this long-standing problem is related to autofocus? Pretty sure it wouldn't be very hard to automatically realign subparts to fit, based on Fourier analysis.


I really wish Fireworks or some image editor was designed for asset creation first and took care of these kinds of issues. Even with Fireworks if you zoom in on the palette and draw a box you can end up creating it on the half pixel and get anti aliasing issues. I want a tool that restricts you to whole pixels for everything, including when it does automatic resizing.


Can you clarify your issue with Fireworks? By "palette" do you mean "canvas"? From my experience, there are no sub-pixels initially on the edges of boxes drawn in Fireworks. Sub-pixels will occur if you use the Scale tool (Q) to resize the box, however you may subsequently snap the box to full pixels using Cmd+K. Also, if you resize the box using the X/Y dimension inputs in the Properties inspector, the edges will remain at full pixels.


Here you go, I created a box, zoomed in, and resized it using my cursor. http://i.imgur.com/Nx0Bx.png The pixel dimensions show whole numbers. Punching in numbers into the properties inspector isn't always an option, for instance when you're trying to resize a box with rounded corners while preserving the corner radius.


I see, so it's after you've created the box. I agree that there should at least be a mode where everything snaps to a full pixel. Anyhow, the Cmd+K trick would be useful in your case. Also, corner radii should be preserved if you resize with the properties inspector.


This seems like it could be automated algorithmically — it looks like an optimization problem where you want to minimize the number of partially-filled (e.g. "gray") pixels, especially along vertical and horizontal lines.

For rectangular shapes, even a simple repositioning could do a lot.
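
A toy sketch of that objective function, assuming numpy and a rasterizer you can re-run at different sub-pixel nudges (the helper names are hypothetical):

    import numpy as np

    def fractional_pixel_count(img):
        """Count pixels that are neither (nearly) empty nor (nearly) fully
        covered -- the 'gray' pixels the optimization wants to minimize."""
        g = img.astype(float) / 255.0
        return int(np.sum((g > 0.05) & (g < 0.95)))

    def best_offset(render, offsets):
        """Pick the sub-pixel (dx, dy) nudge whose render has the fewest
        gray pixels. `render` is any callable mapping (dx, dy) to a
        grayscale array, e.g. a rasterizer re-run with the shape shifted."""
        return min(offsets, key=lambda o: fractional_pixel_count(render(*o)))

    # e.g. offsets = [(dx / 4, dy / 4) for dx in range(4) for dy in range(4)]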


That's automated hinting; it's available in TrueType, FreeType and professional type design software.


This article reminded me of an article by Juan Vuletich about the application of signal processing to anti-aliasing:

http://www.jvuletich.org/Morphic3/Morphic3-201006.html

I'd say there's still work to be done in this area.


While this has the attention of some of the best designers, I'd appreciate a real quick look at our logo for criticism, it's here: http://www.verelo.com/images/logo.png (is it using anti-aliasing?)


I'm pretty sure this is what drives me nuts about the kickstarter logo. It could be so much crisper:

http://www.kickstarter.com/images/kickstarter-error.gif


Facebook's too? Especially juxtaposed with all of the crisp white text?


Very interesting. Reminds me of pixelsnap, an Inkscape plugin my brother wrote to do this very thing: http://code.google.com/p/pixelsnap/


I hate to break it to the guy, but I can't remember the last time my browser was at 100% zoom, so his perfect pixels will never look perfect on my screen anyway...


That doesn't look like something you can't automate for most of those cases.



