I really enjoyed this. I had a project once where I had to pre-render a font to be displayed at various (low) resolutions, and the Microsoft foundry fonts stood out as working exceptionally well at very small sizes. Almost nothing else was even acceptable at e.g. 6 or 7pt. I knew it was because of superior hinting, but I didn't realize how hinting worked, exactly, until now.
It's an interesting read, but I just can't help thinking that the tragedy lies with Microsoft and not with low resolution. It's 2011, and Apple has been doing font auto-hinting for ages, and so has Adobe. Microsoft, though, still prefers to live in tragedy and defiantly clings to the TrueType format and its horrendously labor-intensive manual hinting. I really don't know what's wrong with them, but for a fraction of Vista's $1bn budget they could've written a dozen TTF auto-hinters.
And in 2011, "live in tragedy" Microsoft can render fonts sharply, while Apple can't.
I'm a Windows guy temporarily using a Mac to write an iPad app, and the main thing that bugs me about OS X is how fuzzy the fonts look. In Windows, my fonts look ultra sharp on-screen; on Mac, fonts look fuzzy. Apparently it's because of this: http://www.codinghorror.com/blog/2007/06/font-rendering-resp...
But I say to Apple: unless one's monitor has the dot pitch of the iPhone "retina display", your font rendering looks really fuzzy. <rant>What good is "preserving the typeface design" if it makes your eyes water?</rant>
I think it's mainly a matter of what you're used to. Each time my main computer has moved to a different font rendering tech (Aliased → ClearType → Apple) I've initially been spitting mad like you seem to be, but have ended up preferring it.
It's also a matter of what monitor you have - (at least on Linux, where you can tweak it) configurations that look great on some monitors can look really bad on others.
As a Mac guy I have the opposite reaction. What you describe as "sharp" I see as "wrong, too thin, and pixely". It's like frou_dh said: people prefer what they're used to.
Apparently, this is subjective. I agree with you that OS X fonts look awful, but it's clearly a deliberate design choice, and knowing Apple, I suspect there is at least one person who thought it over and genuinely thinks their font rendering is the better choice. I can't fathom why anyone would prefer OS X fonts, but apparently people do.
The reason is rather simple - this makes all fonts on OS X look great-to-passable, while unhinted fonts on Windows look really bad and are simply unusable.
The (IMO, terrible) font rendering on Windows is actually the second biggest reason I strongly prefer OS X (the biggest reason is not relevant to this discussion.)
Can you explain why you prefer the font rendering in OS X, or do you think it's just a matter of what you've grown used to? I think fonts in OS X look notably fuzzy, and the strokes all tend to look too bold.
Anecdotally, there's definitely some truth to it being a matter of what you've grown used to. I moved from an old Windows desktop to a Mac OS X laptop. I found the fonts on Mac OS X to be exactly as you described -- fuzzy, bold, fairly annoying. After a month of heavy usage, I stopped noticing it. When I visited my old Windows computer a year later, I realized I perceived all the fonts as incredibly jaggy, thin, and hard to read. But after a while (about a week) I stopped noticing it as much.
I have a similar experience: while I can get used to the blurriness, and after a while I don't notice it as much, going back is always a breath of fresh air. I imagine it's like putting your glasses on (if I had to wear any): everything just aligns right. In fairness this happens on Windows as well if I have ClearType turned on, but not quite as much. Mind you, the MBP has a higher dpi than my desktop monitor, and the blurriness is still noticeable. On a different laptop running Linux, a slightly higher dpi looks good; same with my Android phone.
No matter how long I use font anti-aliasing, without some very high dpi my eyes just cannot adjust, cannot totally ignore the blurriness. Seems like I'm in the minority, though.
On the flip side I'm an OS X (and Linux) user and I tend to think fonts on Windows look spindly and have erratic kerning.
It really is a matter of taste and what you are used to. Also because antialiasing is basically a trick of visual perception, it wouldn't surprise me if there really are differences in people's vision systems such that some people are more likely to see heavily antialiased text as blurry, similarly to the way some people are more likely to object to low framerate video or get a headache when viewing stereoscopic 3D.
I can't count the number of times people have tried to correct errors in my Visio diagrams because they see bad kerning as if I accidentally included an extra space in the middle of the word.
I recently found that one can go into Options in Visio and change font rendering so it uses gray-scale. Then the kerning is just fine.
> I suspect there is at least one person who thought it over and genuinely thinks their font rendering is the better choice.
Basically, OS X font rendering tries to stay as close as it can to print, whereas Windows font rendering gives primacy to on-screen clarity. This leads to a "fuzzier" OS X look, but a much better match to the actual typeface, where Windows's renderer "breaks" fonts to make them "clearer".
This, of course, comes in no small part from Jobs's background in loving the written/printed word. I'm sure it's also helped by the Mac's background/history in design and print.
Fascinating piece of history. I wonder if this will all become relevant again one day for some new kind of display technology that trades pixel density and color depth for some other desirable property (such as e-ink, etc.).
What I find interesting is that the subtitles on DVDs, like "Deadwood" circa 2007, use a blocky font that looks like it was taken from a 1983 PC CGA card.
You are probably watching video files with subtitles embedded as text (or as separate text files, .srt etc.). This allows the video player to render them with any font and at a higher resolution.
IIRC, DVD subtitles are actually a pre-rendered overlay stream. YUV (with UV subsampling) works great for pictures, not so much for text (and the overlay only ever exists at the original DVD resolution, and hence fails at upscaling). VLC does indeed grab subtitles from the text and renders them crisply at the screen resolution.
It was my understanding that the difference between closed captions and subtitles was a semantic one: closed captions are for the hearing impaired and subtitles are for people who can hear but don't know the language.
So closed captions have stuff like "door opens, a radio is playing an old tune in the background. Sheila: Hello" but with subtitles you'd just see "Hello" since if you can hear then you can infer everything else about the scene.
There are sort of two different definitions at work here: the semantic definition, which is as you describe, and then there's the technical definition, which has to do with the implementation.
"Subtitles" on a DVD are basically a 4-colour (3 + transparent) bitmap which is overlaid on the video stream. But DVDs also support the closed-captioning that was originally used for TV broadcasts, which is text embedded in the video signal that's decoded and displayed by the television set itself.
Media playing software can extract subtitles from the latter system and display them in whatever font you like - but you can't do the same with the DVD-overlay subtitles, because they're just images.
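For the curious, here's a minimal sketch of what "just images" means in practice (the palette values and function names are invented for illustration; this is not the real DVD subpicture decoder): the player receives palette indices plus a tiny palette and simply composites them over the frame, so there's no text left to re-typeset in a nicer font.

    # Toy illustration, not the actual DVD subpicture format: the subtitle
    # arrives as palette indices, and all the player can do is blend it in.
    import numpy as np

    # Hypothetical 4-entry palette: (R, G, B, alpha); index 0 is transparent.
    PALETTE = np.array([
        [0,   0,   0,   0],    # 0: transparent
        [0,   0,   0,   255],  # 1: outline
        [255, 255, 255, 255],  # 2: fill
        [128, 128, 128, 255],  # 3: edge
    ], dtype=np.uint8)

    def overlay_subpicture(frame_rgb, indices, x, y):
        """Composite a palette-indexed subtitle bitmap onto an RGB frame."""
        h, w = indices.shape
        rgba = PALETTE[indices]                          # indices -> colors
        alpha = rgba[..., 3:4].astype(np.float32) / 255.0
        region = frame_rgb[y:y + h, x:x + w].astype(np.float32)
        blended = alpha * rgba[..., :3] + (1.0 - alpha) * region
        frame_rgb[y:y + h, x:x + w] = blended.astype(np.uint8)

    # Usage: a DVD-resolution frame and a fake 300x40 subtitle bitmap.
    frame = np.zeros((480, 720, 3), dtype=np.uint8)
    subtitle = np.random.randint(0, 4, size=(40, 300), dtype=np.uint8)
    overlay_subpicture(frame, subtitle, x=210, y=420)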
I doubt the need for font hinting will go away any time soon. Most monitors are still roughly 96 dpi (the same figure mentioned in this article). Apple's retina display might be in the territory where hinting doesn't matter as much. However, reaching 300 dpi for a largish computer monitor is really outside the realm of possibility today for anything but a completely custom design. For a 30" 16:10 screen you'd need a resolution of roughly 7575 x 4750 (see the quick check below). You'd be seriously hard pressed to come up with a graphics card / cabling solution to handle that resolution.
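A quick sanity check of those numbers, as a rough sketch (the exact aspect ratio and rounding are mine, and bezels are ignored):

    # Rough check of the resolution needed for a 30-inch 16:10 panel at 300 dpi.
    import math

    def panel_resolution(diagonal_in, aspect_w, aspect_h, dpi):
        """Return (horizontal, vertical) pixel counts for a given panel."""
        diag_units = math.hypot(aspect_w, aspect_h)
        width_in = diagonal_in * aspect_w / diag_units
        height_in = diagonal_in * aspect_h / diag_units
        return round(width_in * dpi), round(height_in * dpi)

    w, h = panel_resolution(30, 16, 10, 300)
    print(w, h)                       # ~7631 x 4770 -- same ballpark as above
    print(w * h / 1e6, "megapixels")  # ~36 MP, vs ~4 MP for today's 2560x1600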
DisplayPort currently supports the data rates necessary for 300dpi displays up to about 16-17", and 200dpi 24" displays. That's already good enough to toss out almost all font hinting, without any advances needed in graphics cards or cables.
AMD's high-end GPUs can handle 6 monitors each running at 2560x1600, and can render 3d content at that resolution with useful performance.
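A rough check of the DisplayPort claim above, as a sketch under my own assumptions (DisplayPort 1.2's ~17.28 Gbit/s of effective payload over four HBR2 lanes, 24 bpp at 60 Hz, and no allowance for blanking or protocol overhead):

    # Ballpark data-rate check: does an uncompressed high-dpi panel fit into
    # DisplayPort 1.2's effective bandwidth? Overhead is ignored, so treat
    # these as rough figures only.
    import math

    DP_1_2_GBITS = 17.28  # assumed effective payload, 4 lanes of HBR2

    def required_gbits(diagonal_in, aspect_w, aspect_h, dpi, bpp=24, hz=60):
        """Uncompressed video data rate in Gbit/s for a given panel."""
        diag_units = math.hypot(aspect_w, aspect_h)
        w_px = diagonal_in * aspect_w / diag_units * dpi
        h_px = diagonal_in * aspect_h / diag_units * dpi
        return w_px * h_px * bpp * hz / 1e9

    print(required_gbits(17, 16, 10, 300))  # ~16.8 Gbit/s -> just fits
    print(required_gbits(24, 16, 10, 200))  # ~14.9 Gbit/s -> fits too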
Where are these monitors that mean font hinting can be tossed aside? You'd think 300dpi screens were absolutely everywhere, and that the only people still using 100dpi monstrosities were dinosaurs, sticks-in-the-mud, and your technophobic grandmother - all easily ignored! But that appears not to be the case. The majority of screens (probably pretty much all of them, going by surface area...) seem to still be in the 100-150dpi range. Worse, the higher-DPI screens are on things like laptops, phones and tablets - all devices that have a limit on how far away from your eyes you can hold them! End result: the grid is still very much in evidence.
People are so keen to do away with the very idea of the pixel grid, and just pretend it doesn't exist, but the technology to display the result is just not there yet! Are we going to have to put up with even more years of blurry fonts and coloured fringes, just so that people can pretend the future is here, when it blatantly is not?
> DisplayPort currently supports the data rates necessary for 300dpi displays
The problem isn't (and hasn't been for quite a long time) the contents of the computer box; it's OS support for high-DPI screens and the price of the screen hardware. IBM used to sell 204 dpi computer screens: 22.2" with a 3840×2400 resolution. You had to feed it with 4 DVI channels, but it worked, for 2D anyway (the first generation shipped with a quad-GPU Matrox G200 MMS).
Indeed. Also, 7575 x 4750 x 24bpp = ~103MB of RGB framebuffer.
Additionally:
DPI of my 2011 laptop: 115dpi.
DPI of my 2011 monitor: 100dpi.
DPI of my 1997 monitor: 90dpi (100dpi possible at a rather ugly 60Hz)
DPI of my 1993 monitor: 91dpi
With this progress (or lack thereof) in mind, I think the pixel will be around for a while yet. Indeed, thanks to LCDs, outside a few small-scale high-DPI niches pixels are more obviously visible than ever before! So why people are so sure that the pixel grid can just be ignored, I'll never know...
I don't think the spatial resolution of the LCD panel matters quite as much as the angular resolution relative to your eye. If your monitor is further away, you can get away with lower resolutions without being able to see individual pixels.
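To put a rough number on that, here's a small-angle sketch (the viewing distances are illustrative guesses, not measurements):

    # Pixels per degree of visual angle: the quantity that actually determines
    # whether you can see the grid. Small-angle approximation.
    import math

    def pixels_per_degree(dpi, viewing_distance_in):
        """Approximate pixels subtended by one degree of visual angle."""
        return dpi * viewing_distance_in * math.tan(math.radians(1.0))

    print(pixels_per_degree(100, 24))  # ~42 ppd: typical desktop monitor
    print(pixels_per_degree(130, 18))  # ~41 ppd: laptop held closer
    print(pixels_per_degree(326, 12))  # ~68 ppd: iPhone 4 "retina" in hand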
I disagree that "correct math looks wrong". You simply used the wrong correct math. Lots of other "correct maths" could produce even worse output. And some could produce better.