> It was Knuth who noticed a book that had been electronically typeset (Patrick Winston's AI book) and realized that electronic typesetting machines actually had enough resolution to print like “real books”
Searching for what the actual resolutions were, I saw in your older comment (1) that you mention the "Alphatype CRS" and "more than 5000 DPI."
Since you mention 1977, and there were already 5000+ DPI machines then, I'd really like to read a little more about that: it's impressively more than what the "laser printers" available to mortals could manage. How did these high-resolution machines work? How did they achieve the resolution physically, and how did they manage to process so many bits at a time when the RAM of microcomputers was counted in kilobytes?
The Alphatype CRS ("Cathode Ray Setter") was nominally 5333dpi, but it's a bit of a fudge. As the person who wrote the DVI interface for it, and personally ran the entire Art of Computer Programming, Vol. 2, 2nd Edition through it, let me explain.
You are very correct that in the early 1980's, even 64k of RAM was quite expensive, and enough memory to handle a complete frame buffer at even 1000dpi would be prohibitive. The Alphatype dealt with this by representing fonts in an outline format handled directly by special hardware. In particular, it had an S100 backplane (typical for microcomputers in the day) into which was plugged a CPU card (an 8088, I think), a RAM card (64k or less), and four special-purpose cards, each of which knew how to take a character outline, trace through it, and generate a single vertical slice of bits of the character's bitmap.
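To make the "vertical slice" idea concrete, here's a minimal sketch in Python (illustrative only: the real decoder cards did this in dedicated hardware, and Alphatype's actual outline format used curves, not the straight-edged polygon assumed here) of scan-converting one column of a character bitmap from a closed outline:

    # Illustrative only: a character outline reduced to a closed polygon,
    # scan-converted one vertical column at a time with an even-odd fill rule.
    def vertical_slice(outline, x, height):
        """outline: list of (x, y) vertices of a closed polygon.
           Returns 'height' bits for column x: 1 = expose, 0 = blank."""
        crossings = []
        for i in range(len(outline)):
            (x0, y0), (x1, y1) = outline[i], outline[(i + 1) % len(outline)]
            if (x0 <= x < x1) or (x1 <= x < x0):        # edge spans this column
                t = (x - x0) / (x1 - x0)
                crossings.append(y0 + t * (y1 - y0))    # y where the edge crosses
        crossings.sort()
        bits = [0] * height
        for lo, hi in zip(crossings[0::2], crossings[1::2]):
            for y in range(int(lo), int(hi) + 1):       # fill between crossing pairs
                if 0 <= y < height:
                    bits[y] = 1
        return bits

Each decoder card did essentially this, one column per stepper position, for whatever character outline it had been handed.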
A bit more about the physical machine, to understand how things fit together: It was about the size and shape of a large clothes washer. Inside, on the bottom, was a CRT sitting on its back side, facing up. There was a mirror and lens mounted above it, on a gimbal system that could move it left/right and up/down via stepper motors (kind of like modern Coke machines that pick a bottle from the row/column you select and bring it to the dispenser area). And, at the back, there was a slot in which you'd place a big sheet of photo paper (maybe 3ft by 3ft) that would hang vertically.
OK, we're all set to go. With the paper in, the lens gets moved so that it's focused on the very top left of the paper, and the horizontal stepper motor, under control of the CPU, starts moving it rightwards. Simultaneously, the CPU tells the first decoder card to DMA the outline info for the first character on the page, and to get the first vertical slice ready. When the stepper motor says it's gotten to the right spot, the CPU tells the decoder card to send its vertical slice to the CRT, which flashes it, and thus exposes the photo paper. In the meantime, the CPU has told the second card to get ready with the second vertical slice, so that there can be a bit of double-buffering, with one slice ready to flash while the next one is being computed. When the continuously-moving horizontal stepper gives the word, the second slice is flashed, and so on. (Why two more outline cards? Well, there might be a kern between characters that slightly overlaps them (think "VA"), and the whole thing is so slow we don't want to need a second pass, so actually two cards might flash at once, one with the last slice or two of the "V" and the other with the first slice of the "A".)
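That overlap of "one slice being flashed while the next is computed" is classic double buffering. Here's a toy, self-contained Python simulation of the scheduling, with threads standing in for the decoder cards (everything here is a hypothetical stand-in, just to show the overlap):

    import threading

    # Two "cards": while slice i is being flashed, the other card computes slice i+1.
    def simulate_line(slice_jobs, compute):
        if not slice_jobs:
            return []
        buffers = [None, None]                    # one result buffer per card
        def run(card, job):
            buffers[card] = compute(job)
        exposed = []
        worker = threading.Thread(target=run, args=(0, slice_jobs[0]))
        worker.start()                            # prime the pipeline
        for i in range(len(slice_jobs)):
            worker.join()                         # slice i is ready to flash
            if i + 1 < len(slice_jobs):
                worker = threading.Thread(target=run,
                                          args=((i + 1) % 2, slice_jobs[i + 1]))
                worker.start()                    # other card starts on slice i+1
            exposed.append(buffers[i % 2])        # "flash" slice i in the meantime
        return exposed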
So, once a line is completed, the vertical stepper motor moves the lens down the page to the next baseline, and then the second line starts, this time right-to-left, to double throughput. But therein lies the first fallacy of the 5333dpi resolution: There is enough hysteresis in the worm gear drive that you don't really know where you are to 1/5333 of an inch. The system relies on the fact that nobody notices that alternate lines are slightly misaligned horizontally (which also makes it all the more important that you don't have to make a second pass to handle overlapping kerned characters; there it might be noticeable).
Looking closer at the CRT and lens, basically the height of the CRT (~1200 pixels, IIRC) gets reduced onto the photo paper to a maximum font size of ~18pt (IIRC), or 1/4in, giving a nominal resolution of ~5000dpi on the paper. But this design means you can't typeset a character taller than a certain size without breaking it into vertical pieces, and setting them on separate baseline passes. Because of the hysteresis mentioned above, we had to make sure all split-up characters were only exposed on left-to-right passes, thus slowing things down. Even then, though, you could see that the pieces still didn't quite line up, and also suffered from some effects of the lack of sharpness of the entire optical system. You can actually see this in the published 2nd edition of Vol 2.
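For the record, the arithmetic behind the nominal figure, using the approximate numbers above (the "IIRC" caveats apply to this reconstruction too):

    crt_pixels  = 1200                 # ~vertical CRT pixels imaged per pass (IIRC)
    max_char_in = 18 / 72.0            # ~18pt maximum character height = 0.25 inch
    print(crt_pixels / max_char_in)    # -> 4800.0, i.e. the "~5000dpi" ballpark
                                       # (nominal 5333dpi would imply ~1333 pixels)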
Finishing up, once the sheet was done (six pages fit for Knuth's books, three across and two down), the system would pause, and the operator would remove the photo paper, start it through the chemical developer, load another sheet, and push the button to continue the typesetting.
It's worth noting that the firmware that ran on the 8088 as supplied by Alphatype was not up to the job of handling dynamically downloaded Metafont characters, so Knuth re-wrote it from scratch. We're talking 7 simultaneous levels of interrupt (4 outline cards, 2 stepper motors that you had to accelerate properly and then keep going at a constant rate, and the RS-232 input coming from the DEC-20 mainframe with its own protocol). In assembly code. With the only debugging being from a 4x4 keyboard ("0-9 A-F") and a 16 character display. Fun times!
Now, if anybody asks, I can describe the Autologic APS-5 that we replaced it with for the next volume. Teaser: Lower nominal resolution, but much nicer final images. No microcode required; we sent it actual bitmaps, slowly but surely, and we were only able to do that because they accidentally sent a manual that specified the secret run-length encoding scheme.
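For a taste of what a run-length-encoded bitmap protocol buys you, here's a generic RLE sketch (not Autologic's actual secret scheme, just the general idea):

    # Generic run-length coder for one row of a 1-bit bitmap: typeset rows are
    # mostly long white runs with short black runs, so RLE makes shipping whole
    # character bitmaps over a slow serial link feasible.
    def rle_row(bits):
        runs = []                               # list of (pixel_value, run_length)
        for b in bits:
            if runs and runs[-1][0] == b:
                runs[-1] = (b, runs[-1][1] + 1)
            else:
                runs.append((b, 1))
        return runs

    # A mostly-blank high-resolution row collapses to a handful of pairs:
    # rle_row([0]*2000 + [1]*40 + [0]*3293)  ->  [(0, 2000), (1, 40), (0, 3293)]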
Thank you many times! The minute you posted this I wanted to write that I had discovered your 2007 interview (1), where some details (though this answer is more detailed, many thanks!) gave me some overall idea of how it was done.
And what I was missing there were exactly these "alignment" problems: I couldn't imagine how pictures and "big letters" could work at 5000 dpi when only two letters were formed at once and something had to move mechanically all the time.
So yes, please more details, also about the APS-5 and the "actual bitmaps"! And please try to also write (as much as you can estimate) the years when you used these technologies. In which years was the Alphatype CRS used? And the APS-5?
And just to show you why I care about the years: in the 2007 interview you said the controller CPU was an 8008, which is an early 8-bit CPU (the predecessor of the 8080 used in CP/M machines), and now you say 8088, which is a 16-bit CPU with an 8-bit bus (the one in the first IBM PC). If you aren't sure about the CPU, but we knew when the device started to sell, maybe we could rule out the later one.
Yeah, the naming conventions among the 8008, 8080, 8086, 8088, and 80186 are enough to make you nuts.
Anyway, let's check the sources! Surfing over to https://www.saildart.org/[ALF,DEK]/ and clicking on ALPHA.LST on the left, shows code that looks like it's for an 8080 to me, but I'm rusty on this. The file itself is dated July 1980, but it's just a listing and not the sources themselves (not sure why).
Knuth starts the "Preface to the Third Edition" of Vol 2 with: "When the second edition of this book was completed in 1980, it represented the first major test case for prototype systems of electronic publishing called TeX and METAFONT. I am now pleased to celebrate the full development of those systems by returning to the book that inspired and shaped them." Here he's talking about our very first Alphatype production output, confirming it was 1980.
Note that the CRS wasn't an especially new model when we got ours, so it wouldn't be too surprising for the CPU to not be the latest and greatest as of 1980, especially as I got the feeling they were pretty price-sensitive designing it.
By the way, the mention of "fonts came on floppy disks" elsewhere was generally true back then (and selling font floppies was how the typesetter manufacturers made some of their income), but we didn't use the floppy disk drives for Knuth's CM fonts at all. All required METAFONT-generated characters were sent down along with each print job. And, in fact, there wasn't enough RAM to hold all the characters for a typical job (remember, each different point size had different character shapes, like in the old lead type days!), so the DVI software on the mainframe had to know to mix in reloads of characters that had been dropped to make room for others, as it was creating output for each page. It's essentially the off-line paging problem: If you know the complete future of page accesses, how can you make an optimal choice of which pages to drop when necessary? That's my one paper with Knuth: "Optimal prepaging and font caching", TOPLAS Jan 85 (ugly scan of the 1982 tech report STAN-CS-82-901 at https://apps.dtic.mil/dtic/tr/fulltext/u2/a119439.pdf when it was still called "Optimal font caching"). Actually, the last full paragraph on page 15 of the latter says that the plan to use the Alphatype CRS started two years before production happened, meaning that the CRS was available commercially by 1978.
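For readers who haven't met the off-line paging problem: in the simplest setting (equal-size items, no prepaging), the optimal rule is Belady's "evict whatever is needed furthest in the future." The sketch below is only that textbook case; the actual paper handles the harder problem of prepaging and differently-sized characters, so treat this as the flavor of the idea, not the paper's algorithm:

    # Textbook Belady/MIN eviction, assuming equal-size items and full knowledge
    # of the future access sequence (the simple cousin of the paper's problem).
    def belady_evictions(accesses, capacity):
        cache, evictions = set(), []
        for i, item in enumerate(accesses):
            if item in cache:
                continue
            if len(cache) >= capacity:
                def next_use(c):                        # when is c needed again?
                    for j in range(i + 1, len(accesses)):
                        if accesses[j] == c:
                            return j
                    return float('inf')                 # never again: perfect victim
                victim = max(cache, key=next_use)
                cache.remove(victim)
                evictions.append(victim)
            cache.add(item)
        return evictions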
> Surfing over to https://www.saildart.org/[ALF,DEK]/ and clicking on ALPHA.LST on the left, shows code that looks like it's for an 8080 to me
Wow, thanks for that! Yes, it is surely 8080 code (at least, the mnemonics are 8080, and definitely not 8088, which are different; the 8008 initially used other mnemonics than those in that LST, and the 8080 mnemonics were later applied to the 8008 too, but the 8008 needed more supporting hardware, so it should be an 8080 in the CRS).
Also thanks for the STAN-CS-82-901 report. Now the story of programming the CRS is quite clear. And even as a poor scan, it can be compared to doi.org/10.1145/2363.2367
Did I understand correctly that, for the CRS, what was uploaded was never bitmap fonts but always the "curves" of the letters? I suppose 100 bitmap images at a resolution of 1024x768 would have been too much for the whole setup, even just for the cache?
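Quick arithmetic on my own guess, taking 100 characters at 1024x768, one bit per pixel, at face value:

    bits_per_char  = 1024 * 768           # 786,432 bits per character bitmap
    bytes_per_char = bits_per_char // 8   # 98,304 bytes, ~96 KB each
    print(100 * bytes_per_char)           # ~9.8 MB for 100 characters, versus a
                                          # 64 KB (or smaller) S100 RAM card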
And... I'm not surprised that Knuth managed to develop that firmware, now that I've discovered this story ("The Summer Of 1960 (Time Spent with don knuth)"):
I wasn't around at the time and the only thing I know is from reading Knuth's books/papers/interviews. :-) But I imagine that (1) the typesetting machine wouldn't attach to your microcomputer like a peripheral, and probably came with its own computer or equivalent, (2) there wouldn't be too many bits to process: you'd first load your preferred high-resolution fonts onto it with floppy disks or whatever (the shapes for each letter), then the actual output sent to the device would just be the letters and their (relative) positions (not each bit of the page “image”).
Wow. That SE post on Computer Modern is really interesting. There are so many details and nuances regarding typography that it kinda makes me dizzy. It truly is an art! The time and thoroughness required to truly master it is humbling.
1) https://news.ycombinator.com/item?id=17917367
P.S. Digging deeper: I've just found the reason why text produced with TeX looks "too thin" today: https://tex.stackexchange.com/questions/48369/are-the-origin... and, linked there: https://tug.org/TUGboat/tb37-3/tb117ruckert.pdf