• It is indeed true that the first edition of Volumes 1–3 of The Art of Computer Programming and the second edition of Volume 1 (1968, 1969, 1973, 1973) had been done with hot-metal typesetting (pieces of lead, laid out to form the page) with a human doing the typesetting/compositing (with a Monotype machine). And that for the second edition of Volume 2 in 1977, the publisher Addison-Wesley was switching to a cheaper alternative. But the cheaper alternative was not electronic; it was phototypesetting (the letterforms made with light and lenses instead of metal). It was Knuth who noticed a book that had been electronically typeset (Patrick Winston's AI book), realized that electronic typesetting machines actually had enough resolution to print like “real books” (unlike typewriters or printers of the time), and decided to solve his problem for himself (as he felt that he could make shapes with 0s and 1s).
• Knuth spent the summer of 1977 in China, thinking he'd specified his program in enough detail that his two students would be able to complete it by the time he came back. He came back and saw they had only implemented a small subset, and decided to spend his sabbatical year writing the program himself. He mentions that as he started writing it he realized the students' proto-TeX was an impressive effort, because there was a lot missing from his specifications. Knuth's article "The Errors of TeX" (reprinted in his collection Literate Programming) goes into excellent detail on the development process.
Also: a suggestion to try plain TeX (instead of LaTeX), if you haven't tried it. The book A Beginner's Book of TeX by Seroul and Levy is especially good. It might surprise you.
Searching for what the actual resolutions were,
I saw in your older comment (1) that you mention "Alphatype CRS" and "more than 5000 DPI."
If you mention 1977, and there were already 5000+ DPI machines then, I'd really like to read a little more about that: it's impressively more than what the "laser printers" used by mortals were able to achieve. How did these high-resolution machines work? How did they achieve the resolution physically, and how did they manage to process so many bits at a time when the RAM of microcomputers was counted in kilobytes?
P.S. Digging deeper: I've just found the reason we see today the text produced with TeX as "too thin": https://tex.stackexchange.com/questions/48369/are-the-origin... and linked there: https://tug.org/TUGboat/tb37-3/tb117ruckert.pdf
You are very correct that in the early 1980's, even 64k of RAM was quite expensive, and enough memory to handle a complete frame buffer at even 1000dpi would be prohibitive. The Alphatype dealt with this by representing fonts in an outline format handled directly by special hardware. In particular, it had an S100 backplane (typical for microcomputers in the day) into which was plugged a CPU card (an 8088, I think), a RAM card (64k or less), and four special-purpose cards, each of which knew how to take a character outline, trace through it, and generate a single vertical slice of bits of the character's bitmap.
A bit more about the physical machine, to understand how things fit together: It was about the size and shape of a large clothes washer. Inside, on the bottom, was a CRT sitting on its back side, facing up. There was a mirror and lens mounted above it, on a gimbal system that could move it left/right and up/down via stepper motors (kind of like modern Coke machines that pick a bottle from the row/column you select and bring it to the dispenser area). And, at the back, there was a slot in which you'd place a big sheet of photo paper (maybe 3ft by 3ft) that would hang vertically.
OK, we're all set to go. With the paper in, the lens gets moved so that it's focused on the very top left of the paper, and the horizontal stepper motor, under control of the CPU, starts moving it rightwards. Simultaneously, the CPU tells the first decoder card to DMA the outline info for the first character on the page, and to get the first vertical slice ready. When the stepper motor says it's gotten to the right spot, the CPU tells the decoder card to send its vertical slice to the CRT, which flashes it, and thus exposes the photo paper. In the meantime, the CPU has told the second card to get ready with the second vertical slice, so that there can be a bit of double-buffering, with one slice ready to flash while the next one is being computed. When the continuously-moving horizontal stepper gives the word, the second slice is flashed, and so on. (Why two more outline cards? Well, there might be a kern between characters that slightly overlaps them (think "VA"), and the whole thing is so slow we don't want to need a second pass, so actually two cards might flash at once, one with the last slice or two of the "V" and the other with the first slice of the "A".)
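The pipelining described above — compute slice N+1 while slice N is being flashed — is classic double buffering. Here's a toy Python sketch of the event ordering only; the names (`typeset_line`, "compute", "flash") are my own invention, not Alphatype firmware terminology:

```python
# Toy simulation of the double-buffered slice pipeline: while one
# decoder card's slice is flashed to the CRT, the next card is
# already computing the following slice. Purely illustrative.
from collections import deque

def typeset_line(slices, n_cards=4):
    """Return (event, card, slice) tuples in the order the CPU issues them."""
    cards = deque(range(n_cards))
    events = []
    pending = None  # slice already computed, waiting to be flashed
    for s in slices:
        card = cards[0]
        cards.rotate(-1)  # round-robin over the outline cards
        events.append(("compute", card, s))
        if pending is not None:
            events.append(("flash", *pending))
        pending = (card, s)
    if pending is not None:
        events.append(("flash", *pending))  # last slice
    return events

ev = typeset_line(["V1", "V2", "A1"])
# Note that "compute" of slice 2 is issued before slice 1 is flashed.
```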
So, once a line is completed, the vertical stepper motor moves the lens down the page to the next baseline, and then the second line starts, this time right-to-left, to double throughput. But therein lies the first fallacy of the 5333dpi resolution: There is enough hysteresis in the worm gear drive that you don't really know where you are to 1/5333 of an inch. The system relies on the fact that nobody notices that alternate lines are slightly misaligned horizontally (which also makes it all the more important that you don't have to make a second pass to handle overlapping kerned characters; there it might be noticeable).
Looking closer at the CRT and lens, basically the height of the CRT (~1200 pixels, IIRC) get reduced onto the photo paper to a maximum font size of ~18pt (IIRC), or 1/4in, giving a nominal resolution of ~5000dpi on the paper. But this design means you can't typeset a character that was taller than a certain size without breaking it into vertical pieces, and setting them on separate baseline passes. Because of the hysteresis mentioned above, we had to make sure all split-up characters were only exposed on left-to-right passes, thus slowing things down. Even then, though, you could see that the pieces still didn't quite line up, and also suffered from some effects of the lack of sharpness of the entire optical system. You can actually see this in the published 2nd edition of Vol 2.
Finishing up: once the sheet was done (six pages fit for Knuth's books, three across and two down), the system would pause, and the operator would remove the photo paper, start it through the chemical developer, load another sheet, and push the button to continue the typesetting.
It's worth noting that the firmware that ran on the 8088 as supplied by Alphatype was not up to the job of handling dynamically downloaded Metafont characters, so Knuth re-wrote it from scratch. We're talking 7 simultaneous levels of interrupt (4 outline cards, 2 stepper motors that you had to accelerate properly and then keep going at a constant rate, and the RS-232 input coming from the DEC-20 mainframe with its own protocol). In assembly code. With the only debugging being from a 4x4 keyboard ("0-9 A-F") and a 16 character display. Fun times!
Now, if anybody asks, I can describe the Autologic APS-5 that we replaced it with for the next volume. Teaser: lower nominal resolution, but much nicer final images. No microcode required; we sent it actual bitmaps, slowly but surely, and we were only able to do it because they accidentally sent a manual that specified the secret run-length encoding scheme.
And what I was missing there were exactly these "alignment" problems: I couldn't imagine how the pictures and "big letters" could work at 5000 dpi when only two letters were formed at once and something had to mechanically move all the time.
So yes, please give more details, also about the APS-5 and "actual bitmaps"! And please also try to write (as much as you can estimate) the years when you used these technologies. In which years was the Alphatype CRS used, and in which the APS-5?
Anyway, let's check the sources! Surfing over to https://www.saildart.org/[ALF,DEK]/ and clicking on ALPHA.LST on the left, shows code that looks like it's for an 8080 to me, but I'm rusty on this. The file itself is dated July 1980, but it's just a listing and not the sources themselves (not sure why).
Knuth starts the "Preface to the Third Edition" of Vol 2 with: "When the second edition of this book was completed in 1980, it represented the first major test case for prototype systems of electronic publishing called TeX and METAFONT. I am now pleased to celebrate the full development of those systems by returning to the book that inspired and shaped them." Here he's talking about our very first Alphatype production output, confirming it was 1980.
Note that the CRS wasn't an especially new model when we got ours, so it wouldn't be too surprising for the CPU to not be the latest and greatest as of 1980, especially as I got the feeling they were pretty price-sensitive designing it.
By the way, the mention of "fonts came on floppy disks" elsewhere was generally true back then (and selling font floppies was how the typesetter manufacturers made some of their income), but we didn't use the floppy disk drives for Knuth's CM fonts at all. All required METAFONT-generated characters were sent down along with each print job. And, in fact, there wasn't enough RAM to hold all the characters for a typical job (remember, each different point size had different character shapes, like in the old lead type days!) so the DVI software on the mainframe had to know to mix in reloads of characters that had been dropped to make room for others, as it was creating output for each page. It's essentially the off-line paging problem: If you know the complete future of page accesses, how can you make an optimal choice of which pages to drop when necessary? That's my one paper with Knuth: "Optimal prepaging and font caching" TOPLAS Jan 85 (ugly scan of the 1982 tech report STAN-CS-82-901 at https://apps.dtic.mil/dtic/tr/fulltext/u2/a119439.pdf when it was still called "Optimal font caching"). Actually, the last full paragraph on page 15 of the latter says that the plan to use the Alphatype CRS started two years before production happened, meaning that the CRS was available commercially by 1978.
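The "drop the page whose next use is furthest away" rule for this offline paging problem is the classical Belady/OPT algorithm, which is easy to sketch in Python. This is the textbook version, counting faults only — not the prepaging variant the paper develops:

```python
# Belady's optimal (offline) eviction: given the complete future
# access sequence, evict the cached item whose next use is furthest
# in the future (or never used again). Returns the fault count.
def optimal_evictions(accesses, cache_size):
    cache, faults = set(), 0
    for i, item in enumerate(accesses):
        if item in cache:
            continue            # hit, nothing to do
        faults += 1
        if len(cache) >= cache_size:
            def next_use(c):
                # index of c's next access after position i, or infinity
                for j in range(i + 1, len(accesses)):
                    if accesses[j] == c:
                        return j
                return float("inf")
            cache.remove(max(cache, key=next_use))
        cache.add(item)
    return faults
```

For example, `optimal_evictions([1, 2, 1, 2, 3], 2)` incurs only 3 faults: when 3 arrives, either cached item can go since neither is used again.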
Wow, thanks for that! Yes, it is surely 8080 code (at least, 8080 mnemonics; definitely not 8088, whose mnemonics are different. The 8008 originally used different mnemonics than those in that LST, and the 8080 mnemonics were later applied to the 8008 too, but the 8008 needed more supporting hardware, so it was most likely an 8080 in the CRS).
Also thanks for the STAN-CS-82-901. Now the story of programming CRS is quite clear. And even as a poor scan, it can be compared to doi.org/10.1145/2363.2367
Did I understand correctly that, for the CRS, what was uploaded was never bitmap fonts but always the "curves" of the letters? I assume 100 bitmap images at a resolution of 1024x768 would have been too much for the whole setup, even just for the cache?
And... I'm not surprised that Knuth managed to develop that firmware after discovering this story ("The Summer Of 1960 (Time Spent with don knuth)"):
As you see I'm also really interested in everything you can tell about Autologic APS-5, both how it worked and how it was used in TeX context!
I wanted to answer your question properly and in detail, but will settle for a somewhat cryptic response (sorry): in short, by using plain TeX instead of LaTeX, you get a workflow where you have a better understanding of what's going on, more control, less fighting the system, much better error messages (as they're now actually related to the code you type, not to macros you didn't write and haven't seen), less craziness of TeX macros (please program in a real language and generate TeX), faster processing (compile time in milliseconds), opportunity to unlearn LaTeX and possibly consider systems like ConTeXt, more fun etc. (Some earlier related comments: https://news.ycombinator.com/item?id=14480085 https://news.ycombinator.com/item?id=18039679 https://news.ycombinator.com/item?id=15734980 https://news.ycombinator.com/item?id=15151894 https://news.ycombinator.com/item?id=19790191)
See e.g. https://www.abebooks.com/servlet/SearchResults?bi=0&bx=off&c...
“I didn’t know what to do. I had spent 15 years writing those books, but if they were going to look awful I didn’t want to write any more.” (Digital Typography, p. 5)
(An example of their better-but-still-not-good-enough fonts, though I suspect that by then, after seeing the earlier worse fonts, he had already decided to solve the problem himself: https://tex.stackexchange.com/questions/367058/wheres-an-exa...)
Knuth started his TeX-and-METAFONT typography project in order to specify the appearance of his books; he wrote TeX so that he could use METAFONT. The former turns out to be useful even without the latter.
The last time I wrote a resume I used LaTeX to do it too, and I provide the LaTeX source on my website. I've seen bits of it show up in other people's resumes, which is exactly what I want to happen! But what was funny was how often people would say "boy, this looks so clean and professional!". Pretty sure I got some interviews just because of how "pretty" my resume was.
Definitely. I've had interviewers start by thanking me for my CV being well-formatted and also short.
Ha. I've just noticed that as I type this, I have literally in front of me on my desk Eijkhout's "TeX by Topic", Lamport's LaTeX book, and "The LaTeX Companion" by Goossens et al. This must be where I do most of my pretty document making.
I think I "get" it better now. It's just that at that time there was another school of more pragmatic, engineering-aligned thought in Unix, using DEC-10 RUNOFF-derived concepts which morphed into t/roff and stuck there.
Maybe it's like Emacs and vi? Dec 10 SOS editor was like ed which led to ex and vi. If you walked into the TECO door you went lisp and Emacs.
To launch into TeX and LaTeX you needed a professor leading you there, rather than an engineer doing nroff and t/roff.
I started adding what basically are assertions to my TeX files. These have caught a fair number of errors. I also periodically run "static analysis" scripts to catch writing and coding errors, but these tend to have many false positives in my experience. E.g., chktex and these: http://matt.might.net/articles/shell-scripts-for-passive-voi...
But I'm thinking there may be good coding styles or habits that could prevent errors too. Any ideas?
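One cheap habit in that spirit: run a small lint pass for mechanical writing slips before each compile. Here's a sketch of such a checker — my own toy in the spirit of the shell scripts linked above, not taken from that page — that flags doubled words ("the the") while skipping TeX comments:

```python
# Flag doubled words like "the the" in .tex sources, ignoring
# anything after a % (TeX comment). Illustrative sketch only;
# it will false-positive on legitimate repeats like "had had".
import re
import sys

def doubled_words(text):
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        line = line.split("%", 1)[0]  # strip TeX comments
        for m in re.finditer(r"\b(\w+)\s+\1\b", line, re.IGNORECASE):
            hits.append((lineno, m.group(1)))
    return hits

if __name__ == "__main__":
    for path in sys.argv[1:]:
        with open(path) as f:
            for lineno, word in doubled_words(f.read()):
                print("%s:%d: doubled word '%s'" % (path, lineno, word))
```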
Another important observation is that TeX itself is as free of errors as any piece of software you are ever likely to use. LaTeX and many packages can sometimes have quirks because of a bug, but the underlying engine works the way it is intended to work, always.
So knowing this, my approach is to use a system that continuously shows the typeset output as I type, if I can. That way errors are usually obvious and easy to find. If that's not practical with your favorite editor, just manually run LaTeX on your source frequently to see how your document is progressing. The times I've had to do serious puzzling over what was happening were almost always because I had pages of complex text entered before checking to see the output.
TeX most of the time doesn't care about whitespace. It makes its own, excellent, decisions on where to wrap lines according to font size and margins, etc. So this means that you can just start every sentence on a new line without worrying about the right margin; TeX doesn't care. This makes editing and proofing your documents much easier, especially if your editor just autowraps the contents of long sentences without inserting line breaks: then it's easy to cut and paste and rearrange entire sentences while writing. So a three-sentence paragraph would look like this in the source:
Don't go around saying the world owes you a living.
The world owes you nothing.
It was here first. 
Because TeX/LaTeX is a markup language, it's very easy to generate form letters, labels, etc. by writing a small python script. I've done this to generate spine labels for my books containing a 2-D barcode, Dewey Decimal call number, author, ISBN, and my name for my books. The python code looks up the dimensions of the book on the internet to control the best layout. The TeX/LaTeX part wasn't hard, and with TeX's Turing complete macro language I could have programmed almost all of this in pure TeX, but I found the combination of using python as a preprocessor for a TeX/LaTeX backend very powerful.
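A minimal sketch of that "Python as preprocessor" approach — the template and field names here are invented for illustration, not the actual label script described above:

```python
# Generate one small LaTeX document per record by filling a
# template, then compile each with pdflatex. Hypothetical fields.
TEMPLATE = r"""\documentclass{article}
\usepackage[margin=1cm]{geometry}
\begin{document}
\noindent\textbf{%(author)s}\\
%(title)s\\
Call no.: %(call_number)s
\end{document}
"""

books = [
    {"author": "Knuth", "title": "TAOCP Vol. 2", "call_number": "005.1 K74"},
]

for i, book in enumerate(books):
    with open("label%d.tex" % i, "w") as f:
        f.write(TEMPLATE % book)   # %-interpolation fills the template
# Then run, e.g.:  pdflatex label0.tex
```

The nice division of labor is that Python does the data lookup and logic, while TeX does what it's good at: the actual typesetting.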
Fancy graphics are a challenge in any document, but a fantastic graphics package is available for LaTeX called TikZ. It is a macro package, written in modern TeX/LaTeX macros, and, like most LaTeX packages, it has excellent documentation. Photos and other graphics can be directly inserted into documents too, but TikZ is integrated with LaTeX so margins, reflowed text, captions, and fonts come out perfectly.
For academic and more serious writing learn BibTeX, a bibliography generator/database. BibTeX citation information is widely available for the papers I care about so I don't even have to enter the Author/Title/Publisher/etc. myself.
I waste too much time thinking about fonts because I like them. Once upon a time, when TeX was created, there were very few fonts available for digital use. METAFONT, a font creation system, has been a part of TeX from the very beginning and is still available to those who would like to design their own fonts; I used it to create my company's first logo (I only needed 5 uppercase letters). Today, though, there are so many great fonts available, and TeX/LaTeX can work with all modern fonts, so font creation should probably be left to professionals and serious hobbyists, and making your own isn't a good use of your time (I don't use METAFONT anymore). Make sure to consult up-to-date documentation on font handling, because there has been a great deal of evolution in the use of fonts in TeX/LaTeX over the past few decades.
TeX is now 40 years old and has had many extensions and additions, so it is no fun to build from scratch. Fortunately, for the Mac there is the comprehensive MacTeX distribution containing every standard/optional part that you should need if you are using macOS; on Windows there is proTeXt, but I haven't used it. Both MacTeX and proTeXt are system-specific distributions of the TeX Live distribution available on Linux, which is updated every year.
Programmers are probably most productive in their favorite editors and these are likely to have a LaTeX mode. Emacs has a very capable mode, AUCTeX, for editing LaTeX documents. I even generate PDFs from my org-mode files by using org-mode support for LaTeX. However, I do this only for documents that are org-mode to begin with, not for ordinary documents.
There are also some nice standalone programs for preparing LaTeX documents. I've used TeXShop, which comes with the bundle of TeX-related tools installed with the MacTeX distribution (TeXShop itself is macOS-only, though similar editors exist for Windows and Linux). It is a good TeX editor, but programmers will likely miss the power and flexibility of using a programming editor (git integration, custom key-bindings, etc.)
To try out LaTeX without needing to install any software there is the excellent web-based LaTeX system called Overleaf. It also supports collaborative editing.
(The three-sentence example paragraph above is attributed to Mark Twain.)
Lyx is so useful that I am sometimes amazed it is not more popular. All the power of LaTeX with the ease of use of MS Word. And free and opensource. What's not to love?
I honestly find MS Word extremely difficult to use. Like, I'm always fighting against the incorrect assumptions it's making about my text, and against an unknown amount of invisible state that changes my formatting as I type.
But I consider it to be a timesaving tool that's more advanced than regular LaTeX. Often you don't need to deal with the fiddly bits, but you still need to understand how they work. Especially if you get a compilation error (er, typesetting error?). They're much rarer with LyX, but because simple problems (e.g. missing brace) are impossible, when you do occasionally get one it's bound to be a doozy! Plus when you need a LaTeX feature that LyX doesn't support natively (rare but inevitable) you still need to know how to write it in LaTeX plus how to use LyX's math macros or modules language if you want to make it seamless.
I think this is why LyX struggles to get a big user base. Beginners end up getting stuck and going back to LaTeX. Advanced users see it and think "I don't need this – I already know LaTeX!"
Really clever project though, and I hope there are a lot of people whom it does help.
See a marginally relevant xkcd here: https://xkcd.com/2109/
You can see roughly what you are doing while writing, which makes some things much easier. Especially typing in large formulae (and double-checking that you didn't make a mistake). Although the way you enter them is more typing than point-and-clicking.
For something like changing how the chapter titles look, you are pretty much back in the world of LaTeX, trying out some macro to change all of them.
Would you mind listing a few? I ask because, if my memory serves correctly, the only thing DVI ever accomplished for me was to make me want to poke my eyes out.
I'll refrain from elaborating on the difference between those (it's probably best if I don't), except to say that my jaws will probably shatter the floor if you tell me you don't see the difference. :-)
(Probably worth saying that I have the same issue with PS; it's not just DVI. I do sometimes wonder if I'm the only one who sees these.)
Given that dvi doesn’t involve pixels, and lets you position any character anywhere on the page, with precisely known rules for rounding into resolution-specific device-space, you’ll have to be more specific about what you’re blaming dvi itself for.
The "output device" is... my monitor?
The toolchain is TeXLive for generating the files, and the usual viewer for each file type on Windows (Acrobat Reader for PDFs, and Evince for DVI). If you think it's Evince's fault I'd love to hear better alternatives, because I haven't found a single viewer that views DVIs any differently. And for input files, you can generate files via LaTeX pretty easily:
% DVI: latex thisfile.tex
% PDF: pdflatex thisfile.tex
Remember, though, at the end of the day, I'm just an end-user. I just know that every time I try to view DVI and PS files I have to tear my eyes out, and that I don't have this struggle with PDFs. I neither know which particular person or place in the pipeline to assign the blame to, nor does knowing that make it any easier for me to read the text...
Oh, if that's the problem, then please just point to a better viewer for yesterday's display technology. Or even a decade ago's. I'll find you an older monitor from whatever era you had a good viewing experience on and try it on that. Because I'm one hundred percent sure an older display technology is not going to make it look better. You can see that it was pointed out above that it's looked awful since 2003. I can vouch that it's consistently been awful since over a decade ago, and PDFs have consistently been fine... on every kind of display and resolution I've tried. I've absolutely never, ever had a good experience viewing DVIs.
But that's nothing to do with DVI itself; it's entirely the responsibility of the renderer.
Secondly, can you please define "tear my eyes out"? You've not provided any attempt at a technical description of what it is you're experiencing. How is anyone supposed to help you when you don't say what the problem is?
On the other hand, given Knuth's goals of not having to re-do all the work when the technology changes, METAFONT is an “end-to-end” solution: starting with the font design (at a higher level than the outlines that are ultimately shipped for vector fonts), it goes all the way to generating the actual pixels for a given resolution (like 600 dpi or 2400 dpi), resulting in bitmap fonts (pre-rasterized) rather than vector fonts (rasterized on-the-fly by the PDF viewer).
There are a few consequences of these differences, for on-screen viewing:
• Computer monitors are very low-resolution devices (even the most high-end “retina display”, “4K” or whatever) compared to print. There are several tricks required to make text look good on screen (see e.g. the chapters at rastertragedy.com for details): things like anti-aliasing/grayscaling (using pixels that are neither black nor white) and subpixel-rendering (using individual RGB subpixels that make up each screen pixel).
• There's a conflict between thinking of on-screen viewing as serving to approximate what the printed page will look like, versus allowing deformations or even re-positioning of characters from where they'd be printed, to simply look better on screen.
But all of this has nothing to do with the DVI format, ultimately (which is just an efficient encoding of the page layout, i.e. what characters from what fonts go where on the page). It's just that (1) many DVI viewers tend to use bitmap fonts for historical reasons, resulting in poor on-screen viewing, (2) some DVI viewers use their own libraries for rendering instead of using the best font-rendering libraries available on the system.
• Even if using a bitmap font, the DVI viewer has a choice: (1) it can take the bitmaps that were generated for a certain resolution (typically for print, so something like 600 dpi which is probably higher than your screen resolution), and try to render those bitmaps at the number of pixels available (with the typical problems that result), or (2) it can try to ask METAFONT to run again to generate the fonts at whatever low resolution is appropriate for your monitor (but still without anti-aliasing and subpixel rendering, as MF was designed for higher-resolution devices and doesn't bother with all that). DVI viewers seem to always take the first option, probably to prevent the user's disk from filling up with lots of bitmap fonts for every possible zoom level. So the font rendering ends up sub-par. If you print it on a high-resolution device the output will be fine.
• For screen display, the DVI viewer could choose to look up and use the corresponding vector font.
• Finally, the DVI viewer could just convert to PDF (with dvipdfm or whatever) and have the same on-screen experience as with something that was originally a PDF file. In fact, on my current macOS system, the two DVI-viewing options I have (Skim and TeXShop's viewer) both seem to do exactly that internally, and the output is excellent.
I also remember the horrible font rendering mentioned elsewhere in the thread, but that would go away when printed or converted to PDF/PS, so I always assumed it was a bug in the viewer.
That said, I always admired how small .dvi files were in comparison to PDF, and that they loaded much faster. Kudos!
I have some .dvi files lying around from 2009, but trying to open them with evince results in
kpathsea: Running mktexpk --mfmode / --bdpi 600 --mag 1+0/600 --dpi 600 aebx12
mktexpk: No such file or directory
and 'apt search mktexpk' doesn't find anything.
So, can't open DVIs anymore, it seems :(
Moving to position is just `123 456 Td`. Placing a string at current position is just `(ABC) Tj`. Placing each glyph at different positions is just `[12 (A) 34 (B) 56 (C)] TJ`. I can do minor edits in PDF using just vim and qpdf.
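To make those operators concrete, here's a sketch that writes a minimal one-page PDF by hand in Python, using only Td/Tj in the content stream. This is a toy, not a PDF library; the xref offsets are computed so the file is well-formed:

```python
# Build a minimal one-page PDF showing the Td (move) and Tj (show
# string) text operators discussed above. Byte offsets for the
# xref table are computed as we go, so viewers accept the file.
stream = b"BT /F1 24 Tf 100 700 Td (Hello) Tj ET"

objects = [
    b"<< /Type /Catalog /Pages 2 0 R >>",
    b"<< /Type /Pages /Kids [3 0 R] /Count 1 >>",
    b"<< /Type /Page /Parent 2 0 R /MediaBox [0 0 612 792] "
    b"/Resources << /Font << /F1 4 0 R >> >> /Contents 5 0 R >>",
    b"<< /Type /Font /Subtype /Type1 /BaseFont /Helvetica >>",
    b"<< /Length %d >>\nstream\n%s\nendstream" % (len(stream), stream),
]

out = bytearray(b"%PDF-1.4\n")
offsets = []
for i, body in enumerate(objects, start=1):
    offsets.append(len(out))                  # byte offset of object i
    out += b"%d 0 obj\n%s\nendobj\n" % (i, body)

xref_pos = len(out)
out += b"xref\n0 %d\n0000000000 65535 f \n" % (len(objects) + 1)
for off in offsets:
    out += b"%010d 00000 n \n" % off          # 20-byte xref entries
out += b"trailer\n<< /Size %d /Root 1 0 R >>\nstartxref\n%d\n%%%%EOF\n" % (
    len(objects) + 1, xref_pos)

with open("hello.pdf", "wb") as f:
    f.write(out)
```

Opening the result in a text editor shows exactly the kind of `Td`/`Tj` stream you'd edit with vim after `qpdf --stream-data=uncompress`.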
Can you please write a few details about that workflow? What do you do and how? I used to create .ps "by hand" (actually writing scripts) but I've never tried to start from a PDF and then edit it. How do you "remove compression, object streams and whatnot"? And some useful references (for those Tj, TJ, etc. operators)?
> QPDF is capable of creating linearized (also known as web-optimized) files and encrypted files. It is also capable of converting PDF files with object streams (also known as compressed objects) to files with no compressed objects or to generate object streams from files that don't have them (or even those that already do). QPDF also supports a special mode designed to allow you to edit the content of PDF files in a text editor. For more details, please see the documentation links below.
$ qpdf --stream-data=uncompress input.pdf output.pdf
The reference is Adobe's official PDF Reference, downloadable from its website.
PDF also supports PostScript fonts, which are already compiled for your printer.
Still, DVI is simpler for many uses. I think PDF is too complicated.
Even so, you can't beat overleaf for the final stages of paper submission when all the authors are making tiny typo fixes and style edits in all parts of the document concurrently.
We're continuing to build on and improve the platform now that the ShareLaTeX integration is pretty much complete, so look out for more updates soon, and thanks again for all your support.
PS: Please do continue to tell all your friends nice things about Overleaf :)
You can pretty much write and execute your technical paper and your source code simultaneously in arbitrary languages.
I recommend the spacemacs distribution, as the vi interface gets it right.
(For more complex documents, I've had better luck manipulating TeX directly rather than via another layer, e.g. Org mode, which is fantastic for other things.)
"there’s even an open archive maintained by Cornell University where authors of papers in physics, chemistry, and other disciplines can directly submit their LaTeX manuscripts for open viewing."
for two reasons:
1) it sounds like "there's even a church at the Vatican", given that arXiv.org is the biggest-ever collection of scientific articles submitted by their authors
2) and there's no chemistry section in arXiv.org, but about half is mathematics, plus some computer science
Apologies if this is stupid question.
There's so many because a few years ago they were a big hype and everybody and their cat built one.
Personally, I find LaTeX is great due to its native programmability - it facilitates a natural separation of content and presentation and, combined with its markup language, can be very powerful. It also enables extensibility - there are community packages for almost every type of content, e.g. organic chemistry, Feynman diagrams, even music. A typical user only needs to \usepackage and stop worrying. Its CTAN ecosystem is just like that of a programming language like Perl or, more recently, Node.js or Go, which is great.
But I find all hell breaks loose when you want to get a slightly different formatting than what's offered. Then suddenly the entire system becomes something you need to fight against. Previously, you could be a happy \usepackage code monkey, but now you need to know the system inside-out and hack a path ahead. Just like what happens when you use a software library in a slightly different way than the author expected, then suddenly you find yourself in a battle with the entire library, unfortunately, the same thing occurs in typesetting...
For example, with LaTeX you can add footnotes to essentially anywhere with guaranteed aesthetics. UNLESS you want to add a footnote for your title, then it turns out the existing infrastructure in the "article" template doesn't allow it at all, and you need to define and redefine and undefine some internal macros in your document to implement it, as I learned from https://tex.stackexchange.com/
And all the separation of content and presentation and its benefits ends at this point. It's no longer "what you think is what you get".
The same issue also occurs when you are trying to make a Beamer slideshow; most "environments" in LaTeX are designed for papers, not slides. For example, when I want to put some images on a slide, often ones that don't fit in a "regular" geometric position, I find I have to keep hacking the width and position of the images and keep recompiling until the result is acceptable. I don't know if there are better packages for typesetting slideshows; recommendations are welcome.
Another problem I've noticed is the phenomenon of confusing and outdated packages. Often there is more than one package for a specific task: some are old and limited but still used in many documents, some are new, and the rest are competing implementations. The old ones are frequently mentioned in old guides; they work for a while until you hit a corner case, and then it takes a few attempts before you move on to the newer packages. Again, just like programming languages. A recent trend in programming is writing new, cleaner implementations of basic tasks; I don't know if the same thing is happening in the LaTeX community, but if it is, I think that would be great.
On the other hand, I've yet to see a word processor that lets users extend it and automate formatting and typesetting without some ugly macros hacked together in VBScript. So I see LaTeX as a valuable tool, and I'll keep using it for the foreseeable future.
A final word: https://tex.stackexchange.com/ is a great contribution to the LaTeX community, just as https://stackoverflow.com/ is for programming.
LaTeX was developed with the goal of freeing the user (a researcher, typically) from wasting time on layout and focusing on content (scientific research, usually).
Plain TeX is much more flexible, but you may argue that it is much lower level (or is it?).
For general typesetting, I can’t recommend ConTeXt enough. Its philosophy is closer to TeX’s than LaTeX’s, and it gives you full control over layout, too. It’s a much smaller package than the full TeX/LaTeX ecosystem. And it’s scriptable with Lua!
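A small taste of that Lua scriptability in ConTeXt MkIV (compiled with the `context` command rather than latex/pdflatex):

```latex
% ConTeXt MkIV: Lua runs inline, and context() feeds its
% result straight back into the typesetting stream.
\starttext
\startluacode
  local sum = 0
  for i = 1, 10 do
    sum = sum + i
  end
  context("The first ten integers sum to " .. sum .. ".")
\stopluacode
\stoptext
```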
Thanks. I'm currently using XeTeX/XeLaTeX for its newer codebase, native Unicode support, OpenType fonts, PDF output, etc. But perhaps it's time to try ConTeXt. I previously struggled with Lua in the Awesome window manager (coming from a Python background, I found the syntax weird), but now I think I seriously need to pick up a proper textbook and learn Lua. The Lua engine is embedded in everything; learning it opens a new world.
That's why for anything custom, I prefer to use plain TeX, which is totally fine.
The TeXbook is a great tutorial on TeX; it's up there among the great comp sci books, in the same league as "The C Programming Language".
I think people seem to be scared of plain TeX, but it's totally approachable and easy to use.
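A complete plain TeX document really is this small (save as hello.tex, run `tex hello.tex`; \emphword is just an illustrative macro):

```latex
% plain TeX: no document class, no preamble ceremony.
\def\emphword#1{{\it #1\/}}  % one-argument macro; \/ is the italic correction

Plain \TeX\ is \emphword{approachable}: a handful of primitives
plus the macros described in The \TeX book go a long way.
\bye
```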
There are a lot of special-purpose markup languages, like Markdown and AsciiDoc. They involve toolchains written in other languages to convert them to HTML. If you want to add features, you have to hack the toolchain.
For blog posts and simple websites, that's fine. But sometimes I want to build something nontrivial -- as an academic, the first thing that comes to mind is an academic paper, where I might want to have special markup for theorems, proofs, examples, figures, and tables, and have ways to automatically cross-reference them, generate tables of contents, embed automatically formatted bibliographies, and so on. Or generate figures from code (like TiKZ), or embed data analysis code right in the source and embed the results (via knitr or Sweave).
With pandoc, bookdown, and knitr, you can get pretty close to this. But what made LaTeX so powerful is that it is programmable in LaTeX. It can be extended: you can define new types of environments (example problems! homework exercises! every example shows equivalent source code in three different languages! category diagrams generated automatically! musical notation is typeset!), you can make new commands to automate drudgery (typesetting chemical equations! building complicated equations!), and you can do it all with at least some basic separation between content and presentation style. Converting the larger LaTeX documents I've made to Markdown or Org would be basically impossible without writing a bunch of scripts to extend the Markdown renderer and hack everything together. There's no equivalent of just writing a LaTeX package.
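That "define new types of environments" point can be made concrete with a short sketch (amsthm is a real package; the exercise/solution environments are made up for this example):

```latex
\documentclass{article}
\usepackage{amsthm}

% Semantic markup: the source says "this is an exercise";
% the preamble decides how exercises look and are numbered.
\newtheorem{exercise}{Exercise}[section]

\newenvironment{solution}
  {\par\noindent\textbf{Solution.}\enspace}
  {\hfill$\square$\par}

\begin{document}
\section{Induction}
\begin{exercise}
Show that $1 + 2 + \dots + n = n(n+1)/2$.
\end{exercise}
\begin{solution}
Induct on $n$; the base case $n = 1$ is immediate.
\end{solution}
\end{document}
```

Reskinning every exercise in a book is then a preamble change, not a find-and-replace, which is exactly what Markdown toolchains make hard.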
I'm not aware of another document preparation system that comes close to LaTeX. Org mode is probably the closest, since it has tools to embed code blocks in other languages and include their output, but after a week of fighting with conversions between markup languages I can really see the appeal of LaTeX's uniform syntax and built-in programmability.
The other promising option seems to be Pollen, a Racket-based programmable system. But it seems more like TeX than LaTeX: it provides the very basic tools to build a programmable document system, but not the higher-level conveniences (like cross-referencing commands and standard sectioning and environment commands) you would need to build yourself. Maybe someday...
It's used by academics to write conference papers, it's used for most of the Racket books for both Web and camera-ready typesetting for print, and I've even used it for embedded API docs.
(That manual itself was written in Scribble.)
(I'm actually thinking of moving from Scribble to Markdown for embedded API docs, as part of an open source ecosystem goal, to make docs for reusable modules more lightweight to add, but it's hard to give up some things about Scribble. For example, for one module, I needed substantial documentation about each opcode, and wanted it to be formatted similarly to an API function, but with different meta properties, and some tricky formatting. Scribble let me make a simple semantic form for that, separate from formatting, like LaTeX would.)
I'm trying to resist writing another book, since I know how much time it would suck up, but if I do, I'll probably look at PreTeXt and maybe Scribble as options.
There is some exploration at the site itself that is worth looking through if anyone plans to attempt LaTeX markdown being interpreted-and-served.
The Unix Philosophy in me thinks "use lynx. oh and forget https, use gopher to serve. when you hit a .tex switch to a program that's supposed to do that" etc etc etc but I definitely think the entirety of the internetworkings - i.e. not-exclusive-to-hackers - would benefit from *TeX markdown being as common as HTML (perhaps not AS common, but more common than currently).
Part of the reason that LaTeX works at all is that it targets, as you note, (essentially) .pdf.
Trying to pack so much precision into a web/OS tool and make it performant would seem beyond the reach of a real-world business case, as far as I can tell.
The whole help system was based on it, generating Word-, PDF-, and HTML-based deliverables.
It is relatively popular at the enterprise level, where the licensing costs for such tools are peanuts vs the overall project costs.
It's a visual description based on a Turing-complete language; 95% of the semantic meaning is lost once it's poured into a TeX document, and the last 5% are gone once it's compiled into a mess of PDF text boxes that ignore text flow and just look correct.
Literally any other document description language would be a better lingua franca than LaTeX, because a language should be used to communicate, and TeX is meant to print curves.
Keep in mind that one goal of TeX was to make it fairly easy for a human to write down. Another was for another human to be able to understand that source quite well. Now picture both tasks with semantics-rich MathML.