The lingua franca of LaTeX (increment.com)
246 points by JohnHammersley on May 25, 2019 | 133 comments



This is a good article! A couple of corrections to the early history in the introductory paragraphs:

• It is indeed true that the first edition of Volumes 1–3 of The Art of Computer Programming and the second edition of Volume 1 (1968, 1969, 1973, 1973) had been done with hot-metal typesetting (pieces of lead, laid out to form the page) with a human doing the typesetting/compositing (with a Monotype machine). And that for the second edition of Volume 2 in 1977, the publisher Addison-Wesley was switching to a cheaper alternative. But the cheaper alternative was not electronic; it was phototypesetting (the letterforms made with light and lenses instead of metal). It was Knuth who noticed a book that had been electronically typeset (Patrick Winston's AI book), realized that electronic typesetting machines actually had enough resolution to print like “real books” (unlike typewriters or printers of the time), and decided to solve his problem for himself (as he felt that he could make shapes with 0s and 1s).

• Knuth spent the summer of 1977 in China, thinking he'd specified his program in enough detail that his two students would be able to complete it by the time he came back. He came back and saw they had only implemented a small subset, and decided to spend his sabbatical year writing the program himself. He mentions that as he started writing it he realized the students' proto-TeX was an impressive effort, because there was a lot missing from his specifications. Knuth's article The Errors of TeX (reprinted in his collection Literate Programming) goes into excellent detail on the development process.

Also: a suggestion to try plain TeX (instead of LaTeX), if you haven't tried it. The book A Beginner's Book of TeX by Seroul and Levy is especially good. It might surprise you.


> It was Knuth who noticed a book that had been electronically typeset (Patrick Winston's AI book), realized that electronic typesetting machines actually had enough resolution to print like “real books”

Searching for what the actual resolutions were, I saw in your older comment (1) that you mention "Alphatype CRS" and "more than 5000 DPI."

If you mention 1977, and there were already 5000+ DPI machines then, I'd really like to read a little more about that: it's impressively more than what the "laser printers" used by mortals could achieve. How did these high-resolution machines work? How did they achieve the resolution physically, and how did they manage to process so many bits at a time when the RAM of microcomputers was counted in kilobytes?

1) https://news.ycombinator.com/item?id=17917367

P.S. Digging deeper: I've just found the reason we see today the text produced with TeX as "too thin": https://tex.stackexchange.com/questions/48369/are-the-origin... and linked there: https://tug.org/TUGboat/tb37-3/tb117ruckert.pdf


The Alphatype CRS ("Cathode Ray Setter") was nominally 5333dpi, but it's a bit of a fudge. As the person who wrote the DVI interface for it, and personally ran the entire Art of Computer Programming, Vol. 2, 2nd Edition through it, let me explain.

You are very correct that in the early 1980's, even 64k of RAM was quite expensive, and enough memory to handle a complete frame buffer at even 1000dpi would be prohibitive. The Alphatype dealt with this by representing fonts in an outline format handled directly by special hardware. In particular, it had an S100 backplane (typical for microcomputers in the day) into which was plugged a CPU card (an 8088, I think), a RAM card (64k or less), and four special-purpose cards, each of which knew how to take a character outline, trace through it, and generate a single vertical slice of bits of the character's bitmap.

A bit more about the physical machine, to understand how things fit together: It was about the size and shape of a large clothes washer. Inside, on the bottom, was a CRT sitting on its back side, facing up. There was a mirror and lens mounted above it, on a gimbal system that could move it left/right and up/down via stepper motors (kind of like modern Coke machines that pick a bottle from the row/column you select and bring it to the dispenser area). And, at the back, there was a slot in which you'd place a big sheet of photo paper (maybe 3ft by 3ft) that would hang vertically.

OK, we're all set to go. With the paper in, the lens gets moved so that it's focused on the very top left of the paper, and the horizontal stepper motor, under control of the CPU, starts moving it rightwards. Simultaneously, the CPU tells the first decoder card to DMA the outline info for the first character on the page, and to get the first vertical slice ready. When the stepper motor says it's gotten to the right spot, the CPU tells the decoder card to send its vertical slice to the CRT, which flashes it, and thus exposes the photo paper. In the meantime, the CPU has told the second card to get ready with the second vertical slice, so that there can be a bit of double-buffering, with one slice ready to flash while the next one is being computed. When the continuously-moving horizontal stepper gives the word, the second slice is flashed, and so on. (Why two more outline cards? Well, there might be a kern between characters that slightly overlaps them (think "VA"), and the whole thing is so slow we don't want to need a second pass, so actually two cards might flash at once, one with the last slice or two of the "V" and the other with the first slice of the "A".)

So, once a line is completed, the vertical stepper motor moves the lens down the page to the next baseline, and then the second line starts, this time right-to-left, to double throughput. But therein lies the first fallacy of the 5333dpi resolution: There is enough hysteresis in the worm gear drive that you don't really know where you are to 1/5333 of an inch. The system relies on the fact that nobody notices that alternate lines are slightly misaligned horizontally (which also makes it all the more important that you don't have to make a second pass to handle overlapping kerned characters; there it might be noticeable).

Looking closer at the CRT and lens, basically the height of the CRT (~1200 pixels, IIRC) gets reduced onto the photo paper to a maximum font size of ~18pt (IIRC), or 1/4in, giving a nominal resolution of ~5000dpi on the paper. But this design means you can't typeset a character taller than a certain size without breaking it into vertical pieces, and setting them on separate baseline passes. Because of the hysteresis mentioned above, we had to make sure all split-up characters were only exposed on left-to-right passes, thus slowing things down. Even then, though, you could see that the pieces still didn't quite line up, and also suffered from some effects of the lack of sharpness of the entire optical system. You can actually see this in the published 2nd edition of Vol 2.

Finishing up, once the sheet was done (six pages fit for Knuth's books, three across and two down), the system would pause, and the operator would remove the photo paper, start it through the chemical developer, load another sheet, and push the button to continue the typesetting.

It's worth noting that the firmware that ran on the 8088 as supplied by Alphatype was not up to the job of handling dynamically downloaded Metafont characters, so Knuth re-wrote it from scratch. We're talking 7 simultaneous levels of interrupt (4 outline cards, 2 stepper motors that you had to accelerate properly and then keep going at a constant rate, and the RS-232 input coming from the DEC-20 mainframe with its own protocol). In assembly code. With the only debugging being from a 4x4 keyboard ("0-9 A-F") and a 16 character display. Fun times!

Now, if anybody asks, I can describe the Autologic APS-5 that we replaced it with for the next volume. Teaser: lower nominal resolution, but much nicer final images. No microcode required; we sent it actual bitmaps, slowly but surely, and we were only able to do that because they accidentally sent a manual that specified the secret run-length encoding scheme.


Thank you many times! The minute you posted this I wanted to write that I had discovered your 2007 interview (1), where some details (though this answer is more detailed, many thanks!) gave me an overall idea of how it was done.

And what I was missing there were exactly these "alignment" problems: I couldn't imagine how the pictures and "big letters" could work at 5000 dpi when only two letters were formed at once and something had to move mechanically all the time.

So yes, please give more details, also about the APS-5 and "actual bitmaps"! And please try to also mention (as best you can estimate) the years when you used these technologies. In which years was the Alphatype CRS used? And the APS-5?

1) https://tug.org/interviews/fuchs.html


And just to show you why I care about the years: in the 2007 interview you said the controller CPU was an 8008, which is an 8-bit CPU (e.g. used in CP/M machines), and now you say 8088, which is a 16-bit CPU with an 8-bit bus (the one in the first IBM PC). If you aren't sure about the CPU, but we knew when the device started selling, maybe we could exclude the latter.


Yeah, the naming conventions among the 8008, 8080, 8086, 8088, and 80186 are enough to make you nuts.

Anyway, let's check the sources! Surfing over to https://www.saildart.org/[ALF,DEK]/ and clicking on ALPHA.LST on the left, shows code that looks like it's for an 8080 to me, but I'm rusty on this. The file itself is dated July 1980, but it's just a listing and not the sources themselves (not sure why).

Knuth starts the "Preface to the Third Edition" of Vol 2 with: "When the second edition of this book was completed in 1980, it represented the first major test case for prototype systems of electronic publishing called TeX and METAFONT. I am now pleased to celebrate the full development of those systems by returning to the book that inspired and shaped them." Here he's talking about our very first Alphatype production output, confirming it was 1980.

Note that the CRS wasn't an especially new model when we got ours, so it wouldn't be too surprising for the CPU to not be the latest and greatest as of 1980, especially as I got the feeling they were pretty price-sensitive designing it.

By the way, the mention of "fonts came on floppy disks" elsewhere was generally true back then (and selling font floppies was how the typesetter manufacturers made some of their income), but we didn't use the floppy disk drives for Knuth's CM fonts at all. All required METAFONT-generated characters were sent down along with each print job. And, in fact, there wasn't enough RAM to hold all the characters for a typical job (remember, each different point size had different character shapes, like in the old lead type days!) so the DVI software on the mainframe had to know to mix in reloads of characters that had been dropped to make room for others, as it was creating output for each page. It's essentially the off-line paging problem: If you know the complete future of page accesses, how can you make an optimal choice of which pages to drop when necessary? That's my one paper with Knuth: "Optimal prepaging and font caching" TOPLAS Jan 85 (ugly scan of the 1982 tech report STAN-CS-82-901 at https://apps.dtic.mil/dtic/tr/fulltext/u2/a119439.pdf when it was still called "Optimal font caching"). Actually, the last full paragraph on page 15 of the latter says that the plan to use the Alphatype CRS started two years before production happened, meaning that the CRS was available commercially by 1978.


> Surfing over to https://www.saildart.org/[ALF,DEK]/ and clicking on ALPHA.LST on the left, shows code that looks like it's for an 8080 to me

Wow, thanks for that! Yes, it is surely 8080 code (at least, 8080 mnemonics, which are definitely not 8088 mnemonics; those are different. The 8008 initially used different mnemonics from the ones in that LST, and the 8080 mnemonics were later applied to the 8008 too, but the 8008 needed more supporting hardware, so it should be an 8080 in the CRS).

Also thanks for the STAN-CS-82-901. Now the story of programming CRS is quite clear. And even as a poor scan, it can be compared to doi.org/10.1145/2363.2367

Did I understand correctly that, for the CRS, what was uploaded was never bitmap fonts but always the "curves" of the letters? I assume 100 bitmap images at a resolution of 1024x768 would have been too much for the whole setup, even just for the cache?

And... I'm not surprised that Knuth managed to develop that firmware after I've discovered this story ("The Summer Of 1960 (Time Spent with don knuth)"):

https://news.ycombinator.com/item?id=2856567

As you can see, I'm also really interested in everything you can tell us about the Autologic APS-5, both how it worked and how it was used in the TeX context!


Thank you so much for your posting. This is the exact kind of thing I enjoy reading here


Wow this is wonderful, thank you!


I wasn't around at the time and the only thing I know is from reading Knuth's books/papers/interviews. :-) But I imagine that (1) the typesetting machine wouldn't attach to your microcomputer like a peripheral, and probably came with its own computer or equivalent, (2) there wouldn't be too many bits to process: you'd first load your preferred high-resolution fonts onto it with floppy disks or whatever (the shapes for each letter), then the actual output sent to the device would just be the letters and their (relative) positions (not each bit of the page “image”).


Wow. That SE post on Computer Modern is really interesting. There are so many details and nuances regarding typography that it kinda makes me dizzy. It truly is an art! The time and thoroughness required to truly master it is humbling.


Most, I believe, were a CRT focused down to the letter size plus a movable plate.


Curious about the suggestion to try plain TeX. Could you expand on what are the advantages? Do you mean it as a learning stepping stone or for actual usage?


I meant for actual usage, at least for a while.

I wanted to answer your question properly and in detail, but will settle for a somewhat cryptic response (sorry): in short, by using plain TeX instead of LaTeX, you get a workflow where you have a better understanding of what's going on, more control, less fighting the system, much better error messages (as they're now actually related to the code you type, not to macros you didn't write and haven't seen), less craziness of TeX macros (please program in a real language and generate TeX), faster processing (compile time in milliseconds), opportunity to unlearn LaTeX and possibly consider systems like ConTeXt, more fun etc. (Some earlier related comments: https://news.ycombinator.com/item?id=14480085 https://news.ycombinator.com/item?id=18039679 https://news.ycombinator.com/item?id=15734980 https://news.ycombinator.com/item?id=15151894 https://news.ycombinator.com/item?id=19790191)
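
To give a taste, here is what a complete plain TeX document can look like (a minimal sketch); run "tex file.tex" on it and you get a DVI file out:

    % a complete plain TeX document; compile with: tex example.tex
    \font\titlefont=cmbx12
    \centerline{\titlefont A small plain \TeX\ example}
    \medskip
    Body text goes here, with inline math like $e^{i\pi}+1=0$,
    and paragraphs separated by blank lines.
    \bye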


Not OP, but for me it helped to understand what is actually happening. Definitely recommended. Nowadays I mostly use ConTeXt (mainly because fonts and creating your own layout are made easy), and never plain TeX, but I still think reading The TeXbook was one of the best decisions I've made concerning desktop publishing.


Upvoted. I hope GP elucidates.


Thank you for the background and recommendations. Just read the very interesting abstract[1] of the book by Seroul and Levy; queued. (Wish it were a tad less pricey than €64).

[1] https://www.springer.com/gp/book/9780387975627


If you buy a used copy it will probably be much cheaper.

See e.g. https://www.abebooks.com/servlet/SearchResults?bi=0&bx=off&c...


Ah, thank you. That's much better.


What was Knuth's complaint about phototypesetting? I know nothing about this, but isn't the problem that TeX solves that of the layout of characters? That would seem to still need to be done either by hand or electronically for phototypesetting.


The fonts weren't as good, even the best that Addison-Wesley could come up with.

“I didn’t know what to do. I had spent 15 years writing those books, but if they were going to look awful I didn’t want to write any more.” (Digital Typography, p. 5)

(An example of their better-but-still-not-good-enough fonts, though I suspect that by then, after seeing the earlier worse fonts, he had already decided to solve the problem himself: https://tex.stackexchange.com/questions/367058/wheres-an-exa...)

Knuth started his TeX-and-METAFONT typography project in order to specify the appearance of his books; he wrote TeX so that he could use METAFONT. The former turns out to be useful even without the latter.


I used LaTeX in college to typeset all my essays. I also used it in my creative writing class, and people were amazed that I was able to add line numbers to my work so we could easily discuss it by referring to them! I'm pretty sure I got better grades in all my writing-based classes simply because I used LaTeX to typeset my work.

The last time I wrote a resume[0] I used LaTeX to do it too, and I provide the LaTeX source[1] on my website. I've seen bits of it show up in other people's resumes, which is exactly what I want to happen! But what was funny was how often people would say "boy, this looks so clean and professional!". Pretty sure I got some interviews just because of how "pretty" my resume was.

[0] https://www.jedberg.net/Jeremy_Edberg_Resume.pdf

[1] https://www.jedberg.net/Jeremy_Edberg_Resume.tex


> Pretty sure I got some interviews just because of how "pretty" my resume was.

Definitely. I've had interviewers start by thanking me for my CV being well-formatted and also short.

Ha. I've just noticed that as I type this, I literally have in front of me on my desk Eijkhout's "TeX by Topic", Lamport's LaTeX book, and "The LaTeX Companion" by Goossens et al. This must be where I do most of my pretty document making.


As a t/roff and eqn and tbl user, I never found the story compelling. The uplift cost to LaTeX in 1982 was north of my pain point, and in 1990 I managed to do things in tbl and eqn I never imagined possible. TeX font fascism was also a bummer. A phototypesetting expert I knew preferred the actual type to digital type; the microfilm-typeset stuff was amazing (I saw galley proofs in bromide and they were crisp and clean).

I think I "get" it better now. It's just at that time there was another school of more pragmatic engineering aligned thought in Unix using Dec10 RUNOFF derived concepts which morphd into t/roff and stuck there.

Maybe it's like Emacs and vi? The DEC-10 SOS editor was like ed, which led to ex and vi. If you walked in through the TECO door, you went Lisp and Emacs.

To launch into TeX and LaTeX you needed a professor leading you there, rather than an engineer doing nroff and t/roff.


Systems engineers (for example, the K&R "C Programming Language" book) didn't need elaborate math notation the way Knuth did.


Eqn was invented precisely to do elaborate math notation: integrals, sweeping braces, majuscule and minuscule letters, suffixes, Greek...


Interesting to hear from the pre-TeX world. I'd like to see some examples of expert use of tbl and eqn; outside of man pages the roff ecosystem seems to have vanished.


The BSDs ship with a lot of the original non-man-page Unix documentation: the system manager’s manual, the programmer’s supplementary documentation, and the user’s supplementary documentation. These are written to be typeset with troff, not printed on a teletype like the man pages can be, so they tend to be a bit more refined. https://svnweb.freebsd.org/base/head/share/doc/ They are missing some chapters (eg eqn and tbl) due to copyright disputes but you can find them in TUHS archives https://minnie.tuhs.org/cgi-bin/utree.pl?file=V7/usr/doc


This http://www.nesssoftware.com/home/mwc/manual.php was set with the Coherent version of nroff. The result is damned excellent.


roff is probably very uncommon these days, but not unheard of. A recent example is Donovan and Kernighan's book The Go Programming Language, which was typeset using an XML and groff toolchain. It was featured on HN a few years ago; see https://news.ycombinator.com/item?id=11470905


Since xml2rfc and the rise of Markdown I would agree, at least in standards land, because troff markup for RFCs used to be a big thing.


Random LaTeX question: Does anyone have any tips for writing LaTeX code to avoid typos or other mistakes? I.e., using the fact that LaTeX is a programming language and not just a markup language to catch mistakes?

I started adding what are basically assertions to my TeX files. These have caught a fair number of errors. I also periodically run "static analysis" scripts to catch writing and coding errors, but these tend to have many false positives in my experience. E.g., chktex and these: http://matt.might.net/articles/shell-scripts-for-passive-voi...
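
(To give an idea of the kind of thing I mean, here is a minimal sketch of such an assertion; \AssertEqual is just a name I made up:)

    % Toy assertion: stop with an error if two fully-expanded texts differ.
    \newcommand{\AssertEqual}[2]{%
      \edef\assertlhs{#1}\edef\assertrhs{#2}%
      \ifx\assertlhs\assertrhs\else
        \errmessage{Assertion failed: `#1' does not expand to `#2'}%
      \fi}
    % e.g. check that the section counter is what I expect at this point:
    \AssertEqual{\arabic{section}}{3}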

But I'm thinking there may be good coding styles or habits that could prevent errors too. Any ideas?


My observation is that TeX/LaTeX is designed for use by humans, but unlike in other programming, the literal content of the document occupies more of the source the author writes than the elements used to control the layout (macros, elemental operators, etc.); consequently, errors like improper nesting of environments can produce puzzling error messages. An XML-style markup language would produce easier-to-understand error messages, but I believe TeX/LaTeX makes the proper trade-off by reducing the authoring effort needed to create content, at the expense of harder-to-debug documents when complex formatting is being done.

Another important observation is that TeX itself is as free of errors as any piece of software you are ever likely to use. LaTeX and many packages can sometimes have quirks because of a bug, but the underlying engine works the way it is intended to work, always.

So knowing this, my approach is to use a system that continuously shows the typeset output as I type, if I can. That way errors are usually obvious and easy to find. If that's not practical with your favorite editor, just manually run LaTex on your source frequently to see how your document is progressing. The times I've had to do serious puzzling over what was happening were almost always because I had pages of complex text entered before checking to see the output.

TeX most of the time doesn't care about whitespace. It makes its own, excellent, decisions on where to wrap lines according to font size and margins, etc. So this means that you can just start every sentence on a new line without worrying about the right margin; TeX doesn't care. This makes editing and proofing your documents much easier, especially if your editor just autowraps the contents of long sentences without inserting line breaks: then it's easy to cut and paste and rearrange entire sentences while writing. So a three-sentence paragraph would look like this in the source:

    Don't go around saying the world owes you a living.
    The world owes you nothing. 
    It was here first. [1]
Rarely, whitespace can matter, for example when defining special environments and macros for typesetting mathematics or a programming sample. In this case you can make use of the % (percent character) which starts a comment that continues to the end of line to cut off trailing whitespace. I find this handy when defining my own commands to keep the LaTeX easy to read by breaking it up into lines without worrying about extra spaces being inserted. (Don't be scared by this next example; it's not something that ordinary users would create. The point is it is possible and can be found when needed on tex.stackexchange.com. It's very fancy wizard level typesetting. TeX/LaTeX allows crazy powerful manipulation of text. See [2] to see the results that this produces--really, take a look.) Note the use of %.

    \newcommand\overlineset[2]{%
      \stackengine{2pt}{$#1$}{\makebox[\widthof{$#1$}]{%
          $\scriptscriptstyle\hrulefill\,#2\,\hrulefill$}}%
              {O}{c}{F}{T}{S}%
    }
Don't fuss with margins, heading font, caption location for figures, and so forth while creating your masterpiece. Get the content down while making sure that LaTeX is formatting it without error. Afterwards, since everything is parameterized, you can go back and modify the appearance and get it to look just right and it will be consistent across the entire document. I find this so very superior to the What-You-See-Is-All-You-Get style of document preparation in Word.
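
For instance, appearance tweaks like these live in the preamble and apply uniformly to the whole document (a small sketch; the particular packages are just one option among several):

    \usepackage[margin=1in]{geometry}              % page margins, set in one place
    \usepackage{sectsty}
    \allsectionsfont{\sffamily}                    % every heading in sans serif
    \usepackage[font=small,labelfont=bf]{caption}  % caption style for all figures/tables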

Because TeX/LaTeX is a markup language, it's very easy to generate form letters, labels, etc. by writing a small python script. I've done this to generate spine labels for my books containing a 2-D barcode, Dewey Decimal call number, author, ISBN, and my name for my books. The python code looks up the dimensions of the book on the internet to control the best layout. The TeX/LaTeX part wasn't hard, and with TeX's Turing complete macro language I could have programmed almost all of this in pure TeX, but I found the combination of using python as a preprocessor for a TeX/LaTeX backend very powerful.

Fancy graphics are a challenge in any document, but a fantastic package for graphics is available for LaTeX called TikZ. It is a macro package, written in modern TeX/LaTeX macros. It, like most LaTeX packages, has excellent documentation. See the TikZ package documentation at [3]. Photos and other graphics can be directly inserted into documents too, but TikZ is integrated with LaTeX so margins, reflowing text, captions, and fonts come out perfectly.
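
To give a flavor, a small TikZ figure is only a few commands, and because it is typeset by TeX the labels come out in the document's own fonts (a minimal sketch):

    % requires \usepackage{tikz} in the preamble
    \begin{tikzpicture}
      \draw[->] (0,0) -- (3,0) node[right] {$x$};
      \draw[->] (0,0) -- (0,2) node[above] {$y$};
      \draw[domain=0:3,smooth,thick] plot (\x,{0.2*\x*\x});
    \end{tikzpicture}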

For academic and more serious writing learn BibTeX, a bibliography generator/database. BibTeX citation information is widely available for the papers I care about so I don't even have to enter the Author/Title/Publisher/etc. myself.
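
The workflow is just a .bib entry plus a \cite in the text (a minimal sketch; the citation key is arbitrary and the entry's fields are abbreviated):

    % in refs.bib
    @article{knuth1989errors,
      author  = {Knuth, Donald E.},
      title   = {The Errors of {\TeX}},
      journal = {Software: Practice and Experience},
      year    = {1989},
    }

    % in the document
    As Knuth recounts~\cite{knuth1989errors}, debugging was an adventure.
    \bibliographystyle{plain}
    \bibliography{refs}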

I waste too much time thinking about fonts because I like them. Once upon a time, when TeX was created, there were very few fonts available for digital use. Today, font creation should probably be left to professionals and serious hobbyists, so I don't use METAFONT anymore. It is a font creation system that has been a part of TeX from the very beginning and is still available to those who would like to design their own fonts. I used it to create my company's first logo (I only needed 5 upper-case letters). Today there are so many great fonts available, and TeX/LaTeX can work with all modern fonts, so making your own isn't a good use of your time. Make sure to consult up-to-date documentation on font handling, because there has been a great deal of evolution in the use of fonts in TeX/LaTeX over the past few decades.

TeX is now 40 years old and has had many extensions and additions, so it is no fun to build from scratch. Fortunately, for the Mac there is the comprehensive MacTeX distribution containing every standard/optional part that you should need if you are using macOS; on Windows there is proTeXt, but I haven't used it. Both MacTeX and proTeXt are just system-specific distributions of the TeX Live distribution available on Linux, which is updated every year.

Programmers are probably most productive in their favorite editors and these are likely to have a LaTeX mode. Emacs has a very capable mode, AUCTeX, for editing LaTeX documents. I even generate PDFs from my org-mode files by using org-mode support for LaTeX. However, I do this only for documents that are org-mode to begin with, not for ordinary documents.

There are also some nice standalone programs for preparing LaTeX documents. I've used the program TeXShop that comes with the bundle of TeX related stuff that is installed when you install the MacTeX distribution. It probably runs on Windows and Linux as well. This is a good TeX editor, but programmers will likely miss the power and flexibility of using a programming editor (git integration, custom key-bindings, etc.)

To try out LaTeX without needing to install any software, there is the excellent web-based LaTeX system called Overleaf [4]. It also supports collaborative editing.

[1] attributed to Mark Twain

[2] https://tex.stackexchange.com/questions/122117/overline-cont...

[3] https://www.bu.edu/math/files/2013/08/tikzpgfmanual.pdf

[4] https://www.overleaf.com


Maybe you want to check out proselint? [0]

[0] https://github.com/amperser/proselint


My life has gotten easier since I started using ShareLaTeX (now Overleaf). It's kind of like Google Docs for LaTeX, and it checks your code for errors as you type.


The old ispell used to be able to deal with TeX/LaTeX input, though that is for the text, not the {La,}TeX code. You can try using TeXworks: typeset whenever you want to test, and see the output in a PDF preview window. tex.stackexchange.com is a good resource.


Let's not forget to show some love for the tool that makes LaTeX usable by mere mortals:

https://www.lyx.org/

LyX is so useful that I am sometimes amazed it is not more popular. All the power of LaTeX with the ease of use of MS Word. And free and open source. What's not to love?


> the ease of use of MS Word.

I honestly find MS Word extremely difficult to use. Like, I'm always fighting against the incorrect assumptions it's making about my text, and against an unknown amount of invisible state that changes my formatting as I type.


I've used LyX extensively and love it. I found that it made typing up lecture notes literally three times as fast. I can't imagine how I could have finished my PhD thesis without it. (Actually I can: with months more spent writing up!)

But I consider it to be a timesaving tool that's more advanced than regular LaTeX. Often you don't need to deal with the fiddly bits, but you still need to understand how they work. Especially if you get a compilation error (er, typesetting error?). They're much rarer with LyX, but because simple problems (e.g. missing brace) are impossible, when you do occasionally get one it's bound to be a doozy! Plus when you need a LaTeX feature that LyX doesn't support natively (rare but inevitable) you still need to know how to write it in LaTeX plus how to use LyX's math macros or modules language if you want to make it seamless.

I think this is why LyX struggles to get a big user base. Beginners end up getting stuck and going back to LaTeX. Advanced users see it and think "I don't need this – I already know LaTeX!"


I always thought I'd find LyX more useful than I do. I tried it many times, and it works; it just doesn't offer a speed or ease advantage over writing LaTeX for me out of the box. Maybe that's because I used LaTeX for a while before discovering LyX, but the speed improvements are outweighed by the clunkiness and unfamiliarity of the interface, where the ease of using a familiar text editor gives LaTeX the edge.

Really clever project though, and I hope there are a lot of people whom it does help.


If you care about precise formatting, it's much easier to use than Word. LyX encourages you to use defined styles and avoid formatting quirks, compared with Word where you need to be very careful not to insert little inconsistencies.


The main problem with LyX is the LyX file format. If the "native" file format were plain TeX or LaTeX, it would be much more useful, facilitating collaboration between LyX and non-LyX users on the same file.


Yeah, Word-style "WYSIWYG" versus LaTeX-style source-code + compiling is really a pretty big philosophical difference. Some people much prefer the latter, and will avoid WYSIWYG LaTeX tools. The marked up source code gives you much better control, and you can use semantic macros that for example allow you to change how you format vectors or chapter titles etc. across the whole document easily.
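
For example, a semantic macro means one definition change restyles every use in the document (a tiny sketch):

    % one definition controls how every vector in the document is set
    \newcommand{\vect}[1]{\mathbf{#1}}   % later: change to \boldsymbol{#1} (needs amsmath or bm)
    % usage in text: $\vect{F} = m\vect{a}$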

See a marginally relevant xkcd here: https://xkcd.com/2109/


Maybe you know this, but LyX is somewhere in the middle.

You can see roughly what you are doing while writing, which makes some things much easier. Especially typing in large formulae (and double-checking that you didn't make a mistake). Although the way you enter them is more typing than point-and-clicking.

For something like changing how the chapter titles look, you are pretty much back in the world of LaTeX, trying out some macro to change all of them.


I use Plain TeX with .dvi output. One advantage is that it can use the DVI format, which is in many ways much better than PDF for many things. (You can also convert DVI to PDF and to other formats; I wrote a program to convert DVI to PBM (without using PostScript), and use that to print out the documents (through foo2zjs, which converts the PBM into the format needed by the printer).)


> DVI format, which is in many ways much better than PDF for many things

Would you mind listing a few? I ask because, if my memory serves correctly, the only thing DVI ever accomplished for me was to make me want to poke my eyes out.


Ouch! Sorry! Anything specific? (DVI is my design.)


Haha wow! Yeah sure, here's an example of what I mean: https://imgur.com/a/eX2BcIC

I'll refrain from elaborating on the difference between those (it's probably best if I don't), except to say that my jaws will probably shatter the floor if you tell me you don't see the difference. :-)

(Probably worth saying that I have the same issue with PS; it's not just DVI. I do sometimes wonder if I'm the only one who sees these.)


Typically with PDF you use vector fonts: the font designer creates the shapes of the letters, which are encoded as outlines. Your PDF viewer takes these outlines, and given your chosen zoom level and monitor resolution (which together determine how many pixels are available for displaying the letter "e", say), it rasterizes (converts to pixels) on-the-fly. In doing this (for low-resolution devices like computer monitors) it is further aided by “font hinting” which specifies how to hammer the font shape into the available pixel grid.

On the other hand, given Knuth's goals of not having to re-do all the work when the technology changes, METAFONT is an “end-to-end” solution: starting with the font design (at a higher level than the outlines that are ultimately shipped for vector fonts), it goes all the way to generate the actual pixels for a given resolution (like 600 dpi or 2400 dpi), resulting in bitmap fonts (pre-rasterized) rather than vector fonts (rasterized on-the-fly by PDF viewer).

There are a few consequences of these differences, for on-screen viewing:

• Computer monitors are very low-resolution devices (even the most high-end “retina display”, “4K” or whatever) compared to print. There are several tricks required to make text look good on screen (see e.g. the chapters at rastertragedy.com for details): things like anti-aliasing/grayscaling (using pixels that are neither black nor white) and subpixel-rendering (using individual RGB subpixels that make up each screen pixel).

• There's a conflict between thinking of on-screen viewing as serving to approximate to what the printed page will look like, versus allowing deformations or even re-positioning characters from where they'd be printed, to simply look better on screen.

But all of this has nothing to do with the DVI format, ultimately (which is just an efficient encoding of the page layout, i.e. what characters from what fonts go where on the page). It's just that (1) many DVI viewers tend to use bitmap fonts for historical reasons, resulting in poor on-screen viewing, (2) some DVI viewers use their own libraries for rendering instead of using the best font-rendering libraries available on the system.

• Even if using a bitmap font, the DVI viewer has a choice: (1) it can take the bitmaps that were generated for a certain resolution (typically for print, so something like 600 dpi which is probably higher than your screen resolution), and try to render those bitmaps at the number of pixels available (with the typical problems that result), or (2) it can try to ask METAFONT to run again to generate the fonts at whatever low resolution is appropriate for your monitor (but still without anti-aliasing and subpixel rendering, as MF was designed for higher-resolution devices and doesn't bother with all that). DVI viewers seem to always take the first option, probably to prevent the user's disk from filling up with lots of bitmap fonts for every possible zoom level. So the font rendering ends up sub-par. If you print it on a high-resolution device the output will be fine.

• For screen display, the DVI viewer could choose to look up and use the corresponding vector font.

• Finally, the DVI viewer could just convert to PDF (with dvipdfm or whatever) and have the same on-screen experience as with something that was originally a PDF file. In fact, on my current macOS system, the two DVI-viewing options I have (Skim and TeXShop's viewer) both seem to do exactly that internally, and the output is excellent.


There’s no context given, so it’s hard to tell why you’re ascribing the problem to dvi, rather than it being used sub-optimally. What’s the full tool chain being used? What’s the output device?

Given that dvi doesn’t involve pixels, and lets you position any character anywhere on the page, with precisely known rules for rounding into resolution-specific device-space, you’ll have to be more specific about what you’re blaming dvi itself for.


Happy to provide context if you tell me what to provide...

The "output device" is... my monitor?

The toolchain is TeXLive for generating the files, and the usual viewer for each file type on Windows (Acrobat Reader for PDFs, and Evince for DVI). If you think it's Evince's fault I'd love to hear better alternatives, because I haven't found a single viewer that views DVIs any differently. And for input files, you can generate files via LaTeX pretty easily:

  % DVI: latex    thisfile.tex
  % PDF: pdflatex thisfile.tex
  \documentclass{article}
  \usepackage{lipsum}
  \begin{document}\lipsum\end{document}
If this is using it "sub-optimally" then I guess I don't know how to use it "optimally", and I'm happy to hear how.

Remember, though, at the end of the day, I'm just an end-user. I just know that every time I try to view DVI and PS files I have to tear my eyes out, and that I don't have this struggle with PDFs. I neither know which particular person or place in the pipeline to assign the blame to, nor does knowing that make it any easier for me to read the text...


Presumably Evince could do a better job of it then.


What viewer would you recommend then? Would you mind posting a screenshot coming from the optimal viewer you have in mind? Like I said, I haven't found any viewer that does a better job.


I’m not current, so I don’t know if anyone has bothered to do a dvi viewer optimized for today’s display technology. Given the billions of dollars invested in the pdf ecosystem, though, it’s a reasonable place to live.


> I’m not current, so I don’t know if anyone has bothered to do a dvi viewer optimized for today’s display technology.

Oh, if that's the problem, then please just point to a better viewer for yesterday's display technology. Or even a decade ago's. I'll find you an older monitor from whatever era you had a good viewing experience on and try it on that. Because I'm one hundred percent sure an older display technology is not going to make it look better. You can see that someone above pointed out it's looked awful since 2003. I can vouch that it's consistently been awful since over a decade ago, and PDFs have consistently been fine... on every kind of display and resolution I've tried. I've absolutely never, ever had a good experience viewing DVIs.


1982 DataDisc displays that Knuth developed everything on? Sorry you’re unhappy, but given that dvi is literally a dump of TeX’s internal results on layout positioning, it contains all the information that any other system could possibly use. Perhaps your concerns have more to do with font rendering?


Indeed -- I think it's clear dataflow's main issue is with the font rendering. The image shows subpixel antialiasing on the PDF version, and not on the DVI, so naturally they look quite different.

But that's nothing to do with DVI itself; it's entirely the responsibility of the renderer.


I find these responses baffling. I'm just an end-user. All I see is that every time I get a DVI file, I want to tear my eyes out, no matter where or when I open it. First my assessment gets questioned, then when I spend time installing software and compiling an example just to demonstrate the concrete problem upon request -- which I have no reason to believe was novel or previously unknown in any way -- I'm promptly shut down and told to respect the file format and instead blame all the viewers in existence. Great -- so what was/am I supposed to do with this information? Are my eyes supposed to see the file clearly now that the blame got assigned somewhere? Or am I supposed to write my own DVI viewer tomorrow afternoon? How is this intended to be helpful?


It's not helpful if you're expecting to be handed polished products that do what you want. It is helpful if you want to understand what's going on. I think it would be interesting to get a high quality DVI viewer based on modern graphics tech, but of course such a thing will take time and effort. I plan to meet with Dr. Fuchs in the next few weeks to talk about this and related issues.


It’s not helpful if what you’re interested in is a DVI viewer that gives just as smooth a user experience as, say, Preview, Adobe Reader, or Foxit Reader. Because that doesn’t currently exist. But it’s useful if you want to understand why the DVI format was invented and why there’s a difference between viewing DVI and PDF files.


It’s not clear at all what the problem is. Imgur renders both of your images with very high compression on my devices so I can’t spot any difference.

Secondly, can you please define ‘tear my eyes out’? You’ve not provided any attempt at a technical description of what it is you’re experiencing. How is anyone supposed to help you when you don’t say what the problem is?


Hm. Looks like the PDF sample has some light (auto)hinting and a bit of linear alpha blending and gamma correction applied (compare diagonal stems, e.g. on the "A"), while the DVI sample doesn't. I suppose this is less about DVI vs. PDF and more about Adobe doing text rendering properly, while Evince is probably using Cairo, which doesn't.


The DVI one looks a lot better to me. The PDF has some chromatic aberration-like effect on the letters that's really disgusting.


Both images appear heavily compressed to the point of being useless for comparison. The parent is not doing a very good job of explaining what the problem is.


looks like it's a mobile imgur thing, the images are less compressed on the desktop version


My experience with DVI must have been from 2003, and at that time, the viewer simply wasn't good. It was very clunky to move in the page or between pages, and there was only a magnifying glass zoom, no options to comfortably change page-wide zoom levels.

I also remember the horrible font rendering mentioned elsewhere in the thread, but that would go away when printed or converted to PDF/PS, so I always assumed it was a bug in the viewer.

That said, I always admired how small .dvi files were in comparison to PDF, and that they loaded much faster. Kudos!


Is your experience today any different compared to 2003? My screenshots were from today and I don't see any difference.


I haven't used LaTeX in a while (mostly writing in Markdown these days), so I haven't tried it recently.

I have some .dvi files lying around from 2009, but trying to open them with evince results in

    kpathsea: Running mktexpk --mfmode / --bdpi 600 --mag 1+0/600 --dpi 600 aebx12
    mktexpk: No such file or directory

and 'apt search mktexpk' doesn't find anything.

So, can't open DVIs anymore, it seems :(


I think it's part of texlive-binaries (apt-file search mktexpk).


One of the primary advantages of the DVI format is that it is very simple: you can write a parser for it in an afternoon. It is basically a sequence of bytecode instructions of the form “move to position (h, v)” and “place the glyph at position p in font f”. This makes it easy to make minor edits and things like that.


But isn't PDF basically the same for simple text? At least once you use a tool to simplify your PDF by removing compression, object streams and whatnot.

Moving to position is just `123 456 Td`. Placing a string at current position is just `(ABC) Tj`. Placing each glyph at different positions is just `[12 (A) 34 (B) 56 (C)] TJ`. I can do minor edits in PDF using just vim and qpdf.


> At least once you use a tool to simplify your PDF by removing compression, object streams and whatnot.

Can you please write a few details about that workflow? What do you do, and how? I used to create .ps files "by hand" (actually writing scripts), but I've never tried to start from a PDF and then edit it. How do you "remove compression, object streams and whatnot"? And some useful references (for those Tj, TJ, etc.) would be appreciated.


Not OP, but I personally use QPDF.

> QPDF is capable of creating linearized (also known as web-optimized) files and encrypted files. It is also capable of converting PDF files with object streams (also known as compressed objects) to files with no compressed objects or to generate object streams from files that don't have them (or even those that already do). QPDF also supports a special mode designed to allow you to edit the content of PDF files in a text editor. For more details, please see the documentation links below.

$ qpdf --stream-data=uncompress input.pdf output.pdf

http://qpdf.sourceforge.net/


I use QPDF too. Specifically its QDF mode. Essentially it tries to make the document as human-editable as possible. There's also a fix-qdf utility that fixes the values of offsets at the end of the file, so that you need not be careful not to change file size while editing.

The reference is, well, Adobe's official PDF reference, downloadable from its website.


Oh that's cool! Thanks for sharing that!


It is much simpler, and many kinds of post-processing can be done on it more easily too; you can also compile the fonts per printer as needed (DVI doesn't actually care about the font glyphs at all, and doesn't even require that what gets placed is a font glyph in the usual sense). If you have a printer that implements it, you could even do such things as cutting holes in pages, printing raised or engraved text, invisible ink, etc., without causing problems with software that does not support these things.


PDF too supports external font files, a multitude of font formats, and special ink layers (these are called “separations”).

PDF also supports PostScript fonts, which are already compiled for your printer.


I didn't know that, but OK, so it does (although PostScript fonts won't be so good if you want to use bitmap fonts and stuff like that instead (I use METAFONT to compile bitmap fonts for the printer, and then use my own software to rasterize the page, by copying the generated font glyphs into the positions specified by the DVI file)).

Still, DVI is simpler for many uses. I think PDF is too complicated.


PDF also supports bitmap fonts. It’s certainly complicated though!


And whilst this is shameless self-promotion, if you want to try out LaTeX without installing it yourself, please check out https://www.overleaf.com (I'm one of the founders)


Virtually everyone at school used Overleaf. Impressive bit of kit, particularly since the version 2 update. I was fond of how v1 kept everything in a git repo though.


I liked that about v1 as well, but it seems like they have brought it over to v2 [0]. Trying it on a v2 project it only seems to have one commit for everything up until the first clone?

[0]: https://www.overleaf.com/blog/bringing-the-git-bridge-to-v2-...


I'll come in and say that Overleaf has been invaluable when you're working in teams and need real-time editing and/or your team members don't know how to/refuse to use Git (cough some academic settings cough)


I prefer git for the bulk of the writing process because it lets you make well structured commits, use your favorite editor, and doesn't depend on having a strong internet connection.

Even so, you can't beat Overleaf for the final stages of paper submission when all the authors are making tiny typo fixes and style edits in all parts of the document concurrently.


Thank you for an absolutely awesome piece of software; my senior capstone in undergrad would have been so much worse without Overleaf.


Thanks for all the lovely comments about Overleaf! I'd somehow missed them, and reading them all together now is a very nice way to end the day!

We're continuing to build on and improve the platform now that the ShareLaTeX integration is pretty much complete, so look out for more updates soon, and thanks again for all your support.

PS: Please do continue to tell all your friends nice things about Overleaf :)


Thank you so much for Overleaf! It’s been such a great tool for multi-author papers, but even for writing on your own it’s better than standalone LaTeX. Unlike typical LaTeX, which would just refuse to compile if you forget to close a brace, Overleaf/ShareLaTeX will make a best effort and just mark the part of the source where you made a mistake.


Overleaf got me through every math/CS course in my undergrad. Thank you for your amazing work.


Congrats on Overleaf. My colleagues and I use it every day and we love it!


Please do more shameless promotion; LaTeX is amazing and Overleaf makes it very accessible.


For really mind-blowing possibilities, try org-mode[1] under emacs, along with org-babel[2] and LaTeX export.

You can pretty much write and execute your technical paper and your source code simultaneously in arbitrary languages.

I recommend the spacemacs[3] distribution, as the vi interface gets it right.

[1] https://orgmode.org/manual/index.html#Top

[2] https://orgmode.org/manual/Working-with-Source-Code.html#Wor...

[3] http://spacemacs.org/


Equally, AUCTeX in Emacs for direct editing of LaTeX (best TeX editor there is).

(For more complex documents, I've had better luck manipulating TeX directly rather than via another layer, e.g. Org mode, which is fantastic for other things.)


Agreed. Writing with AUCTeX plus helm-bibtex is like a dream. I like org-mode, but I find it a bit too limiting for actually writing papers in.


Would you mind elaborating what is so great about AUCTeX? What makes it the best LaTeX editor?


This quote is amusing:

"there’s even an open archive maintained by Cornell University where authors of papers in physics, chemistry, and other disciplines can directly submit their LaTeX manuscripts for open viewing."

for two reasons:

1) it sounds like 'there's even a church at the Vatican', seeing that arXiv.org is the biggest-ever collection of scientific articles submitted by their authors

2) and there's no chemistry section on arXiv.org; about half of it is mathematics, plus some computer science


Just a curious question. I love LaTeX. Can we build a programming language for the web which just focuses on the writing and takes care of the styling for web, mobile, etc.? I mean, Markdown is already there doing something similar, but it still needs to be set up properly and styled to use it correctly. Can a LaTeX-type tool be developed for the web where the user writes what they want to write and generates a static web book, blog post, or article conforming to the same style?

Apologies if this is stupid question.


There are thousands of tools like this, called "static site generators". Most include a barebones default theme like you describe.

There are so many because a few years ago they were a big hype and everybody and their cat built one.



These days you can just publish a PDF and every browser will display it properly.


Just curious with regard to presentations: Does any known tech corp use beamer for presentations?


I've seen people from IBM giving talks using Beamer.


I know there's a small but vocal group in tech that openly attacks LaTeX, e.g. "LaTeX considered harmful". While I don't completely agree with them, I think there are some legitimate and genuine criticisms there.

Personally, I find LaTeX great due to its native programmability: it facilitates a natural separation of content and presentation, and combined with its markup language it can be very powerful. It also enables extensibility, with community packages for almost every single type of content, e.g. organic chemistry, Feynman diagrams, even music. A typical user only needs to \usepackage and stop worrying. Its CTAN ecosystem is just like that of a programming language such as Perl, or more recently Node.js or Go, which is great.

But I find all hell breaks loose when you want to get a slightly different formatting than what's offered. Then suddenly the entire system becomes something you need to fight against. Previously, you could be a happy \usepackage code monkey, but now you need to know the system inside-out and hack a path ahead. It's just like when you use a software library in a slightly different way than the author expected and suddenly find yourself in a battle with the entire library; unfortunately, the same thing occurs in typesetting...

For example, with LaTeX you can add footnotes essentially anywhere with guaranteed aesthetics. UNLESS you want to add a footnote to your title; then it turns out the existing infrastructure in the "article" template doesn't allow it at all, and you need to define and redefine and undefine some internal macros in your document to implement it, as I learned from https://tex.stackexchange.com/

And all the separation of content and presentation and its benefits ends at this point. It's no longer "what you think is what you get".

The same issue also occurs when you are trying to make a Beamer slideshow; most "environments" in LaTeX are designed for papers, not slides. For example, when I want to put some images on a slide, often not fitting in a "regular" geometric position, I find I have to keep hacking the width and position of the images and keep compiling until the result is acceptable. I don't know if there are better packages for typesetting slideshows; recommendations are welcome.
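
(Concretely, I end up with hand-tuned blocks like this, nudging the width factor and recompiling until the image sits roughly where I want it; just a sketch, and the file name is a placeholder:)

    \begin{frame}{Results}
      \begin{columns}[T]
        \begin{column}{0.55\textwidth}
          \begin{itemize}
            \item a bullet or two of text
          \end{itemize}
        \end{column}
        \begin{column}{0.42\textwidth}
          % keep tweaking 0.9 until it looks acceptable...
          \includegraphics[width=0.9\linewidth]{figure.pdf}
        \end{column}
      \end{columns}
    \end{frame}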

Another problem I've noticed is the phenomenon of confusing and outdated packages. Often there is more than one package for a specific task: some are old and limited but still used in many documents, others are new, and the rest are competing implementations. The old ones are frequently mentioned in old guides; they work for a while until you hit a corner case, and then it takes a few attempts before you move to the newer packages. Again, just like programming languages. A recent trend in programming is writing new, cleaner implementations of basic tasks; I don't know if the same thing is happening in the LaTeX community, but if it is, I think it would be great.

On the other hand, I've yet to see a word processor which allows users to extend it and automate formatting and typesetting without some ugly macros hacked together in VBScript. So I see LaTeX as a valuable tool, and I'll keep using it for the foreseeable future.

The final words, having https://tex.stackexchange.com/ is a great contribution to the LaTeX community, just like how https://stackoverflow.com/ helps for programming.


>But I find all hell breaks loose when you want to get a slightly different formatting than what's offered. Then suddenly the entire system becomes something you need to fight against.

LaTeX was developed with the goal of freeing the user (a researcher, typically) from wasting time on layout, so they can focus on content (scientific research, usually).

Plain TeX is much more flexible, but you may argue that it is much lower level (or is it?).

For general typesetting, I can’t recommend ConTeXt enough. Its philosophy is nearer to TeX than LaTeX, and it gives you full control over layout, too. It’s a much smaller package than the full TeX/LaTeX ecosystem. And it’s scriptable with Lua!


> For general typesetting, I can’t recommend ConTeXt enough.

Thanks. I'm currently using XeTeX/XeLaTeX due to its newer codebase, native Unicode support, OpenType fonts, PDF output, etc. But perhaps it is time to try ConTeXt. I was previously struggling with Lua in the Awesome window manager, as I found the syntax weird (coming from a Python background), but now I think I seriously need to pick up a proper textbook and learn Lua; the Lua engine is embedded in everything, and learning it opens a new world.


I'd wager that you could make the same arguments against any other language and library/framework. What you describe sounds like a universal software development lament, not something specific to TeX/LaTeX. I suspect the underlying problems are essential, not merely incidental to TeX/LaTeX, and that any other language and library would land in the same swamp.


Good point! I just realized all my arguments still hold when the word "LaTeX" is replaced with "$programming_language"...


> But I find all hell breaks loose when you want to get a slightly different formatting than what's offered. Then suddenly the entire system becomes something you need to fight against.

That's why for anything custom, I prefer to use plain TeX, which is totally fine.

The TeXbook is a great tutorial on TeX; it's up there among the great computer science books, in the same league as "The C Programming Language".

I think people seem to be scared of plain TeX, but it's totally approachable and easy to use.
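
To illustrate, a complete plain TeX document really is this small (run it through tex or pdftex; the content is just filler):

    % hello.tex -- compile with "pdftex hello.tex"
    \centerline{\bf A minimal plain TeX document}
    \medskip
    \noindent Inline math works as usual, $e^{i\pi}+1=0$, and so do displays:
    $$\int_0^\infty e^{-x^2}\,dx={\sqrt\pi\over 2}.$$
    \bye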


I'm a little sad that there still doesn't seem to be a "LaTeX for the web": a document markup system with a philosophy similar to LaTeX's, but aimed at HTML output. LaTeX is basically only suited to static PDF output, and it survives only because there don't seem to be tools that do an equally good job of generating HTML documents.

There are a lot of special-purpose markup languages, like Markdown and AsciiDoc. They rely on toolchains written in other languages to convert them to HTML, and if you want to add features, you have to hack the toolchain.

For blog posts and simple websites, that's fine. But sometimes I want to build something nontrivial -- as an academic, the first thing that comes to mind is an academic paper, where I might want to have special markup for theorems, proofs, examples, figures, and tables, and have ways to automatically cross-reference them, generate tables of contents, embed automatically formatted bibliographies, and so on. Or generate figures from code (like TiKZ), or embed data analysis code right in the source and embed the results (via knitr or Sweave).

With pandoc, bookdown, and knitr, you can get pretty close to this. But what made LaTeX so powerful is that it is programmable in LaTeX. It can be extended: you can define new types of environments (example problems! homework exercises! every example shows equivalent source code in three different languages! category diagrams generated automatically! musical notation is typeset!), you can make new commands to automate drudgery (typesetting chemical equations! building complicated equations!), and you can do it all with at least some basic separation between content and presentation style. Converting the larger LaTeX documents I've made to Markdown or Org would be basically impossible without writing a bunch of scripts to extend the Markdown renderer and hack everything together. There's no equivalent of just writing a LaTeX package.
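
For instance, the kind of extension I mean is often just a few preamble lines. A toy sketch (the environment name, the shorthand, and the text are all made up for illustration):

    % A toy "exercise" environment plus a cross-referencing shorthand.
    \newcounter{exercise}
    \newenvironment{exercise}[1]{%
      \refstepcounter{exercise}%
      \par\medskip\noindent\textbf{Exercise \theexercise\ (#1).} \itshape
    }{\par\medskip}
    \newcommand{\exref}[1]{Exercise~\ref{#1}}

    \begin{exercise}{Warm-up}\label{ex:warmup}
      Show that the sum of two even numbers is even.
    \end{exercise}

    As \exref{ex:warmup} suggests, ...

The content ("this is an exercise") stays separate from the presentation (bold heading, italic body), and numbering and cross-references come for free.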

I'm not aware of another document preparation system that comes close to LaTeX. Org mode is probably the closest, since it has tools to embed code blocks in other languages and include their output, but after a week of fighting with conversions between markup languages I can really see the appeal of LaTeX's uniform syntax and built-in programmability.

The other promising option seems to be Pollen[0], a Racket-based programmable system. But it seems more like TeX than LaTeX: it provides the very basic tools for building a programmable document system, but not the higher-level conveniences (like cross-referencing commands and standard sectioning and environment commands), which you would have to build yourself. Maybe someday...

[0] https://docs.racket-lang.org/pollen/


Have you looked at Scribble? It's very programmable, and evaluated similarly to TeX or LaTeX, but in a more powerful and straightforward language (Racket).

It's used by academics to write conference papers, it's used for most of the Racket books for both Web and camera-ready typesetting for print, and I've even used it for embedded API docs.

https://docs.racket-lang.org/scribble/

(That manual itself was written in Scribble.)

(I'm actually thinking of moving from Scribble to Markdown for embedded API docs, as part of an open source ecosystem goal, to make docs for reusable modules more lightweight to add, but it's hard to give up some things about Scribble. For example, for one module, I needed substantial documentation about each opcode, and wanted it to be formatted similarly to an API function, but with different meta properties, and some tricky formatting. Scribble let me make a simple semantic form for that, separate from formatting, like LaTeX would.)


That looks nice, but it’s no match for LaTeX in the academic publishing world, simply because virtually all technical conferences and journals only provide Word and LaTeX templates.


Scribble can export to LaTeX and includes templates for a couple different CS conferences, actually, though probably one would have to extend it a bit to use it for other venues.


You should take a look at PreTeXt. https://pretextbook.org/


I actually was just looking at it a few days ago. Interesting project, and looks designed exactly to produce academic books and texts. But it's not programmable in the TeX sense, where you can extend it to do new things within the language -- it's just that PreTeXt's base features include most of the things I want.

I'm trying to resist writing another book, since I know how much time it would suck up, but if I do, I'll probably look at PreTeXt and maybe Scribble as options.


Here is an interesting attempt to bring LaTeX to the web: a JavaScript script that converts LaTeX markup into MathML (which I'm unfamiliar with).

https://www.maths.nottingham.ac.uk/plp/pmadw/lm.html

There is some exploration on the site itself that is worth looking through if anyone plans to attempt serving LaTeX markup that gets interpreted on the fly.

The Unix philosophy in me thinks "use lynx; oh, and forget HTTPS, serve it over Gopher; when you hit a .tex, switch to a program that's meant to handle it", etc., but I definitely think the internet at large (i.e., not just hackers) would benefit from *TeX markup being as common as HTML (perhaps not AS common, but more common than it is now).


It seems that the one-size-fits-all tool is a tough row to hoe.

Part of the reason that LaTeX works at all is that it targets, as you note, (essentially) .pdf.

Packing so much precision into a web/OS tool and making it performant seems beyond the reach of any real-world business case, as far as I can tell.


Note for any college students (especially undergrad): Do your STEM presentations in Beamer to get extra brownie points.


Just curious with regard to presentations: does any well-known tech corp use Beamer?


Maybe I dreamed it, but I could swear I read that Gödel, Escher, Bach: An Eternal Golden Braid was typeset by Hofstadter in TeX. That alone would be fairly amazing, but what struck me as most noteworthy was that he had supposedly first read (uploaded to his brain) all the TeX documentation before writing a single word, and then let the experience of using it call the relevant documentation forth from memory.


While METAFONT, and to a lesser extent, TeX, were to become frequently featured characters in Hofstadter's Metamagical Themas days, GEB was pre-TeX. It was created using Pentti Kanerva's TV-Edit at Stanford (with the data on punched paper tape). The preface to the 20th anniversary edition of GEB tells the tale.


Ah. Thank you. Well, I guess I did dream it.


What you might be (mis-)remembering is Eric S. Raymond's description of the production of the printed edition of The New Hacker's Dictionary:

http://catb.org/jargon/html/personality.html


You are absolutely correct! That is precisely what I mis-remembe... er, mis-attributed.


Any good tutorials for LaTeX?


There are tons of resources at https://www.tug.org/texlive/. If you download it there are many included.


Also the article itself has some pointers.


While I was big into LaTeX at university, and even wrote my thesis with it using the MiKTeX distribution, nowadays I'd rather use the likes of Word, FrameMaker, or DocBook- and DITA-based tooling.


It's been quite a while since I last saw a new project opt to use DocBook. Is it still popular?


I used it on a project three years ago.

The whole help system was based on it, generating Word, PDF, and HTML deliverables.

It is relatively popular at the enterprise level, where the licensing costs for such tools are peanuts vs the overall project costs.


Neat! Would you mind listing some of those toolchain vendors? I think about the pros and cons of various publishing systems from time to time and would love some new input.


FrameMaker, OxygenXML, RoboHelp, XMetaL, XMLmind, Arbortext.


Oh, lots of stuff to look up. Thank you!


If we want to get remotely intelligent machines, LaTeX needs to die.

It's a visual description built on a Turing-complete language; 95% of the semantic meaning is lost once it's poured into a TeX document, and the last 5% is gone once it's compiled into a mess of PDF text boxes that ignore text flow and merely look correct.

Literally any other document description language would be a better lingua franca than LaTeX, because a language should be used to communicate, and TeX is meant to print curves.


The use of LaTeX does not, and should not, prevent the advancement of machine intelligence, any more than mathematicians' stubborn use of blackboards and chalk as their primary communication devices does. When machines get intelligent enough, they'll figure out LaTeX and more.

Keep in mind that one goal of TeX was to make it fairly easy for a human to write down. Another goal was for another human to be able to understand that source quite well. Now picture both tasks with semantics-rich MathML.
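
For a concrete (if somewhat unfair) comparison, here is the same tiny formula as a TeX author writes it and as hand-written Presentation MathML encodes it (real converter output is typically even more verbose):

    % TeX source:
    $x^2 + y^2 = z^2$

    <!-- Roughly equivalent Presentation MathML: -->
    <math xmlns="http://www.w3.org/1998/Math/MathML">
      <msup><mi>x</mi><mn>2</mn></msup><mo>+</mo>
      <msup><mi>y</mi><mn>2</mn></msup><mo>=</mo>
      <msup><mi>z</mi><mn>2</mn></msup>
    </math>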



