
The lingua franca of LaTeX - JohnHammersley
https://increment.com/open-source/the-lingua-franca-of-latex/
======
svat
This is a good article! A couple of corrections to the early history in the
introductory paragraphs:

• It is indeed true that the first edition of Volumes 1–3 of _The Art of
Computer Programming_ and the second edition of Volume 1 (1968, 1969, 1973,
1973) had been done with hot-metal typesetting (pieces of lead, laid out to
form the page) with a human doing the typesetting/compositing (with a Monotype
machine). And that for the second edition of Volume 2 in 1977, the publisher
Addison-Wesley was switching to a cheaper alternative. But the cheaper
alternative was not electronic; it was _phototypesetting_ (the letterforms
made with light and lenses instead of metal). It was Knuth who noticed a book
that _had_ been electronically typeset (Patrick Winston's AI book), realized
that electronic typesetting machines actually had enough resolution to print
like “real books” (unlike typewriters or printers of the time), and decided to
solve his problem for himself (as he felt that he could make shapes with 0s
and 1s).

• The summer of 1977, Knuth spent in China, thinking he'd specified his
program in enough detail and his two students would be able to complete the
program by the time he came back. He came back and saw they had only
implemented a small subset, and decided to spend his sabbatical year writing
the program himself. He mentions that as he started writing it he realized the
students' proto-TeX was an impressive effort, because there was a lot missing
from his specifications. Knuth's article _The Errors of TeX_ (reprinted in his
collection _Literate Programming_ ) goes into excellent detail on the
development process.

Also: a suggestion to try plain TeX (instead of LaTeX), if you haven't tried
it. The book _A Beginner's Book of TeX_ by Seroul and Levy is especially
good. It might surprise you.

~~~
acqq
> It was Knuth who noticed a book that had been electronically typeset
> (Patrick Winston's AI book), realized that electronic typesetting machines
> actually had enough resolution to print like “real books”

Searching for what the actual resolutions were, I saw in your older comment
(1) that you mention "Alphatype CRS" and "more than 5000 DPI."

If you mention 1977, and there were already 5000+ DPI machines then, I'd
really like to read a little more about that: it's impressively more than what
the "laser printers" available to mortals could offer. How did these
high-resolution machines work? How did they achieve that resolution
physically, and how did they manage to process so many bits at a time when the
RAM of microcomputers was counted in kilobytes?

1)
[https://news.ycombinator.com/item?id=17917367](https://news.ycombinator.com/item?id=17917367)

P.S. Digging deeper: I've just found the reason the text produced with TeX
looks "too thin" today: [https://tex.stackexchange.com/questions/48369/are-
the-origin...](https://tex.stackexchange.com/questions/48369/are-the-original-
cm-fonts-better-than-the-current-type1-fonts/361722#361722) and linked there:
[https://tug.org/TUGboat/tb37-3/tb117ruckert.pdf](https://tug.org/TUGboat/tb37-3/tb117ruckert.pdf)

~~~
drfuchs
The Alphatype CRS ("Cathode Ray Setter") was nominally 5333dpi, but it's a bit
of a fudge. As the person who wrote the DVI interface for it, and personally
ran the entire Art of Computer Programming, Vol. 2, 2nd Edition through it,
let me explain.

You are very correct that in the early 1980's, even 64k of RAM was quite
expensive, and enough memory to handle a complete frame buffer at even 1000dpi
would be prohibitive. The Alphatype dealt with this by representing fonts in
an outline format handled directly by special hardware. In particular, it had
an S100 backplane (typical for microcomputers in the day) into which was
plugged a CPU card (an 8088, I think), a RAM card (64k or less), and four
special-purpose cards, each of which knew how to take a character outline,
trace through it, and generate a single vertical slice of bits of the
character's bitmap.

A bit more about the physical machine, to understand how things fit together:
It was about the size and shape of a large clothes washer. Inside, on the
bottom, was a CRT sitting on its back side, facing up. There was a mirror and
lens mounted above it, on a gimbal system that could move it left/right and
up/down via stepper motors (kind of like modern Coke machines that pick a
bottle from the row/column you select and bring it to the dispenser area).
And, at the back, there was a slot in which you'd place a big sheet of photo
paper (maybe 3ft by 3ft) that would hang vertically.

OK, we're all set to go. With the paper in, the lens gets moved so that it's
focused on the very top left of the paper, and the horizontal stepper motor,
under control of the CPU, starts moving it rightwards. Simultaneously, the CPU
tells the first decoder card to DMA the outline info for the first character
on the page, and to get the first vertical slice ready. When the stepper motor
says it's gotten to the right spot, the CPU tells the decoder card to send its
vertical slice to the CRT, which flashes it, and thus exposes the photo paper.
In the meantime, the CPU has told the second card to get ready with the second
vertical slice, so that there can be a bit of double-buffering, with one slice
ready to flash while the next one is being computed. When the continuously-
moving horizontal stepper gives the word, the second slice is flashed, and so
on. (Why two more outline cards? Well, there might be a kern between
characters that slightly overlaps them (think "VA"), and the whole thing is so
slow we don't want to need a second pass, so actually two cards might flash at
once, one with the last slice or two of the "V" and the other with the first
slice of the "A".)

So, once a line is completed, the vertical stepper motor moves the lens down
the page to the next baseline, and then the second line starts, this time
right-to-left, to double throughput. But therein lies the first fallacy of the
5333dpi resolution: There is enough hysteresis in the worm gear drive that you
don't really know where you are to 1/5333 of an inch. The system relies on the
fact that nobody notices that alternate lines are slightly misaligned
horizontally (which also makes it all the more important that you don't have
to make a second pass to handle overlapping kerned characters; there it might
be noticeable).

Looking closer at the CRT and lens, basically the height of the CRT (~1200
pixels, IIRC) gets reduced onto the photo paper to a maximum font size of ~18pt
(IIRC), or 1/4in, giving a nominal resolution of ~5000dpi on the paper. But
this design means you can't typeset a character that was taller than a certain
size without breaking it into vertical pieces, and setting them on separate
baseline passes. Because of the hysteresis mentioned above, we had to make
sure all split-up characters were only exposed on left-to-right passes, thus
slowing things down. Even then, though, you could see that the pieces still
didn't quite line up, and also suffered from some effects of the lack of
sharpness of the entire optical system. You can actually see this in the
published 2nd edition of Vol 2.

Finishing up, once the sheet was done (six pages fit for Knuth's books, three
across and two down), the system would pause, and the operator would remove
the photo paper, start it through the chemical developer, load another sheet,
and push the button to continue the typesetting.

It's worth noting that the firmware that ran on the 8088 as supplied by
Alphatype was not up to the job of handling dynamically downloaded Metafont
characters, so Knuth re-wrote it from scratch. We're talking 7 simultaneous
levels of interrupt (4 outline cards, 2 stepper motors that you had to
accelerate properly and then keep going at a constant rate, and the RS-232
input coming from the DEC-20 mainframe with its own protocol). In assembly
code. With the only debugging being from a 4x4 keyboard ("0-9 A-F") and a 16
character display. Fun times!

Now, if anybody asks, I can describe the Autologic APS-5 that we replaced it
with for the next volume. Teaser: lower nominal resolution, but much nicer
final images. No microcode required, but we sent it actual bitmaps, slowly but
surely, and we were only able to do it because they accidentally
sent a manual that specified the secret run-length encoding scheme.

~~~
acqq
Thank you so much! The minute you posted this I had been about to write that
I'd discovered your 2007 interview (1), where some of the details gave me an
overall idea of how it was done (though this answer is more detailed, many
thanks!).

And what I was missing there were exactly these "alignment" problems: I
couldn't imagine how the pictures and "big letters" could work at 5000 dpi
when only two letters were formed at once and something had to move
mechanically all the time.

So yes, please share more details, also about the APS-5 and "actual bitmaps"!
And please try to also note (as best you can estimate) the years when you used
these technologies. In which years was the Alphatype CRS used? And the APS-5?

1)
[https://tug.org/interviews/fuchs.html](https://tug.org/interviews/fuchs.html)

~~~
acqq
And just to show you why I care about the years: in the 2007 interview you
said the controller CPU was an 8008, which is an early 8-bit CPU (the
predecessor of the 8080 used in CP/M machines), and now an 8088, which is
16-bit with an 8-bit bus (the one in the first IBM PC). If you aren't sure
about the CPU, but we knew when the device started to sell, maybe we could
rule out the latter.

~~~
drfuchs
Yeah, the naming conventions among the 8008, 8080, 8086, 8088, and 80186 are
enough to make you nuts.

Anyway, let's check the sources! Surfing over to
[https://www.saildart.org/[ALF,DEK]/](https://www.saildart.org/\[ALF,DEK\]/)
and clicking on ALPHA.LST on the left, shows code that looks like it's for an
8080 to me, but I'm rusty on this. The file itself is dated July 1980, but
it's just a listing and not the sources themselves (not sure why).

Knuth starts the "Preface to the Third Edition" of Vol 2 with: "When the
second edition of this book was completed in 1980, it represented the first
major test case for prototype systems of electronic publishing called TeX and
METAFONT. I am now pleased to celebrate the full development of those systems
by returning to the book that inspired and shaped them." Here he's talking
about our very first Alphatype production output, confirming it was 1980.

Note that the CRS wasn't an especially new model when we got ours, so it
wouldn't be too surprising for the CPU to not be the latest and greatest as of
1980, especially as I got the feeling they were pretty price-sensitive
designing it.

By the way, the mention of "fonts came on floppy disks" elsewhere was
generally true back then (and selling font floppies was how the typesetter
manufacturers made some of their income), but we didn't use the floppy disk
drives for Knuth's CM fonts at all. All required METAFONT-generated characters
were sent down along with each print job. And, in fact, there wasn't enough
RAM to hold all the characters for a typical job (remember, each different
point size had different character shapes, like in the old lead type days!) so
the DVI software on the mainframe had to know to mix in reloads of characters
that had been dropped to make room for others, as it was creating output for
each page. It's essentially the off-line paging problem: If you know the
complete future of page accesses, how can you make an optimal choice of which
pages to drop when necessary? That's my one paper with Knuth: "Optimal
prepaging and font caching" TOPLAS Jan 85 (ugly scan of the 1982 tech report
STAN-CS-82-901 at
[https://apps.dtic.mil/dtic/tr/fulltext/u2/a119439.pdf](https://apps.dtic.mil/dtic/tr/fulltext/u2/a119439.pdf)
when it was still called "Optimal font caching"). Actually, the last full
paragraph on page 15 of the latter says that the plan to use the Alphatype CRS
started two years before production happened, meaning that the CRS was
available commercially by 1978.

~~~
acqq
> Surfing over to
> [https://www.saildart.org/[ALF,DEK]/](https://www.saildart.org/\[ALF,DEK\]/)
> and clicking on ALPHA.LST on the left, shows code that looks like it's for
> an 8080 to me

Wow, thanks for that! Yes, it is surely 8080 code (at least, 8080 mnemonics,
which are definitely different from the 8088's; the 8008 initially used other
mnemonics than those in that LST, and although the 8080 mnemonics were later
applied to the 8008 too, the 8008 needed more supporting hardware, so it
should be an 8080 in the CRS).

Also thanks for STAN-CS-82-901. Now the story of programming the CRS is quite
clear. And even as a poor scan, it can be compared to
doi.org/10.1145/2363.2367

Did I understand correctly that, for the CRS, what was uploaded was never
bitmap fonts but always the "curves" of the letters? I suppose 100 bitmap
images at a resolution of 1024x768 would have been too much for the whole
setup, even just as a cache?

And... I'm not surprised that Knuth managed to develop that firmware, now that
I've discovered this story ("The Summer Of 1960 (Time Spent with don knuth)"):

[https://news.ycombinator.com/item?id=2856567](https://news.ycombinator.com/item?id=2856567)

As you can see, I'm also really interested in everything you can tell about
the Autologic APS-5, both how it worked and how it was used in the TeX context!

------
jedberg
I used LaTeX in college to typeset all my essays. I also used it in my
creative writing class, and people were amazed that I was able to add line
numbers to my work so we could easily discuss it by referring to them! I'm
pretty sure I got better grades in all my writing-based classes solely because
I used LaTeX to typeset my work.

The last time I wrote a resume[0] I used LaTeX to do it too, and I provide the
LaTeX source[1] on my website. I've seen bits of it show up in other people's
resumes, which is exactly what I want to happen! But what was funny was how
often people would say "boy, this looks so clean and professional!". Pretty
sure I got some interviews just because of how "pretty" my resume was.

[0]
[https://www.jedberg.net/Jeremy_Edberg_Resume.pdf](https://www.jedberg.net/Jeremy_Edberg_Resume.pdf)

[1]
[https://www.jedberg.net/Jeremy_Edberg_Resume.tex](https://www.jedberg.net/Jeremy_Edberg_Resume.tex)

~~~
EliRivers
_Pretty sure I got some interviews just because of how "pretty" my resume
was._

Definitely. I've had interviewers start by thanking me for my CV being well-
formatted and also short.

Ha. I've just noticed that, as I type this, I have literally in front of me on
my desk Eijkhout's "TeX by Topic", Lamport's LaTeX book, and "The LaTeX
Companion" by Goossens et al. This must be where I do most of my pretty
document making.

------
ggm
As a t/roff and eqn and tbl user I never found the story compelling. The
uplift cost to LaTeX in 1982 was north of my pain point, and in 1990 I managed
to do things in tbl and eqn I never imagined possible. TeX font fascism was
also a bummer. A phototypesetting expert I knew preferred the actual type to
digital type; the microfilm-typeset stuff was amazing (I saw galley proofs on
bromide and they were crisp and clean).

I think I "get" it better now. It's just at that time there was another school
of more pragmatic engineering aligned thought in Unix using Dec10 RUNOFF
derived concepts which morphd into t/roff and stuck there.

Maybe it's like Emacs and vi? The DEC-10 SOS editor was like ed, which led to
ex and vi. If you walked in through the TECO door, you went Lisp and Emacs.

To launch into TeX and LaTeX you needed a professor leading you there, rather
than an engineer doing nroff and t/roff.

~~~
pjc50
Interesting to hear from the pre-TeX world. I'd like to see some examples of
expert use of tbl and eqn; outside of manpages the roff ecosystem seems to
have vanished.

~~~
fanf2
The BSDs ship with a lot of the original non-man-page Unix documentation: the
system manager’s manual, the programmer’s supplementary documentation, and the
user’s supplementary documentation. These are written to be typeset with
troff, not printed on a teletype like the man pages can be, so they tend to be
a bit more refined.
[https://svnweb.freebsd.org/base/head/share/doc/](https://svnweb.freebsd.org/base/head/share/doc/)
They are missing some chapters (e.g. eqn and tbl) due to copyright disputes, but
you can find them in TUHS archives [https://minnie.tuhs.org/cgi-
bin/utree.pl?file=V7/usr/doc](https://minnie.tuhs.org/cgi-
bin/utree.pl?file=V7/usr/doc)

------
btrettel
Random LaTeX question: Does anyone have any tips for writing LaTeX code to
avoid typos or other mistakes? I.e., using the fact that LaTeX is a
programming language and not just a markup language to catch mistakes?

I started adding what basically are assertions to my TeX files. These have
caught a fair number of errors. I also periodically run "static analysis"
scripts to catch writing and coding errors, but these tend to have many false
positives in my experience. E.g., chktex and these:
[http://matt.might.net/articles/shell-scripts-for-passive-
voi...](http://matt.might.net/articles/shell-scripts-for-passive-voice-weasel-
words-duplicates/)
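
For concreteness, a minimal sketch of the kind of assertion I mean (the macro
name and the specific check are made up, not something from a package):

        % abort the run if a counter doesn't have the value we expect
        \newcommand{\assertcounter}[2]{%
          \ifnum\value{#1}=#2\relax\else
            \errmessage{Assertion failed: counter #1 is not #2}%
          \fi}
        % e.g. at the end of the document: \assertcounter{section}{8}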

But I'm thinking there may be good coding styles or habits that could prevent
errors too. Any ideas?

~~~
todd8
My observation is that TeX/LaTeX are designed for use by humans, but unlike
other programming, the literal content of the document occupies more of the
source the author writes than the elements used to control the layout (macros,
elemental operators, etc.); consequently, errors like improper nesting of
environments can produce puzzling error messages. An XML-style markup language
would produce easier-to-understand error messages, but I believe that
_TeX/LaTeX makes the proper trade-off by reducing the authoring effort_ to
create content, at the expense of harder-to-debug documents when complex
formatting is being done.

Another important observation is that TeX itself is as free of errors as any
piece of software you are ever likely to use. LaTeX and many packages can
sometimes have quirks because of a bug, but _the underlying engine works the
way it is intended to work, always._

So knowing this, my approach is to _use a system that continuously shows the
typeset output_ as I type, if I can. That way errors are usually obvious and
easy to find. If that's not practical with your favorite editor, just manually
run LaTeX on your source frequently to see how your document is progressing.
The times I've had to do serious puzzling over what was happening were almost
always because I had pages of complex text entered before checking to see the
output.

 _TeX most of the time doesn't care about whitespace._ It makes its own,
excellent, decisions on where to wrap lines according to font size and
margins, etc. So this means that you can just start every sentence on a new
line without worrying about the right margin; TeX doesn't care. This makes
editing and proofing your documents much easier, especially if your editor
just autowraps the contents of long sentences without inserting line breaks:
then it's easy to cut and paste and rearrange entire sentences while writing.
So a three sentence paragraph would look like this in the source:

    
    
        Don't go around saying the world owes you a living.
        The world owes you nothing. 
        It was here first. [1]
    

Rarely, whitespace can matter, for example when defining special environments
and macros for typesetting mathematics or a programming sample. In this case
you can make use of the % (percent character), which starts a comment that
continues to the end of the line, to cut off trailing whitespace. I find this handy
when defining my own commands to keep the LaTeX easy to read by breaking it up
into lines without worrying about extra spaces being inserted. (Don't be
scared by this next example; it's not something that ordinary users would
create. The point is it is possible and can be found when needed on
tex.stackexchange.com. It's very fancy wizard level typesetting. TeX/LaTeX
allows crazy powerful manipulation of text. See [2] to see the results that
this produces--really, take a look.) Note the use of %.

    
    
        \newcommand\overlineset[2]{%
          \stackengine{2pt}{$#1$}{\makebox[\widthof{$#1$}]{%
              $\scriptscriptstyle\hrulefill\,#2\,\hrulefill$}}%
                  {O}{c}{F}{T}{S}%
        }
    

 _Don't fuss with margins, heading font, caption location for figures, and so
forth while creating your masterpiece._ Get the content down while making sure
that LaTeX is formatting it without error. Afterwards, since everything is
parameterized, you can go back and modify the appearance and get it to look
just right and it will be consistent across the entire document. I find this
so very superior to the What-You-See-Is-All-You-Get style of document
preparation in Word.
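
A minimal sketch of the sort of after-the-fact, document-wide adjustment I
mean (the packages are standard ones, the particular values are just examples):

        \usepackage[margin=2.5cm]{geometry}   % change every page margin in one place
        \usepackage{titlesec}
        \titleformat*{\section}{\Large\sffamily\bfseries}   % restyle all section headings
        \usepackage[labelfont=bf,font=small]{caption}       % consistent figure/table captions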

Because TeX/LaTeX is a markup language, it's very easy to generate form
letters, labels, etc. by writing a small Python script. I've done this to
generate spine labels for my books containing a 2-D barcode, Dewey Decimal
call number, author, ISBN, and my name. The Python code looks up the
dimensions of the book on the internet to control the best layout. The
TeX/LaTeX part wasn't hard, and with TeX's Turing-complete macro language I
could have programmed almost all of this in pure TeX, but _I found the
combination of using Python as a preprocessor for a TeX/LaTeX backend very
powerful._
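
The LaTeX side of such a pipeline can stay tiny; the script just writes out a
small file per label, roughly along these lines (everything below is a made-up
placeholder rather than my actual code, and the 2-D barcode part would need an
extra package, e.g. qrcode, which I'm not showing):

        \documentclass[border=2mm]{standalone}
        \begin{document}
        \begin{tabular}{c}
          \texttt{005.133 KNU} \\        % call number emitted by the script
          \textsc{Knuth}       \\        % author
          {\tiny ISBN 0-000-00000-0}     % ISBN placeholder
        \end{tabular}
        \end{document}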

Fancy graphics are a challenge in any document, but _a fantastic package for
graphics is available for LaTeX called TikZ._ It is a macro package, written
in modern TeX/LaTeX macros. It, like most LaTeX packages, has excellent
documentation. See the TikZ package documentation at [3]. Photos and other
graphics can be directly inserted into documents too, but TikZ is integrated
with LaTeX, so margins, text reflow, captions, and fonts come out perfectly.
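
Just to show the flavor, a trivial TikZ sketch (nowhere near what the package
can do); it needs \usepackage{tikz} in the preamble:

        \begin{tikzpicture}
          \draw[->] (0,0) -- (3,0) node[right] {$x$};
          \draw[->] (0,0) -- (0,2) node[above] {$y$};
          \draw[thick,domain=0:2.5,smooth] plot (\x,{0.25*\x*\x});
        \end{tikzpicture}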

 _For academic and more serious writing learn BibTeX,_ a bibliography
generator/database. BibTeX citation information is widely available for the
papers I care about so I don't even have to enter the
Author/Title/Publisher/etc. myself.
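
The workflow is roughly: put entries like the following in a .bib file
(usually copy-pasted from the publisher; this one is typed from memory, so
treat the details as a sketch), \cite them, and let BibTeX do the formatting:

        @article{knuth1984literate,
          author  = {Donald E. Knuth},
          title   = {Literate Programming},
          journal = {The Computer Journal},
          year    = {1984}
        }
        % in the document:
        %   ... as Knuth argues~\cite{knuth1984literate} ...
        %   \bibliographystyle{plain}
        %   \bibliography{refs}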

I waste too much time thinking about fonts because I like them. Once upon a
time, when TeX was created, there were very few fonts available for digital
use. Today, font creation should probably be left to professionals and serious
hobbyists, so I don't use METAFONT anymore. It is a font creation system that
has been a part of TeX from the very beginning and is still available to those
who would like to design their own fonts. I used it to create my company's
first logo (I only needed 5 upper-case letters). Today, there are so many
great fonts available, and TeX/LaTeX can work with all modern fonts, so making
your own isn't a good use of your time. _Make sure to consult up-to-date
documentation on font handling, because there has been a great deal of
evolution in the use of fonts in TeX/LaTeX over the past few decades._

TeX is now 40 years old and has had many extensions and additions, so _it is
no fun to build from scratch._ Fortunately, for the Mac there is the
comprehensive MacTeX distribution containing every standard/optional part that
you should need if you are using macOS; on Windows there is proTeXt, but I
haven't used it. Both MacTeX and proTeXt are just system-specific
distributions of the TeX Live distribution available on Linux, which is
updated every year.

Programmers are probably most productive in their favorite editors and these
are likely to have a LaTeX mode. _Emacs has a very capable mode, AUCTeX,_ for
editing LaTeX documents. I even generate PDFs from my org-mode files by using
org-mode support for LaTeX. However, I do this only for documents that are
org-mode to begin with, not for ordinary documents.

There are also some nice standalone programs for preparing LaTeX documents.
I've used the program TeXShop, which comes with the bundle of TeX-related
tools installed along with the MacTeX distribution. (TeXShop itself is
Mac-only; TeXworks is a similar editor available for Windows and Linux.) It is
a good TeX editor, but programmers will likely miss the power and flexibility
of using a programming editor (git integration, custom key-bindings, etc.).

 _To try out LaTeX without needing to install any software, there is the
excellent web-based LaTeX system called Overleaf [4]._ It also supports
collaborative editing.

[1] attributed to Mark Twain

[2] [https://tex.stackexchange.com/questions/122117/overline-
cont...](https://tex.stackexchange.com/questions/122117/overline-containing-
text-or-other-symbols?rq=1)

[3]
[https://www.bu.edu/math/files/2013/08/tikzpgfmanual.pdf](https://www.bu.edu/math/files/2013/08/tikzpgfmanual.pdf)

[4] [https://www.overleaf.com](https://www.overleaf.com)

------
lyxfan
Let's not forget to show some love for the tool that makes LaTeX usable by
mere mortals:

[https://www.lyx.org/](https://www.lyx.org/)

LyX is so useful that I am sometimes amazed it is not more popular. All the
power of LaTeX with the ease of use of MS Word. And free and open source.
What's not to love?

~~~
FabHK
Yeah, Word-style "WYSIWYG" versus LaTeX-style source-code + compiling is
really a pretty big philosophical difference. Some people much prefer the
latter, and will avoid WYSIWYG LaTeX tools. The marked-up source code gives
you much better control, and you can use semantic macros that for example
allow you to change how you format vectors or chapter titles etc. across the
whole document easily.
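
A minimal sketch of what I mean by semantic macros (the names are arbitrary):

        \newcommand{\vect}[1]{\mathbf{#1}}      % later switch all vectors to \boldsymbol or arrows in one place
        \newcommand{\filename}[1]{\texttt{#1}}
        % in the text: the vector $\vect{x}$ is written to \filename{out.dat}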

See a marginally relevant xkcd here:
[https://xkcd.com/2109/](https://xkcd.com/2109/)

~~~
improbable22
Maybe you know this, but LyX is somewhere in the middle.

You can see roughly what you are doing while writing, which makes some things
much easier. Especially typing in large formulae (and double-checking that you
didn't make a mistake). Although the way you enter them is more typing than
point-and-clicking.

For something like changing how the chapter titles look, you are pretty much
back in the world of LaTeX, trying out some macro to change all of them.

------
zzo38computer
I use Plain TeX with .dvi output. One advantage is that it can use the DVI
format, which is in many ways much better than PDF for many things. (You can
also convert DVI to PDF and to other formats; I wrote a program to convert DVI
to PBM (without using PostScript), and use that to print out the documents
(through foo2zjs, which converts the PBM into the format needed by the
printer).)

~~~
mehrdadn
> DVI format, which is in many ways much better than PDF for many things

Would you mind listing a few? I ask because, if my memory serves correctly,
the only thing DVI ever accomplished for me was to make me want to poke my
eyes out.

~~~
drfuchs
Ouch! Sorry! Anything specific? (DVI is my design.)

~~~
mehrdadn
Haha wow! Yeah sure, here's an example of what I mean:
[https://imgur.com/a/eX2BcIC](https://imgur.com/a/eX2BcIC)

I'll refrain from elaborating on the difference between those (it's probably
best if I don't), except to say that my jaws will probably shatter the floor
if you tell me you don't see the difference. :-)

(Probably worth saying that I have the same issue with PS; it's not just DVI.
I do sometimes wonder if I'm the only one who sees these.)

~~~
drfuchs
There’s no context given, so it’s hard to tell why you’re ascribing the
problem to dvi, rather than it being used sub-optimally. What’s the full tool
chain being used? What’s the output device?

Given that dvi doesn’t involve pixels, and lets you position any character
anywhere on the page, with precisely known rules for rounding into resolution-
specific device-space, you’ll have to be more specific about what you’re
blaming dvi itself for.

~~~
mehrdadn
Happy to provide context if you tell me what to provide...

The "output device" is... my monitor?

The toolchain is TeX Live for generating the files, and the usual viewer for
each file type on Windows (Acrobat Reader for PDFs, and Evince for DVI). If
you think it's Evince's fault I'd love to hear better alternatives, because I
haven't found a _single_ viewer that views DVIs any differently. And for input
files, you can generate files via LaTeX pretty easily:

    
    
      % DVI: latex    thisfile.tex
      % PDF: pdflatex thisfile.tex
      \documentclass{article}
      \usepackage{lipsum}
      \begin{document}\lipsum\end{document}
    

If this is using it "sub-optimally" then I guess I don't know how to use it
"optimally", and I'm happy to hear how.

Remember, though, at the end of the day, I'm just an end-user. I just know
that every time I try to view DVI and PS files I have to tear my eyes out, and
that I don't have this struggle with PDFs. I neither know which particular
person or place in the pipeline to assign the blame to, nor does knowing that
make it any easier for me to read the text...

~~~
drfuchs
Presumably Evince could do a better job of it then.

~~~
mehrdadn
What viewer would you recommend then? Would you mind posting a screenshot
coming from the optimal viewer you have in mind? Like I said, I haven't found
any viewer that does a better job.

~~~
drfuchs
I’m not current, so I don’t know if anyone has bothered to do a dvi viewer
optimized for today’s display technology. Given the billions of dollars
invested in the pdf ecosystem, though, it’s a reasonable place to live.

~~~
mehrdadn
> I’m not current, so I don’t know if anyone has bothered to do a dvi viewer
> optimized for today’s display technology.

Oh, if that's the problem, then please just point to a better viewer for
yesterday's display technology. Or even a decade ago's. I'll find you an older
monitor from whatever era you had a good viewing experience on and try it on
that. Because I'm one hundred percent sure an older display technology is not
going to make it look better. You can see that others above pointed out that
it's looked awful since 2003. I can vouch that it's consistently been awful
since over a decade ago, and PDFs have consistently been fine... on every kind
of display and resolution I've tried. I've absolutely never, ever had a good
experience viewing DVIs.

~~~
drfuchs
1982 DataDisc displays that Knuth developed everything on? Sorry you’re
unhappy, but given that dvi is literally a dump of TeX’s internal results on
layout positioning, it contains all the information that any other system
could possibly use. Perhaps your concerns have more to do with font rendering?

~~~
jfk13
Indeed -- I think it's clear mehrdadn's main issue is with the font rendering.
The image shows subpixel antialiasing on the PDF version, and not on the DVI,
so naturally they look quite different.

But that's nothing to do with DVI itself; it's entirely the responsibility of
the renderer.

~~~
mehrdadn
I find these responses baffling. I'm just an end-user. All I see is that every
time I get a DVI file, I want to tear my eyes out, no matter where or when I
open it. First my assessment gets questioned, then when I spend time
installing software and compiling an example just to demonstrate the concrete
problem upon request -- which I have no reason to believe was novel or
previously unknown in any way -- I'm promptly shut down and told to respect
the file format and instead blame all the viewers in existence. Great -- so
what was/am I supposed to do with this information? Are my eyes supposed to
see the file clearly now that the blame got assigned somewhere? Or am I
supposed to write my own DVI viewer tomorrow afternoon? How is this intended
to be helpful?

~~~
raphlinus
It's not helpful if you're expecting to be handed polished products that do
what you want. It is helpful if you want to understand what's going on. I
think it would be interesting to get a high quality DVI viewer based on modern
graphics tech, but of course such a thing will take time and effort. I plan to
meet with Dr. Fuchs in the next few weeks to talk about this and related
issues.

------
JohnHammersley
And whilst this is shameless self-promotion, if you want to try out LaTeX
without installing it yourself, please check out
[https://www.overleaf.com](https://www.overleaf.com) (I'm one of the founders)

~~~
inamberclad
Virtually everyone at school used Overleaf. Impressive bit of kit,
particularly since the version 2 update. I was fond of how v1 kept everything
in a git repo though.

~~~
stroebjo
I liked that about v1 as well, but it seems like they have brought it over to
v2 [0]. Trying it on a v2 project, it only seems to have one commit for
everything up until the first clone?

[0]: [https://www.overleaf.com/blog/bringing-the-git-bridge-
to-v2-...](https://www.overleaf.com/blog/bringing-the-git-bridge-to-v2-its-
here-in-beta)

------
xorand
This quote is amusing:

"there’s even an open archive maintained by Cornell University where authors
of papers in physics, chemistry, and other disciplines can directly submit
their LaTeX manuscripts for open viewing."

for two reasons:

1) it sounds like 'there's even a church at the Vatican', given that arXiv.org
is the biggest collection ever of scientific articles submitted by their authors

2) there's no chemistry section on arXiv.org, but about half of it is
mathematics plus some computer science

------
ai_ia
Just a curious question. I love LaTeX. Can we build a programming language for
the web which just focuses on the writing and takes care of the styling for
web, mobile, etc.? I mean, Markdown is already there doing something similar,
but it still needs to be set up properly and styled to use it correctly. Can a
LaTeX-type tool be developed for the web, where the user writes what they want
to write and generates a static web book or blog post or article conforming to
the same style?

Apologies if this is a stupid question.

~~~
skrebbel
There are thousands of tools like this, called "static site generators". Most
include a barebones default theme like you describe.

There are so many because a few years ago there was a big hype around them and
everybody and their cat built one.

------
jermaink
Just curious with regard to presentations: does any known tech corp use Beamer?

~~~
gh02t
I've seen people from IBM giving talks using Beamer.

------
segfaultbuserr
I know there's a small but vocal group in tech that openly attacks LaTeX,
e.g. "LaTeX considered harmful". While I don't completely agree with them, I
think there are some legitimate and genuine criticisms there.

Personally, I find LaTeX great due to its native programmability - it
facilitates a natural separation of content and presentation and, combined
with its markup language, can be very powerful. It also enables extensibility -
there are community packages for almost every single type of content, e.g.
organic chemistry, Feynman diagrams, even music. A typical user only needs to
\usepackage and stop worrying. Its CTAN ecosystem is just like that of a
programming language such as Perl or, more recently, Node.js or Go, which is
great.

But I find all hell breaks loose when you want to get a slightly different
formatting than what's offered. Then suddenly the entire system becomes
something you need to fight against. Previously, you could be a happy
\usepackage code monkey, but now you need to know the system inside out and
hack a path ahead. It's just like when you use a software library in a
slightly different way than the author expected and suddenly find yourself in
a battle with the entire library; unfortunately, the same thing occurs in
typesetting...

For example, with LaTeX you can add footnotes essentially anywhere with
guaranteed aesthetics. UNLESS you want to add a footnote to your title: then
it turns out the existing infrastructure in the "article" class doesn't allow
it at all, and you need to define and redefine and undefine some internal
macros in your document to implement it, as I learned from
[https://tex.stackexchange.com/](https://tex.stackexchange.com/)

And all the separation of content and presentation and its benefits end at
this point. It's no longer "what you think is what you get".

The same issue also occurs when you are trying to make a Beamer slideshow:
most "environments" in LaTeX are designed for papers, not slides. For example,
when I want to put some images on a slide, often ones that don't fit in a
"regular" geometric position, I find I have to keep hacking the width and
position of the images and keep compiling until the result is acceptable. I
don't know if there are better packages for typesetting slideshows;
recommendations are welcome.
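
What I end up with is usually something like this, with the widths and spacing
found purely by recompiling until it looks right (the file names are
placeholders):

        \begin{frame}{Results}
          \begin{columns}[T]
            \begin{column}{0.55\textwidth}
              \includegraphics[width=\linewidth]{plot-a}
            \end{column}
            \begin{column}{0.45\textwidth}
              \vspace{1.2em}   % magic offset so the two figures roughly line up
              \includegraphics[width=0.85\linewidth]{plot-b}
            \end{column}
          \end{columns}
        \end{frame}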

Another problem I've noticed is the phenomenon of confusing and outdated
packages. Often there is more than one package for a specific task: some are
old and limited but still used in many documents, others are new, and the rest
are competing implementations. The old ones are frequently mentioned in old
guides; they work for a while until you hit a corner case, and then it takes a
few attempts before you move to the newer packages. Again, just like
programming languages. A recent trend in programming is writing new, cleaner
implementations of basic tasks; I don't know if the same thing is happening in
the LaTeX community, but if it is, I think it would be great.

On the other hand, I've yet to see a word processor which allows users to
extend it and automate formatting and typesetting without some ugly macros
hacked together in VBScript. So I see LaTeX as a valuable tool and I'll keep
using it for the foreseeable future.

A final word: having
[https://tex.stackexchange.com/](https://tex.stackexchange.com/) is a great
contribution to the LaTeX community, just like how
[https://stackoverflow.com/](https://stackoverflow.com/) helps for
programming.

~~~
lifepillar
> But I find all hell breaks loose when you want to get a slightly different
> formatting than what's offered. Then suddenly the entire system becomes
> something you need to fight against.

LaTeX was developed with the goal of freeing the user (a researcher,
typically) from wasting time on layout so they can focus on content
(scientific research, usually).

Plain TeX is much more flexible, but you may argue that it is much lower level
(or is it?).

For general typesetting, I can’t recommend ConTeXt enough. Its philosophy is
nearer to TeX than to LaTeX, and it gives you full control over layout, too.
It's a much smaller package than the full TeX/LaTeX ecosystem. And it’s scriptable with
Lua!

~~~
segfaultbuserr
> _For general typesetting, I can’t recommend ConTeXt enough._

Thanks. I'm currently using XeTeX/XeLaTeX due to its newer codebase, native
Unicode support, OpenType fonts, PDF output, etc. But perhaps it is time to
try ConTeXt. I was struggling previously with Lua in the Awesome window
manager, as I found (coming from a Python background) the syntax weird, but
now I think I seriously need to pick up a proper textbook and learn Lua; the
Lua engine is embedded in everything, and learning it opens up a new world.

------
capnrefsmmat
I'm a little sad that there still doesn't seem to be a "LaTeX for the Web", a
document markup system with a similar philosophy to LaTeX but with HTML
output. LaTeX is basically only suited to static PDF output, and it survives
only because there don't seem to be tools that do an equally good job of
generating HTML documents.

There are a lot of special-purpose markup languages, like Markdown and
AsciiDoc. They involve toolchains written in other languages to convert them
to HTML. If you want to add features, you have to hack the toolchain.

For blog posts and simple websites, that's fine. But sometimes I want to build
something nontrivial -- as an academic, the first thing that comes to mind is
an academic paper, where I might want to have special markup for theorems,
proofs, examples, figures, and tables, and have ways to automatically cross-
reference them, generate tables of contents, embed automatically formatted
bibliographies, and so on. Or generate figures from code (like TikZ), or embed
data analysis code right in the source and embed the results (via knitr or
Sweave).

With pandoc, bookdown, and knitr, you can get pretty close to this. But what
made LaTeX so powerful is that it is programmable _in LaTeX_. It can be
extended: you can define new types of environments (example problems! homework
exercises! every example shows equivalent source code in three different
languages! category diagrams generated automatically! musical notation is
typeset!), you can make new commands to automate drudgery (typesetting
chemical equations! building complicated equations!), and you can do it all
with at least some basic separation between content and presentation style.
Converting the larger LaTeX documents I've made to Markdown or Org would be
basically impossible without writing a bunch of scripts to extend the Markdown
renderer and hack everything together. There's no equivalent of just writing a
LaTeX package.
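
To make "programmable in LaTeX" concrete, here's the kind of tiny, self-serve
extension I mean (the names and content are invented):

        % a numbered, cross-referenceable "exercise" environment, in two lines
        \newtheorem{exercise}{Exercise}[section]
        \newcommand{\dataset}[1]{\textsf{#1}}   % semantic markup for dataset names

        \begin{exercise}\label{ex:first}
          Reproduce the figure above using the \dataset{penguins} data.
        \end{exercise}
        % elsewhere: see Exercise~\ref{ex:first}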

I'm not aware of another document preparation system that comes close to
LaTeX. Org mode is probably the closest, since it has tools to embed code
blocks in other languages and include their output, but after a week of
fighting with conversions between markup languages I can really see the appeal
of LaTeX's uniform syntax and built-in programmability.

The other promising option seems to be Pollen[0], a Racket-based programmable
system. But it seems more like TeX than LaTeX: it provides the very basic
tools to build a programmable document system, but not the higher-level
conveniences (like cross-referencing commands and standard sectioning and
environment commands), which you would need to build yourself. Maybe someday...

[0] [https://docs.racket-lang.org/pollen/](https://docs.racket-
lang.org/pollen/)

~~~
neilv
Have you looked at Scribble? It's very programmable, and evaluated similarly
to TeX or LaTeX, but in a more powerful and straightforward language (Racket).

It's used by academics to write conference papers, it's used for most of the
Racket books for both Web and camera-ready typesetting for print, and I've
even used it for embedded API docs.

[https://docs.racket-lang.org/scribble/](https://docs.racket-
lang.org/scribble/)

(That manual itself was written in Scribble.)

(I'm actually thinking of moving from Scribble to Markdown for embedded API
docs, as part of an open source ecosystem goal, to make docs for reusable
modules more lightweight to add, but it's hard to give up some things about
Scribble. For example, for one module, I needed substantial documentation
about each opcode, and wanted it to be formatted similarly to an API function,
but with different meta properties, and some tricky formatting. Scribble let
me make a simple semantic form for that, separate from formatting, like LaTeX
would.)

~~~
Cyph0n
That looks nice, but it’s no match for LaTeX in the academic publishing world,
simply because virtually all technical conferences and journals only provide
Word and LaTeX templates.

~~~
capnrefsmmat
Scribble can export to LaTeX and includes templates for a couple different CS
conferences, actually, though probably one would have to extend it a bit to
use it for other venues.

------
chess93
Note for any college students (especially undergrad): Do your STEM
presentations in Beamer to get extra brownie points.

------
Wistar
Maybe I dreamed it, but I could swear that I read that Gödel, Escher, Bach: An
Eternal Golden Braid was typeset by Hofstadter in TeX. That, by itself,
although fairly amazing, wasn't the most noteworthy thing to me; rather, it was
that he had first read—uploaded to his brain—all the docs for TeX before he
wrote a single word, and then let the experience of using it call forth the
relevant documentation from memory.

~~~
stan_rogers
While METAFONT, and to a lesser extent, TeX, were to become frequently
featured characters in Hofstadter's _Metamagical Themas_ days, GEB was pre-
TeX. It was created using Pentti Kanerva's TV-Edit at Stanford (with the data
on punched paper tape). The preface to the 20th anniversary edition of GEB
tells the tale.

~~~
Wistar
Ah. Thank you. Well, I guess I did dream it.

------
mymythisisthis
Good tutorial for LaTeX?

~~~
wglb
There are tons of resources at
[https://www.tug.org/texlive/](https://www.tug.org/texlive/). If you download
it, there are many included.

~~~
wglb
Also the article itself has some pointers.

------
pjmlp
While I was big into LaTeX during university, and even wrote my thesis with
it using the MiKTeX distribution, nowadays I'd rather use the likes of Word,
FrameMaker, or DocBook- or DITA-based tooling.

~~~
cpach
It was quite long ago that I saw a new project opt to utilise DocBook. Is it
still popular?

~~~
pjmlp
I used it on a project three years ago.

The whole help system was based on it, generating Word, PDF, and HTML
deliverables.

It is relatively popular at the enterprise level, where the licensing costs
for such tools are peanuts vs the overall project costs.

~~~
cpach
Neat! Would you mind listing some of those toolchain vendors? I think about
pros and cons of various publishing systems from time to time and would love
some new input.

~~~
pjmlp
FrameMaker, OxygenXML, RoboHelp, XMetaL, XMLmind, Arbortext.

~~~
cpach
Oh, lots of stuff to look up. Thank you!

------
j-pb
If we want to get remotely intelligent machines, LaTeX needs to die.

It's a visual description based on a Turing-complete language; 95% of the
semantic meaning is lost once it's poured into a TeX document, and the last 5%
are gone once it's compiled into a mess of PDF text boxes that ignore text
flow and just look correct.

Literally any other document description language would be a better lingua
franca than LaTeX, because a language should be used to communicate, and TeX
is meant to print curves.

~~~
ezequiel-garzon
The use of LaTeX should not and does not prevent the advancement of machine
intelligence, just as the stubborn use of blackboards and chalk by
mathematicians as their prime communication devices does not. Now, when
machines get intelligent enough, they’ll figure out LaTeX and more.

Keep in mind that one goal of TeX was to make it fairly easy for a human to
write it down. Another goal is for another _human_ to be able to understand
that source quite well. Now picture both tasks with semantics-rich MathML.

