A Guide to Writing Mathematics [pdf] (ucdavis.edu)
132 points by zwliew 25 days ago | 24 comments



This is a sensible guide, by Kevin P. Lee, aimed at undergraduates taking math classes.

A similar guide, aimed at people writing research papers, is “How to Write Mathematics” by Paul Halmos (1970) [1].

They both start from a similar assumption:

Lee: “When you write a paper in a math class, your goal will be to communicate mathematical reasoning and ideas clearly to another person. The writing done in a math class is very similar to the writing done for other classes. You are probably already used to writing papers in other subjects like psychology, history, and literature. You can follow many of the same guidelines in a mathematics paper as you would in a paper written about these other subjects.”

Halmos: “The basic problem in writing mathematics is the same as in writing biology, writing a novel, or writing directions for assembling a harpsichord: the problem is to communicate an idea.”

[1] https://www.mathematik.uni-marburg.de/~agricola/material/hal...


As a math major at UCD, I never saw Lee's essay, so I'm guessing it's from the past few years.

I did read Halmos's, though. It was helpful for me because I was starting from a very programming-centric frame of mind, and I was surprised at how "conversational" math writing was. Reading it helped me start to learn how to express ideas precisely and clearly without a strict code-like structure.


Thanks for linking this! I like Halmos's clarifications on the editorial "we". I don't know if this is because I'm living in a non-English-speaking country, but at our university students often get told to use "we" instead of "I" in their papers. I always found this weird. If you're the only author, you can't just refer to yourself as "we" simply to avoid the use of "I". It sounds wrong, especially if the reader knows that you're the only author, and it eventually leads to absurd constructs such as the example in the essay: "We thank our wife for her help".


I understand the problem. I used to manage an English academic writing program at a university in Japan, and what guidance to give to students about pronoun usage was a frequent topic of discussion among the teachers.

One problem was that the students had learned a moderately informal version of English in which first-person pronouns are common. Also, they were young and used to writing and speaking about themselves. That led to what some teachers perceived as excessive use of “I” for the research papers the students were being taught to write.

Another issue was that the teachers themselves all had academic backgrounds, most with doctorates, and, we discovered through our discussions, pronoun usage varies a lot by field. Curious, I once looked through journals in a variety of fields—sociology, nursing, physics, gender studies, literature—and found that in some fields the authors never seemed to refer to themselves by “I” or “we” while in others it was common.

The use of “we” in mathematical writing, especially proofs, may be a special case. The “we” in a sentence like “If we assume that M is a compact metric space, then we can prove that ...” doesn’t really refer to the author or authors; it seems to have a more abstract referent.

Paul Halmos, by the way, was an excellent teacher as well as writer of mathematics. I was fortunate to take several classes from him when I was an undergraduate at the University of California, Santa Barbara, in the 1970s. Though I ended up not going into mathematics, I still have very fond memories of learning with him.


I'm going to read through the guide linked in the post, but I don't have the time (or, really, the desire) to read through the guide aimed at people writing research papers.

Do you have a TL;DR for how the guide aimed at undergraduates differs from the one aimed at research papers, in terms of writing advice?


I don't know if anyone else experienced something similar in physics, but it was not until I read the textbooks of David Griffiths (his books on E&M and quantum especially) that either of these subjects became clear to me.

I think the reason is that his writing was like a clear one-on-one tutorial session, with many of the "writing mathematics" practices described in the article here. It had a conversational style, as if someone were explaining the material aloud, just in written words. I recall phrases like "now look at the expression we have here, what does it tell us?" or "what follows is a somewhat long derivation, but you will find the effort of working through it pays off".

Most other textbooks read like stilted reference manuals by comparison, with "exercises left to the reader".


It has been a while since I read Griffiths. If I recall correctly, I liked his style for E&M, but the quantum mechanics book was far too discursive, full of unnecessary jokes and cutesy phrases that muddled the discussion instead of aiding understanding. I remember being frequently confused about when Griffiths was making a physical argument versus a mathematical argument when deriving results.

My professor didn’t use Griffiths for the exercises, so I actually sold my copy back to the bookstore and bought a used copy of Shankar’s book: it is certainly drier, but I think it’s much more clear and precise.


There is also “Mathematical Writing” by Knuth: https://jmlr.csail.mit.edu/reviewing-papers/knuth_mathematic...


What's not in the paper, but what I find quite annoying in scientific articles, is excessively loaded notation: some variables end up with four or five subscripts, and the equations become unreadable.

I really appreciate it when papers say at some point, "for the sake of clarity, we drop this or that notation," for details that are not that relevant to the key ideas. It's usually the good-quality papers that do this.
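
A purely illustrative before/after (the symbols below are invented, not taken from any particular paper): the first line drags along indices for sample, coordinate, time step, and run, while the second is what remains once the author announces which indices are being dropped.

  % heavily loaded: indices for sample i, coordinate j, time step t, and run k
  \hat{x}^{(k)}_{i,j,t} = \sum_{l} w^{(k)}_{i,l,t}\, \phi\bigl(z^{(k)}_{l,j,t}\bigr)

  % after "for the sake of clarity, we drop the run index k and the time index t"
  \hat{x}_{ij} = \sum_{l} w_{il}\, \phi(z_{lj})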


You mean you didn't find the triple-curly Greek xi letter intuitive, even when subscripted with mu-nu and superscripted with a double prime??


More of a rant, but ever since I began implementing algorithms from various papers for my PhD, my aversion to mathematical notation has grown steadily. More often than not it's imprecise and hard to reproduce.



I also sometimes have to translate mathematical notation into algorithms. As a non-mathematician, I find math notation similar to naming all your variables 'a', 'b', 'c', etc. and then writing a key elsewhere that explains that 'b' stands for 'car deceleration rate'.

This would be insane to do in code; why is it normal in mathematics?

Note: I had this example in my head while writing https://www.traffic-simulation.de/info/IDMsstar.png
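
As a sketch of that contrast, assuming the linked image shows the usual Intelligent Driver Model desired-gap formula s* = s0 + max(0, v·T + v·Δv / (2·√(a·b))); the function and parameter names below are mine, not from any paper:

  import math

  # Paper style: terse symbols, with the legend living somewhere else
  # (s0 = minimum gap, T = time headway, a = max acceleration, b = comfortable deceleration).
  def s_star(v, dv, s0, T, a, b):
      return s0 + max(0.0, v * T + v * dv / (2 * math.sqrt(a * b)))

  # Code style: the names carry the legend with them.
  def desired_gap(speed, approach_rate, min_gap, time_headway,
                  max_acceleration, comfortable_deceleration):
      return min_gap + max(0.0, speed * time_headway + speed * approach_rate
                           / (2 * math.sqrt(max_acceleration * comfortable_deceleration)))

Both functions compute the same number; the only difference is where the reader has to go to find out what 'b' means.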


Because one of the main purposes of mathematical symbols is to be manipulated by humans by hand, short variable names are essential.

For programming, the same constraint does not apply.


Right, so it all stems from mathematicians working with pencil and paper.

Are mathematicians ever frustrated that they don't understand what the variables mean? Or are you able to look at the above equation and infer the meaning of the variables based on experience?

I understand that there are symbols such as "delta" which basically always mean the same thing. But would you have been able to tell what "b" meant without someone telling you?


> Are mathematicians ever frustrated that they don't understand what the variables mean?

Yes, if the presenter does not define them, or if the variables don't have a "usual" definition within the field.

> But would you have been able to tell what "b" meant without someone telling you?

Probably not.

Mathematicians, being humans, are not perfect by any means. They will sometimes fail to define things properly. But they are not stupid: they do have a cultural norm of defining everything, one that is obeyed most of the time. Conversely, readers of mathematics are not perfect either. They will sometimes skip over the definitional part of a paper or book, jump straight to the results, and then wonder why they can't understand them.


Variable names in mathematics have no intrinsic meaning (as they have, for example, in physics). In a mathematical text, every occurring variable must be properly defined. This is most commonly done before they are used, with formulations like "let x be …" or "x := …", or immediately after they have just been used in a formula, with something like "where x denotes …". Failing to do so is just as much of a mistake in mathematics as it is in programming. (In a homework assignment or exam, failing to do so will lose you points.)

In practice, one should be aware of the following points:

  - In programming, the computer will complain if an undefined variable is used. In mathematics, this is sadly missing. (The next best things are other proofreaders, i.e. other mathematicians.)

  - Variable names aren’t just picked at random (or as a, b, c, …), but nearly always follow sensible patterns. (Natural numbers are n, m, k, l, …; vectors are v, w, u, …; indices are i, j, k, l, …; radius is r; …) Different authors may use different conventions, but they still allow mathematicians to kind of understand what the variable means just from looking at its name.

  - Every area of mathematics has certain keywords which the reader has to be aware of. Again, some authors may use (slightly) different conventions, but there are typically only a few conventions out there, and they often don't differ much. (Example: the space of homomorphisms/linear maps between two vector spaces V and W is commonly denoted by Hom(V, W), hom(V, W), ℒ(V, W), or Lin(V, W).) One can oftentimes tell what a keyword means just from its name, its signature, and its usage. Keywords also often consist of more than one letter or are typeset in a special way to distinguish them from regular variables.

Good mathematical writers will oftentimes go out of their way to explain their notation at the beginning of their text, just to be sure.
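
A minimal sketch of that define-before-use convention, reusing the Hom(V, W) example from above (the particular sentences are mine, not quoted from anywhere):

  % Define before use ("let ... be"), or right after use ("where ... denotes"):
  Let $V$ and $W$ be finite-dimensional vector spaces over a field $k$,
  and let $\operatorname{Hom}(V, W)$ denote the space of linear maps $V \to W$.
  For $f \in \operatorname{Hom}(V, W)$ we write $\operatorname{rk}(f) := \dim f(V)$,
  where $\dim$ denotes the dimension as a $k$-vector space.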

> Are mathematicians ever frustrated that they don't understand what the variables mean?

So to answer the question: if a mathematician doesn't understand what a variable or a notation means, then one of the following has happened:

  - The variable was already introduced beforehand, but the reader forgot about it. (This is the most common scenario.)

  - The variable is explained in the upcoming line. (Also very common. The reader will, of course, only notice this after going through the previous part of the text multiple times in search of just this explanation.)

  - It is a standard notation that the reader is not familiar with. (Often happens if the reader is missing the background knowledge assumed by the author, or if the author uses some outdated notation (e.g. because they have been dead for over 50 years).)

  - The author made a simple mistake while writing. (A typo; forgot to change a variable name after shuffling things around.)

  - The author actually forgot to define the variable: a mistake that is hopefully caught by their peers.

  - The explanatory text was left out for time reasons (giving a talk, writing some rough/informal lecture notes, quickly scribbling down homework in the morning).


Thank you for the comprehensive write-up! It does make me feel better to know that there should always be an accompanying explanation of what the variables are.


The antidote to your problem is to try to write a serious mathematical paper yourself. After spending six months writing it, on top of however many months or years of research, you will realize that writing a paper whose technical results are easy to reproduce, whose notation is precise, and which also conveys understanding to other human beings is an extremely difficult task, one that even the smartest people on the planet can't always pull off.

For most people, with their limited writing skills, there is a tradeoff between understanding and precision, and there is no perfect paper.


Absolutely! I'm just not sure we have chosen the right notation to describe our ideas. There are many interesting developments in formalizing mathematics, and I hope the field is gravitating towards such solutions.


Your objection might be to the author of the notation, not the notation itself. Or maybe to the review system that's supposed to catch this kind of imprecision. As with any language, it's possible to write nonsense or even self-contradictory statements in mathematical notation.


Well, I believe the authors do their best to convey their ideas, but because the notation is mostly informal, it's easy to miss details. Also, different institutes might use their own special notation for the same ideas, and even individual authors prefer one notation over another.

All in all, "imprecise" was probably the wrong term. The statements are precise for the people working in this specific "bubble" of research, because they might know the implicit assumptions being made; in the worst case, the only one who knows the implicit assumptions is the author.


TBF, if key assumptions are left implicit, it’s not math.


Math without notation is 1000x less precise and harder to reproduce, unfortunately.



