
What do grad students in math do all day? - ColinWright
https://gist.github.com/stoutbeard/4158578
======
algorias
An incredibly accurate depiction of research in any theoretical field, I'd
say. Compound that with the fact that during your education you're mostly
presented with texts that summarize decades or more of research into a scant
few pages as if the people involved had just flowed naturally from one idea to
the next, from a problem statement to the incredibly complex idea that unlocks
the proof.

When it's finally your turn to try your hand at actual research, it turns out
that your contributions are barely a couple of side notes on a restricted
subset of a problem in the hope that someone will use that information to find
out something that is actually relevant in practice.

Rather than turning me off from academia, it makes me marvel at the tower of
minuscule pebbles upon which our modern civilization rests. One day, I might
get to place a few more of them on top.

~~~
wheaties
Agreed. After doing 2 years I left with a masters and never looked back. What
is even scarier than the minuscule contributions you might make is how few
people have managed anything more than minuscule. Those people are amazingly
smart.

By the way, cats do like vacuums...
<http://www.youtube.com/watch?v=bCzkm2z4-6g>

~~~
SilasX
If they're so smart, why aren't they any better at bringing newcomers up to
speed?

~~~
omaranto
Some things are hard no matter how you try to explain them. There's lots of
stuff that seems to require working through it yourself to understand.

~~~
SilasX
Oh, I don't disagree; some topics are inherently intractable. It's just that,
in all my experiences, the numbers are more like 1 truly intractable topic for
every thousand that some expert will mistakenly claim is intractable simply
because they haven't put the effort into tracing out the connections to other
topics.

The woman in the top row of this xkcd pretty much sums up my life:

<http://xkcd.com/566/>

------
smoyer
"Can you use a vacuum cleaner to clean a cat?" ... Yes, but I've found that
the cat might not like it. I'd also recommend using the upholstery tool, and
whatever you do, don't allow the cat's tail (or any other part) to come into
contact with the rotating brushes. For a moderately dirty cat, the vacuum's
suction should be enough. If you have a really dirty cat, or a lot of cats to
clean at once, try an automatic car wash instead (just make sure it's the kind
where the doors close before the wash starts, otherwise too many escape).

~~~
joshrotenberg
My cats hate the vacuum but one of them absolutely loves to be swept on his
back with the broom. He's old school like that.

------
AlisdairO
I did my PhD in the database area, and I really relate to some of the stuff in
this article - most particularly the overwhelming difficulty (and pressure
involved) in doing something genuinely new. I came in with all these wonderful
ideas, and within a few months realised that not only had they all been done,
most of them had been done decades ago. Quite the comedown!

It does give you a bit of a cynical outlook on new tech in general.

~~~
stochastician
One of the things I've found is that, to really consider myself a "master" of
a particular field, I have to know where the research frontier is. "What are
the current unanswered problems? How do you extend method X to cover case Y?"

------
dougk16
Fun read, well-written. I can relate to this somewhat. I recently proofread a
paper (for English mistakes) discussing new applications for Hermite
polynomials, and I was blown away by the years of study it would take me to
even begin to understand what was going on with my amateur math skills. I
guess it can be similar to what a non-programmer feels like when looking at
code.

------
n00b101
I prefer the explanation that I developed back when I was a math student:

Imagine that all of mathematics is represented as a solid sphere (ball). At
the core (origin) of this sphere are the most basic concepts in math, that we
all learn in elementary and high school. On top of the core are many layers of
knowledge that are all interconnected, but lead in different directions from
the origin. These layers have names like algebra, calculus, and geometry. On
top of those are other layers with names like topology, set theory, number
theory, analysis, etc. These layers continue, like an onion, all the way to
the surface of the sphere.

All mathematics students begin at the core and climb outward, towards the
surface of the sphere. They choose different directions and set out to learn
and practice everything that they encounter on their paths through the sphere.
Eventually, after many years of study, the students who survive finally reach
some point on the outside surface of the sphere. Which exact point on the
surface each student reaches depends on the direction that he chose at the
start. Once the students reach the surface of the sphere, they have understood
everything that is known to mankind about some specific series of subjects
within their speciality. Standing on the surface of the sphere, they must then
work to add a new layer to the sphere, to add something new and original to
human knowledge.

As time goes by, new theories are developed and added as new layers onto the
sphere. In ancient times, a great mathematician could learn everything in the
whole of the sphere within a single lifetime. The sphere has grown
exponentially, however, and in modern times no one person could ever visit
every place in the sphere within a single lifetime. The sphere has become so
vast as to defy comprehension, as generation after generation has expanded it
with new layers.

When the mathematics students were starting out in the core of the sphere,
they were all in the same place. They could easily see and speak to one
another. However, when they reach the surface of the sphere, it is as if they
are scattered across different points of the surface of a huge planet. A
student might be lucky to find himself at a popular spot on the surface, where
there are perhaps a handful or even a few dozen other students who he can talk
to. Another student, less fortunate, may find himself stranded in a deserted
place where there is no one that can hear him and there is no one for him to
speak to. The surface is a lonely place, where few souls are encountered, and
if you do encounter some wandering traveller then you are unlikely to speak
the same language and must communicate by crude gestures like the waving of
hands.

As more and more matter is added to the sphere, the dwellers on its surface
drift further and further apart, as the surface area expands. Furthermore, the
surface of the sphere grows farther and farther away from the core of the
sphere. It takes longer and longer for the students to reach the surface of
the sphere, as there is more and more volume to traverse.

~~~
algorias
Agreed, but as a slight counterpoint, some research is about stripping away
unnecessary middle layers to find abstractions: a quicker way to the surface,
in other words.

If we didn't do this, research would inevitably stop, unless eternal life lies
inside a 100-year radius of the sphere (and we could keep learning and getting
smarter forever, which I doubt).

~~~
mturmon
And, some stuff has been forgotten. Even if it used to seem pretty important.

Speaking about spheres, there's lots of stuff about spherical trigonometry
that you can find in old books (say, 1880s to 1920s [1]). They used to think
it was math, but it has been determined to not be math any more. It's now
stuff that "everyone knows".

[1] [http://ebooks.library.cornell.edu/cgi/t/text/pageviewer-idx?...](http://ebooks.library.cornell.edu/cgi/t/text/pageviewer-idx?c=math;cc=math;rgn=full%20text;idno=00640001;didno=00640001;view=image;seq=9;node=00640001%3A5;page=root;size=100)

------
chris_wot
Speaking of vacuum cleaners, Stephen Pile, my very much favourite author ever,
wrote an entry in his hilarious _Book of Heroic Failures_ about the man who
almost invented the vacuum cleaner. I reproduce it here verbatim, under the
principles of fair use (transformative, educational, not for profit and a tiny
amount of his work that won't impinge on his profits - buy his book!)

 _The Man Who Almost Invented The Vacuum Cleaner_

The man officially credited with inventing the vacuum cleaner is Hubert Cecil
Booth. However, he got the idea from a man who almost invented it.

In 1901 Booth visited a London music-hall. On the bill was an American
inventor with his wonder machine for removing dust from carpets. The machine
comprised a box about one foot square with a bag on top.

After watching the act -- which made everyone in the front six rows sneeze --
Booth went round to the inventor's dressing room.

"It should suck not blow," said Booth, coming straight to the point. "Suck?",
exclaimed the enraged inventor. "Your machine just moves the dust around the
room," Booth informed him. "Suck? Suck? Sucking is not possible," was the
inventor's reply and he stormed out. Booth proved that it was by the simple
expedient of kneeling down, pursing his lips and sucking the back of an
armchair. "I almost choked," he said afterwards.

\---

There's a story here somewhere that ought to relate to mathematics, if only I
could find it...

------
lutusp
> Fortunately, math has an incredibly powerful tool that helps bridge the gap.
> Namely, when we come up with concepts, we also come up with very explicit
> symbols and notation, along with logical rules for manipulating them. It's a
> bit like being handed the technical specifications and diagrams for building
> a vacuum cleaner out of parts.

Just to play the devil's advocate ... math has wonderful symbols and methods,
but they don't really assist in comprehension unless everyone agrees on their
meaning, and then only if everyone already understands the underlying
concepts, the axioms. For example, starting in 1910, Bertrand Russell and
Alfred North Whitehead published "Principia Mathematica" (a borrowed title):

<http://en.wikipedia.org/wiki/Principia_Mathematica>

But, notwithstanding their high intellectual level and the ambitions behind
the project, and notwithstanding the system of symbols used, Russell and
Whitehead missed a crucial, central point -- their plan to systematize
mathematics, place it on a solid logical foundation, and make it immune from
uncertainty and doubt was doomed from the start. Kurt Gödel demonstrated this
two decades later:

[http://en.wikipedia.org/wiki/G%C3%B6dels_incompleteness_theo...](http://en.wikipedia.org/wiki/G%C3%B6dels_incompleteness_theorems)

So much for the power of symbols. The consequences of the Incompleteness
theorems are often overstated, but they do falsify the idea that mathematics
is logically consistent, or that a set of symbols, however clear and
unambiguous, will prevent basic misunderstandings in even the most fertile
minds.

~~~
cwzwarich
> The consequences of the Incompleteness theorems are often overstated, but
> they do falsify the idea that mathematics is logically consistent, or that a
> set of symbols, however clear and unambiguous, will prevent basic
> misunderstandings in even the most fertile minds.

The incompleteness theorems don't falsify the idea that mathematics is
logically consistent; they do falsify the idea that any reasonable
mathematical theory could _prove_ its own logical consistency.
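The distinction can be pinned down with a minimal formal statement (my
paraphrase, not from the thread) of the second incompleteness theorem:

```latex
% For any consistent, recursively axiomatizable theory $T$ that
% interprets enough arithmetic (e.g.\ Peano arithmetic),
\[
  T \nvdash \mathrm{Con}(T),
\]
% where $\mathrm{Con}(T)$ is the arithmetized sentence asserting that
% $T$ proves no contradiction. The theorem blocks $T$ from certifying
% its own consistency; it does not assert that $T$ is inconsistent.
```

So consistency may well hold; it just cannot be established from inside the
theory itself.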

They also don't remove the possibility that there is some metamathematical
justification for an unambiguous interpretation of the concept of the natural
numbers or even of a set (see all of the work in large cardinals, culminating
in Hugh Woodin's recent work on extender models for supercompact cardinals).

~~~
lutusp
> The incompleteness theorems don't falsify the idea that mathematics is
> logically consistent; they do falsify the idea that any reasonable
> mathematical theory could prove its own logical consistency.

Yes, true, but I think that amounts to the same thing. Our inability to prove
the thesis casts in doubt our right to assert it at all. That's certainly true
for any other mathematical idea. No one was willing to say that Fermat's Last
Theorem, or the Four-Color Map Theorem, were proven, until they were.

Nevertheless, I shouldn't have said that the theorems "falsify" the logical
consistency of mathematics. They prevent the notion from being demonstrated,
but they don't invalidate its existence as a hypothesis.

> They also don't remove the possibility that there is some metamathematical
> justification for ...

Yes, but that's not a positive claim, it's the assertion that it can't be
ruled out. And such an effort might fall afoul of the "sufficiently complex"
criterion of Gödel's theorems, which brings us full circle.

------
jhales
I'm just finishing a math PhD. Talking to CS PhDs, my impression is that we do
pretty much the same thing, we just work in different domains.

~~~
jackpirate
I'm a CS PhD, and I agree. I always tell people I'm studying math because it
gives them much more of a flavor for what I'm actually doing everyday. When I
say computer science, people think I'm just programming something.

Also, when people ask me, "So what do you _actually_ do, studying math?" I
reply, "Sit and stare at the wall for several hours a day. Occasionally I
write something down."

~~~
army
It does depend a lot on the sub-field of CS.

For systems work you spend a lot of time implementing ideas and running
experiments. In a lot of cases it isn't necessarily that difficult to find an
interesting idea that's probably viable. The real work is spending a lot of
time writing code and working out all the annoying details to get to the point
where you have a decent proof of concept.

------
graycat
Yes, when as a math grad student I saw some of that, I developed some opinions
and ideas to look for more productive approaches.

Somewhere I read: "There is a famous recipe for rabbit stew that starts out,
'First catch a rabbit.'" I changed that to: "There is a recipe for how to do
applied mathematics: first get an application."

For more, commonly the main criterion for 'research' is that it be "new,
correct, and significant". And quite broadly in some powerful places, e.g., a
famous David report, there were complaints that a result in math that met the
first two but had no visible applications, inside or outside math, was likely
short on "significant". So, eventually it dawned on me that if you start with
an application (something significant in the real world, although inside math
would do also, but that tends to be more difficult and less highly valued
outside math) and get a good solution for that application, then you have
"significant" handled. Yes, these thoughts did occur to me, but they were only
secondary: my real interest was 'significant' outside of math and, in
particular, in my bank account.

So, on to "new, correct": In math, "correct" is comparatively easy -- just
work in the style of definitions, theorems, and proofs where it is fairly easy
to check math correctness.

That leaves the part "new": Surprise! If you start with a significant problem
from the real world, then likely there is no solid solution for that problem
on the shelves of the research libraries. Why? Because it's a really
complicated real world out there! So, find where in your real problem current
math doesn't really provide a solution, and then do some more math to get a
better solution for the real problem. Now maybe the math you just did that was
"new" is not as earth-shaking for pure math as, say, resolving the Riemann
hypothesis, or, now, P versus NP, but you have still covered "new, correct,
and significant" and, besides, may have something powerful and valuable for
the real problem outside math.

And the new math result you got for that one real problem has a nice property:
Given a new result in 'pure' math, the probability of an application in the
next 12 months is small. Given a new result in math that has an application,
the probability of another application in the next 12 months is nicely higher.
Moreover, that probability appears to be monotone increasing with the number
of known applications. Indeed, one skeptical way to evaluate such a result is
to look for two significant applications instead of just one!

There is more going for this approach: You are taking math directions and
'values' based heavily on what solves some problems outside math. Well,
where'd we get calculus? Sure, trying to make sense out of the elliptical
orbits of planets. And calculus is the main wellspring of the part of math
called 'analysis' that is so far, by a wide margin, the most applicable part
of math (I know, number theory can do good things for computer security;
maybe some people studying string theory in physics will want some topology;
and people in logic may value work in foundations). But it's tough not to
notice that calculus led to the study of heat flow and Fourier theory, which
did great things for signal processing.

But in part the OP is correct: When I went through measure theory, it seemed
fantastic stuff, especially since finally I had a better theory of integration
for applications. But in Rudin's 'Real and Complex Analysis' he discussed
regular Borel measures, and I never saw just why he cared about the 'regular'.
Maybe if I went back and thought about those few pages for a few days I'd see
it. Yet, if I did see it, then I'd write it down so I wouldn't have to work to
see it again, and I wish that Rudin had done that in his book. The precise
definitions, theorems, and proofs are crucial, but too often pure math is
written with too little explanation of the view from 50,000 feet.

My view is that the key to much more value from computing over the next few
decades will be some novel uses of math and its techniques of definitions,
theorems, and proofs. Why? Because for what to do in building our hardware and
system software, applications, and larger systems, we need more powerful tools
than intuitive heuristics or just programming what in principle we see how to
do manually.

So, right, in the short term, my approach to math is to do 'applied' math
where essentially we start with an application. Then in the longer term my
hope is that such math, as calculus did, will lead to new, grand, powerful
structures in pure math.

Yes, if the pure mathematicians can make good progress as isolated from
applications as in the OP, then good for them, but I concluded that, in
effect, good math needs some good applications from outside math.

Much of this 'philosophy' has come to pass whether deliberately or not: Quite
broadly it is accepted that the best research 'mathematizes' its field. So,
yes, the leading example is mathematical physics, but math is now just crucial
in mechanical, electrical, electronic, and civil engineering, statistics, and
operations research. Other fields that try to be more mathematical include
finance, economics, psychology, sociology, and, now, genetics. And of course
computer science is becoming increasingly mathematical.

As powerful as math has been for these other fields, pure math has essentially
been left suffering as the applications, grants, and students based on
applications of math go to fields outside math. So, if I were a chair or a
dean over a math department, then I would welcome serious attention on
important problems from outside math. I would keep fully high standards of
definitions, theorems, and proofs. But, for more: at first cut it would seem
that meeting the criteria "new, correct, and significant" alone would be
easier than also meeting "applicable, powerful, and valuable" outside of math.
My view is that this is false; being applicable is an easier route not just to
publishable papers but, more importantly, to real significance both inside and
outside math.

~~~
army
Mathematizing a field is a bit of a double-edged sword, though, and I think it
needs a great deal of skepticism and caution. Yes, if you do it right you can
get insights and prove things more rigorously. If you do it wrong, the
mathematics can easily become divorced from reality: it really does require
ongoing awareness of the connection between whatever mathematical abstractions
you've built and reality.

In computer science, assuming constant factors in algorithms don't matter is a
useful approximation that makes it much easier to do mathematical analysis.
But constant factors are very important in practice.
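As a toy illustration of that point (my sketch, not from the comment):
insertion sort is O(n^2) while merge sort is O(n log n), yet insertion sort's
tiny constants make it faster on short inputs, which is why production sorts
hybridize the two. The `cutoff` parameter here is an assumption of mine for
the sketch.

```python
def insertion_sort(a):
    """O(n^2) comparisons, but very low constant overhead per step."""
    a = list(a)
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

def merge_sort(a, cutoff=0):
    """O(n log n), but each level pays for slicing, recursion, and merging.
    With cutoff > 0, fall back to insertion sort on small subarrays --
    the same trick production hybrid sorts (e.g. Timsort) use."""
    if len(a) <= max(1, cutoff):
        return insertion_sort(a)
    mid = len(a) // 2
    left = merge_sort(a[:mid], cutoff)
    right = merge_sort(a[mid:], cutoff)
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]
```

Both return the same sorted result; timing them on lists of a few dozen
elements typically shows the asymptotically worse algorithm winning.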

Also, plenty of academic economics research has gone off the deep end with
more and more elaborate mathematical models based on assumptions that don't
jibe with reality.

~~~
BrianEatWorld
Excellent observation. In many ways, mathematization has killed many branches
of not just academic economics, but practical economics as well. Just look at
how many failed policies in the past were driven by the false assumption that
if we change X by so many units, such and such economic indicator will move by
Y.

------
zachrose
> "It is a tool that does suck up dust to make what you walk on in a home
> tidy."

This is essentially the idea behind Natural Semantic Metalanguage[1], Andrzej
Bogusławski's theory of cross-cultural semantics from the 70s.

[1] <http://en.wikipedia.org/wiki/Natural_semantic_metalanguage>

------
CurtMonash
Funny stuff, and no doubt often accurate, but it wasn't my experience. I
wanted to work in game theory, and there weren't any game theorists in the
Harvard mathematics department; there were, however, in the business school.
So I went over there, and Elon Kohlberg told me of an outstanding problem and
gave me a couple of papers explaining it. I worked until I had a proof of the
conjecture, and that was that, even though it had in fact been proved
elsewhere (the Mertens-Neyman Theorem) a couple of months before I finished.

The whole thing took about a year, after which I started my post-doc in public
policy. After that, I became a stock analyst. After that, I tried some
startups (not successfully). Then I became a self-employed industry analyst.

What could be more straightforward than that? :)

------
pmiller2
I used to say what I did every day consisted of counting, coloring, drawing
pictures, and talking to people. And mostly, it did.

------
not-gro-tsen
As a math professor, I was enthusiastic when I read this short piece in Math
Horizons, and promptly put it on my office door. The metaphor is a great hook,
is well carried out, and leaves the reader with a pretty much spot-on idea of
what math research is all about. I am delighted to see it reach a wider
audience on the web.

------
brador
I know a lot of guys who started math PhDs... I know no one who actually
finished one.

------
mrcactu5
Math research would be much more relevant if mathematicians talked to
engineers (and, hopefully, were listened to).

In college I was told "quantum mechanics = linear algebra"; then I heard
"electrical engineering = linear algebra".

They were right.
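A toy illustration of that equivalence (mine, not from the comment): a quantum
gate acting on a state and nodal analysis of a resistor circuit both reduce to
the same linear-algebra primitives, a matrix-vector product and solving a
linear system. The specific matrices below are assumptions chosen for the
sketch.

```python
import math

def matvec(M, v):
    """Multiply a 2x2 matrix by a 2-vector."""
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def solve2(A, b):
    """Solve a 2x2 linear system A x = b via Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - b[0] * A[1][0]) / det]

# Quantum mechanics: a gate is a unitary matrix acting on a state vector.
# The Hadamard gate sends |0> = (1, 0) to an equal superposition.
H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]
state = matvec(H, [1.0, 0.0])

# Electrical engineering: nodal analysis of a resistor network means
# solving G v = i for node voltages (G is the conductance matrix).
G = [[ 3.0, -1.0],
     [-1.0,  2.0]]
currents = [2.0, 0.0]
voltages = solve2(G, currents)
```

Different physics, identical math: both fields spend their time building and
manipulating exactly these objects.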

The PhD is a little different. It becomes so specialized... your papers are
geared towards only a handful of experts. Literally, you and a few other
people are the only ones in the world who understand it. I thought
mathematicians had a lot more common ground, but they don't.

------
lxwang
Yeah, if you're lucky enough to never have to teach, pass quals, take classes
of your own, etc. In reality, research takes up less than 30% of your average
day.

------
jakejake
I read the title wrong and became curious about what a grad student in math
does all day after they graduate. Is there a lot of work in that field?

~~~
djcapelis
In the US, the NSA hires a lot of math folks:
<https://www.nsa.gov/careers/career_fields/mathematics.shtml>

------
xarien
The blog post certainly illustrates the importance of a good copywriter...

"Tool to get rid of dust."

~~~
whatshisface
Tool not get rid of all dust, only dust tool can get to.

~~~
xarien
You're focusing on the specifics too much. This is actually very akin to a
startup: how can you convey what you do in a clear and concise sentence
without muddying it with details? My point in the original reply is that
conveying the idea of a vacuum cleaner, even with the harsh restrictions set,
can still be done without sounding like a robot or caveman.

~~~
whatshisface
There's a very big difference between a tool that gets rid of _all_ dust, and
a tool that only gets rid of a small subset of dust.

It may be a specific, but so is the fact that it gets rid of dust...

Today on HN: Math analogy spawns discussion on marketing copy and the
properties of vacuum cleaners.

------
jcfrei
... or rather, what does a grad student do in any quantitative field?

~~~
podperson
There are collaborative fields, e.g. most experimental fields.

------
baby
I go to my master's classes or slack off, mostly. There's this one guy in my
class who does read math books, though. But really, it doesn't change
anything.

~~~
ubercow13
Masters or PhD?

~~~
baby
master

~~~
antidaily
jedi master?

------
Evbn
Sounds like what math grad students did 50 years ago, before we solved all the
pure math even tenuously related to the 4 dimensional reality we live in.

~~~
rsofaer
You are just wrong. For example, algebraic topology is one of the most
abstract subjects I hear conversations about every week. It has applications
in physics, data analysis, and numerical computing.

Also, saying that reality is 4-dimensional is hoodwinking yourself. The
machine learning techniques we use everywhere depend on the math of
higher-dimensional spaces, and they wouldn't work if reality couldn't be
meaningfully viewed as many-dimensional.

~~~
Evbn
Please share a link to an example paper published by a pure math department.

~~~
xyzzyz
<http://arxiv.org/pdf/1303.2807.pdf>

I went to the arXiv, into the Algebraic Topology category. Any comments?

------
CallingIit
I thought they worked it out with a pencil, as the saying goes.

