
I no longer understand my PhD dissertation - refrigerator
https://medium.com/@fjmubeen/ai-no-longer-understand-my-phd-dissertation-and-what-this-means-for-mathematics-education-1d40708f61c#.ysddlmlsa
======
csense
TLDR: The author independently re-discovered what you may know as Old Code
Syndrome.

I think that's because mathematical papers place too much value on terseness
and abstraction over exposition and intuition.

This guy's basically in the position of a fairly new developer who's just been
asked to do a non-trivial update of his own code for the first time. All those
clever one-liners he put into his code made him feel smart and got the job
done at the time. But he's now beginning to realize that if he keeps doing
that, he's going to be cursed by his future self when he pulls up the code a
few months later (never mind five years!) and has zero memory of how it
actually works.

I'm not intending to disparage the author; I've been there, and if you've been
a software developer for a while you've likely been there too.

Any decent programmer with enough experience will tell you the fix is to add
some comments (more expository text than "it is obvious that..." or "the
reader will quickly see..."), write unit tests (concrete examples of abstract
concepts), and give variables and procedures descriptive names (the Wave
Decomposition Lemma instead of Lemma 4.16), etc.
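
In code terms, the fix might look something like this: a hypothetical "clever
one-liner" next to the version your future self will thank you for (the
function and names here are my own illustration, not from the article):

```python
# The "clever one-liner" version: correct, but opaque in six months.
dedupe = lambda xs: list(dict.fromkeys(xs))

# The same logic with a descriptive name, an expository comment,
# and a concrete example baked in as a unit test.
def unique_in_order(items):
    """Return the items with duplicates removed, first occurrence kept.

    Relies on dicts preserving insertion order (Python 3.7+), so
    dict.fromkeys acts as an order-preserving set.
    """
    return list(dict.fromkeys(items))

# The unit test is the "concrete example of an abstract concept":
assert unique_in_order([3, 1, 3, 2, 1]) == [3, 1, 2]
```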

~~~
wfo
It would be really nice if all it took to understand difficult mathematics
were some easy programming tricks.

The problem with looking at old code is you forget what is going on or what
the purpose of different components are. The problem with looking at old
mathematics is that it is genuinely very difficult to understand. You work
very hard to be an expert in a field and get to a level where you can read a
cutting-edge research paper. Then if you let that knowledge atrophy, you won't
be able to understand it without a lot of re-learning when you look at it
again.

Unfortunately cute tricks like comments and concrete examples won't save you
here (if concrete examples even exist -- oftentimes their constructions are so
convoluted that the abstract theorem is far easier to understand, and
oftentimes they are misleading: the theorem covers all cases, but all of the
easily understandable concrete examples are completely trivial and don't
require the theorem at all).

Programming has existed for, say, 50-100 years. We have recorded mathematical
history going back thousands of years, with contributions from many of the most
brilliant human beings to ever exist. Do you think perhaps there's a reason
why a simple and easy trick like commenting and renaming lemmas has been
discovered and solidified as standard practice in programming, but hasn't been
adopted in mathematics? Are mathematicians really just SO dumb?

The answer is those tricks just aren't good enough. Mathematicians do
exposition. They do plenty of explaining. Any textbook and even many research
papers spend a huge amount of time explaining what is going on as clearly as
is possible. As it turns out the explanation helps, but the material is just
plain hard.

~~~
hinkley
It's also the jargon.

I remember years ago telling a coworker that I was an ACM member and
subscribed to the SIGPLAN proceedings. He looked at me and with all sincerity
asked, "You can understand those things??"

To which I responded, "About half," but I totally sympathized with his
question. Both Math and CS need the reincarnation of Richard Feynman to come
and shake things up a bit. There's too much 'dazzle them with bullshit' going
on. It's no wonder that it takes so long for basic research to see
application in real scenarios. You people bury your research under layers of
obfuscation about half the time. Does it really help anybody to do that? Why
do you do that?

"If you can't explain it simply, you don't understand it well enough." is my
new favorite Albert Einstein quote.

~~~
claudius
> "If you can't explain it simply, you don't understand it well enough." is my
> new favorite Albert Einstein quote.

It’s also one of his more moronic quotes. Certainly on a very abstract,
dumbed-down level everything can be explained simply; sure, if you cannot give
your parents a rough idea what you’re doing, you might want to look into more
examples. But there are plenty of things which require a very extensive basis
to be understood thoroughly.

For example, it is very easy to summarise what a (mathematical) group is and
for anyone with a basic understanding of abstract maths, it will be
understandable. It’s also very simple to find some examples (integers with
addition, ℝ\{0} with multiplication etc.) which might be understandable by
laypeople, but you will either confuse the latter or only give examples and
not the actually important content.
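
For reference, the "simple summary" being alluded to really is short; the
standard definition of a group fits in a few lines, and the examples above
slot straight into it:

```latex
\textbf{Definition (Group).} A \emph{group} is a set $G$ together with a
binary operation $\cdot : G \times G \to G$ such that:
\begin{enumerate}
  \item (Associativity) $(a \cdot b) \cdot c = a \cdot (b \cdot c)$
        for all $a, b, c \in G$;
  \item (Identity) there exists $e \in G$ with
        $e \cdot a = a \cdot e = a$ for all $a \in G$;
  \item (Inverses) for each $a \in G$ there exists $a^{-1} \in G$
        with $a \cdot a^{-1} = a^{-1} \cdot a = e$.
\end{enumerate}
```

(ℤ, +) satisfies all three with e = 0 and a⁻¹ = -a; (ℝ\{0}, ×) with e = 1 and
a⁻¹ = 1/a, which is exactly why 0 must be excluded. The definition is easy;
it's everything built on top of it that isn't.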

Further, when you have “simply explained” what a group is, can you go on and
equally “simply explain” what the fundamental representation is and how
irreducible representations come about? You just need a certain level of
knowledge (e.g., linear algebra) already and not every paper can include a
full introduction into representation theory.

~~~
EvanPlaice
"You just need a certain level of knowledge (e.g., linear algebra) already and
not every paper can include a full introduction into representation theory."

Then why is paper still the overwhelmingly preferred medium?

Using hypertext it would be trivial to link to an external source describing
the specific concept used from linear algebra.

Not providing supporting links is only good for an audience that holds the
entirety of mathematical knowledge in their heads (i.e. mathematicians in
academia).

The rest of the world, including those who have since moved on like the author,
don't fit into that category.

Therefore, the work can only be accurately read and understood by the tiny
minority of specialists capable of decoding the intent of the work.

Limited reach = limited value to society.

Is the intent of a PhD really to advance the field of mathematics? Or is it
just another 'measuring stick' for individuals to prove to others how 'smart'
they are?

------
MicroBerto
My response is that for every 100 of these types of papers, one of them may
prove to be pivotal or inspirational in something truly groundbreaking and
functionally useful. For this reason, I am all for 100 different people
spending their time doing things like this, because eventually one of them
will make an impact that is greater than 100x the efforts of 100 normal men.

It's just a different kind of "brick in the wall" - only the diamonds in the
rough can turn out to be hugely important for something else in the future.

~~~
mfn
Great point. I think this applies to scientific research in general, which is
why the constant emphasis on only funding research with clear and immediate
economic payoff seems a bit shortsighted.

In reality, chances are that most research won't lead to anything significant
- but the 1% that does will have outsized impact that will more than pay for
the rest. And we don't know which 1% this will be in advance.

~~~
dnautics
Unfortunately, you can very easily tell that what is excused as basic science
is often just 'solutions looking for a problem'. Most "basic science" these
days is just professors looking to get paid with no accountability.

Although there have been a few Fouriers across history, the most compellingly
brilliant scientists had one foot in applied science while working on theory
in their spare time: Euler, Gauss, Faraday, Langmuir. Most of the best basic science
had goals "to explain something pressing" (which is not the same as "do
something because there will be no payoff") anyways, like Planck, Einstein,
Peter Mitchell.

There's something to be gained from cutting your teeth on problems with
results, instead of just lollygagging about in theoryland.

Feynman talked about this in an exercise where he described the motion of
spinning discs (a very "applied" problem), and remarked that those insights
later proved useful in a separate, unrelated problem.

------
closure
This does not surprise me in the least.

Math was always extremely easy for me growing up. Up through my first
differential equations class I found almost everything trivial to learn (the
one exception is that I always found proving things difficult).

I made the mistake of minoring in math and that slowly killed my enjoyment of
it. Once I got to differential geometry and advanced matrix theory it all just
became too abstract and I just wanted to get away from it.

For several years after college I would routinely pull my advanced calculus
text out and do problems "for fun". After a while I stopped doing that. Within
a few years of no longer being exposed to math, I found it all incredibly
foreign and challenging, to the point where I would say I have a bit of an
aversion/phobia to it.

I'm trying to reverse that now by tackling a topic I'm interested in but have
previously avoided due to the math-heavy nature of it - type theory.

Hopefully I can find the joy in math again through this.

I think my point is that you can lose competence in math very very quickly
through lack of constant exposure.

The same is probably true of programming but I hope to never end up in that
position.

~~~
eru
> (the one exception is that I always found proving things difficult)

Proofs are at the heart of math. Everything else is something different.

~~~
jacobolus
V.I. Arnold:

“Proofs are to mathematics what spelling (or even calligraphy) is to poetry.
Mathematical works do consist of proofs, just as poems do consist of
characters.”

~~~
phamilton
I've always loved the idea that the academic disciplines form a cycle.

Arts -> Social Sciences -> Life Sciences -> Physical Sciences -> Mathematics
-> Arts

Mathematics was/is a very creative process for me.

~~~
rekado
I very much like the related point Paul Lockhart makes in "A Mathematician's
Lament"[1]: that mathematics is an art form and ought to be taught like one.

[1]:
[http://mysite.science.uottawa.ca/mnewman/LockhartsLament.pdf](http://mysite.science.uottawa.ca/mnewman/LockhartsLament.pdf)

------
tokenadult
An interesting read. But I think the author should have explicitly written out
the point he is really making: you can't be too careful about making your
writing clear, even to yourself. I recall reading (I'd point to the book with
a link if I could remember in what book I read this) that mathematicians who
occasionally write expository articles on mathematics for the general public
are often told by their professional colleagues, fellow research
mathematicians, "Hey, I really liked your article [name of popular article]
and I got a lot out of reading it." The book claimed that if mathematicians
made a conscious effort to write understandably to members of the general
public, their mathematics research would have more influence on other research
mathematicians. That sounds like an important experiment to try for an early-
career mathematician.

More generally, in the very excellent book _The Sense of Style: The Thinking
Person's Guide to Writing in the 21st Century_,[1] author and researcher
Steven Pinker makes the point that the hardest thing for any writer to do is
to avoid the "curse of knowledge," assuming that readers know what you know as
they read your writing. It's HARD to write about something you know well
without skipping lots of steps in reasoning and details of the topic that are
unknown to most of your readers. This is one of the best reasons for any
writer to submit manuscripts to an editor (or a set of friends, as Paul Graham
does) before publishing.

And, yes, if you think what I wrote above is unclear, as I fear it is, please
let me know what's confusing about what I wrote. I'd be glad to hear your
suggestions of how to make my main point more clear. I'm trying to say that
anyone who writes anything has to put extra effort into making his point
clear.

[1]
[http://www.amazon.com/gp/product/B00INIYG74/](http://www.amazon.com/gp/product/B00INIYG74/)

~~~
Houshalter
This is a really common experience. See _Explainers Shoot High, Aim Low!_ :
[http://lesswrong.com/lw/kh/explainers_shoot_high_aim_low/](http://lesswrong.com/lw/kh/explainers_shoot_high_aim_low/)

>We miss the mark by several major grades of expertise. Aiming for outside
academics gets you an article that will be popular among specialists in your
field. Aiming at grade school (admittedly, naively so) will hit
undergraduates. This is not because your audience is more stupid than you
think, but because your words are far less helpful than you think. You're way
way overshooting the target. Aim several major gradations lower, and you may
hit your mark.

This phenomenon has a name, the Illusion of Transparency:
[http://lesswrong.com/lw/ke/illusion_of_transparency_why_no_o...](http://lesswrong.com/lw/ke/illusion_of_transparency_why_no_one_understands/)

Also see some of the other posts there on the issue, it's quite interesting.

------
GreaterFool
I've been working with Haskell* for a couple of years and it is quite often
that I work with code that I don't fully understand. I'll come across a terse
bit of code, then carefully take it apart to see what it does (by taking bits
and pieces out and giving them names instead of passing them around in
point-free style, and by adding type annotations). Once I see the whole
picture, I make my own change and then _carefully re-assemble the original
terse bit of code_. One could ask: wasn't the verbose version better? I'm
going to come down on the side of no. If I left this bit verbose, and other
bits too, then it would be hard to see the whole picture.

I think doing maths would be better if it were done interactively with
software. If equations were code then you could blow them up to look into the
fine details and then shrink them back to a terse form, while the software
keeps track of the transformations to make sure what you write is equivalent.
Maybe it's time to add a laptop to that paper pad?

* Not arguing anything language-specific here, except that Haskell makes use of a variety of notations that make the code shorter and more like maths. More so than most languages.
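
A minimal sketch of that "software keeps track of the transformations" idea.
This is nowhere near a real proof assistant; it's just the cheap sanity check
such a tool could run after each manual rewrite. The function names, example
expressions, and tolerance are all my own illustration:

```python
import random

def numerically_equivalent(f, g, trials=200, lo=-10.0, hi=10.0, tol=1e-9):
    """Spot-check that two unary functions agree on random inputs.

    Not a proof of equivalence -- just a quick sanity check that a
    hand-expanded form still computes the same thing as the terse one.
    """
    rng = random.Random(0)  # fixed seed so the check is repeatable
    return all(abs(f(x) - g(x)) <= tol
               for x in (rng.uniform(lo, hi) for _ in range(trials)))

# A terse form and its hand-expanded form of the same expression:
terse = lambda x: (x + 1) ** 2
expanded = lambda x: x * x + 2 * x + 1

assert numerically_equivalent(terse, expanded)
# A transformation gone wrong is caught immediately:
assert not numerically_equivalent(terse, lambda x: x * x + 2 * x - 1)
```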

~~~
forgotpwtomain
> If I left this verbose and other bits verbose then it would be hard to see
> the whole picture.

I really can't sympathize with this. How exactly is this helping anyone at
all, if you have to struggle with it yourself? Is it a bunch of dense
monolithic code? Decompose it into smaller methods / separate files. Set up
your text-editor/IDE in an effective way for quickly navigating across large
chunks of related code. Imho there is a world of difference between terseness
that helps readability and refactoring vs. terseness that makes you want to
bang your head against the monitor.

That said, I think the requirements, or say the qualities which define good
code and a good dissertation, are quite different. Code needs to be
maintained, refactored and altered throughout its lifetime; a dissertation
might only need to be built up and understood once, to prove a particular
result which can be re-used after that.

~~~
GreaterFool
The way I see it, I have limited capacity to build a mental picture of
whatever I am working on. When I'm looking at pages and pages of verbose and
repetitive code, it is quite hard. What does this bit do? Just checking the
error condition and re-throwing the error. What does that bit do? Same boring
stuff. Where is the meat?

When I'm looking at few lines of terse but complicated code, it is easier; it
is all meat and little fat. Just enough to make a good steak.

But this only works if I understand the mechanics of that terse code. So when
I work on something else for a while and I come back to some code for which I
no longer have an accurate mental picture in my brain I need to refresh my
memory.

I think mathematics is the same way. Imagine a full A4 page of equations. It
is really hard, at least for me, to hold in my brain a mental model of what it
all means. Sure, there's a ton of background that I need to be familiar with,
but it's not in my mental picture. Imagine this: suppose you wrote out rules
for how addition works, and then multiplication, and then built it all up so
you can do linear algebra. That's too much!

When I advocate terse code I don't mean it in a "here's my obfuscated C code"
sense. I mean that when I write "f . g . h" there might be more going on here
than meets the eye, but as long as you know the rules of what "." means in
this context, it is super easy to follow.
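
The same idea translated into Python, with a hypothetical `compose` helper
standing in for Haskell's `.` (the helper and the example functions are mine,
not the commenter's): once you know the one rule, `compose(f, g, h)(x)` reads
as `f(g(h(x)))`.

```python
def compose(*fns):
    """Right-to-left function composition: compose(f, g, h)(x) == f(g(h(x)))."""
    def composed(x):
        for fn in reversed(fns):  # apply h first, then g, then f
            x = fn(x)
        return x
    return composed

strip = str.strip
lower = str.lower
exclaim = lambda s: s + "!"

# Reads like the terse "exclaim . lower . strip":
shout = compose(exclaim, lower, strip)
assert shout("  HELLO  ") == "hello!"
```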

------
jholman
All of these arguments are arguments for replacing the mathematics curriculum
with video gaming. Games require generalized problem solving (arguably
better-generalized than math, and arguably better-transferable to other
domains).
Games build character: grit and tenacity, cautious optimism, etc blah blah
etc. And games are fun (for many more people than find math fun).

Guess math teachers should start learning to play League of Legends and
Pokemon.

Alternatively, I guess we need better reasons than those to teach a subject.

~~~
YeGoblynQueenne
Don't laugh it off. Personally, I learned to code by building Magic: the
Gathering decks and playing at tournaments at high school. I learned things
about resource management, reducing solutions, integrating disparate
components into a functional system and so on, and even a bit of probabilities
along the way. Not to mention what it did for my ability to concentrate and
analyse an adversarial situation.

If you think about it, a lot of education is really a kind of game and games
themselves are often educational, usually by accident.

Frex, I think a lot of people would recognise the value of teaching kids to
play chess in order to improve their concentration and problem-solving skills.
Well, why not more modern board games?

------
pmarreck
Math seems to have a very _ephemeral_ lifetime in the brain. I skipped a year
of college once, and when I returned I realized I had to basically abandon
_any_ major with a math requirement, because I had seemingly forgotten
_everything_.

I'm currently struggling with an online Machine Learning class (the Coursera
one... at the tender age of 43), and I can only take it (so far, at least...
just failed my first quiz... fortunately I can review and re-take) because I
was rather _obsessed_ with matrices, oh, about 28 years ago. "You mean I can
rotate things in an n-dimensional space without using trig?"
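
That "rotation without trig" trick can be sketched in a few lines: build the
rotation matrix from a normalised direction vector, so no sin/cos call ever
appears. The function names and 2D example are my own illustration:

```python
import math

def rotation_matrix_toward(a, b):
    """2x2 rotation matrix taking (1, 0) to the direction of (a, b).

    Built by normalising the target vector, so the entries come from a
    square root rather than any trig function.
    """
    r = math.hypot(a, b)
    c, s = a / r, b / r  # cosine and sine of the angle, computed trig-free
    return [[c, -s], [s, c]]

def apply(m, v):
    """Multiply a 2x2 matrix by a 2-vector."""
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

# Rotate (1, 0) by 45 degrees, specified as the direction (1, 1):
m = rotation_matrix_toward(1.0, 1.0)
x, y = apply(m, [1.0, 0.0])
assert abs(x - 2 ** 0.5 / 2) < 1e-12 and abs(y - 2 ** 0.5 / 2) < 1e-12
```

The same construction generalises to n dimensions (e.g. as Givens rotations
acting on one pair of coordinates at a time).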

~~~
baby
> because I had seemingly forgotten everything

I graduated in math; let me tell you, this is how all my fellow students felt
every semester in every class.

------
shalmanese
I'm truly shocked by the multiple people in the thread who claim that Math
knowledge can be completely erased through as little as a year of non-
practice.

For me, Math has always resembled riding a bike more than anything else. Sure,
the first few moments, the path is a bit overgrown and all the weeds need to
be cleared off but it was always significantly easier revisiting a topic than
understanding it for the first time.

For those who forget so quickly, I wonder if you felt like you truly
understood it in the first place?

~~~
marcosdumay
I've recently taken on a calculus problem (multidimensional real
optimization), I guess for the first time since I was a student.

While your description has some merit, there's a huge amount of trivia of the
form "I can do X, I just have to do Y first", "X has no known solution, try
something else", and "operation X is very useful, try it". That goes away, and
everything gets way harder.

~~~
markus2012
I've always felt that if I had done all of my calculus with Mathematica I
would have left college with an excellent grasp of how to use the higher-level
functions provided by Mathematica, which would have largely abstracted away
all of this.

Of course, the higher level functions might get covered in cobwebs - but I
suspect not the same way; I would have kept these higher level skills up to
date because:

- I recently went through a couple of books on Bayes and computer vision. I
would have used Mathematica - refreshing my memory.
- I sometimes need to do some stats / analysis - Refresh...
- I recently picked up a Student's Guide to Maxwell's Equations - Refresh...
- I need to help my children with Calculus...

If I had been using a high level tool my whole life I think I actually would
make use of calculus and other mathematics.

I'm curious if anyone else feels the same.

~~~
sdenton4
Yeah, you should always, always code what you're thinking about, IMHO. I once
turned in a take-home differential geometry final in the form of an IPython
notebook because I found computing curvature coefficients so tedious.
Debugging the thing to pass all my unit tests (not to mention solving the test
question) probably gave me the best understanding of anyone in the class.

~~~
vidarh
I had a similar experience - I learned symbolic differentiation largely
because I happened to pick up a book on Prolog about the time we started
covering it at school, and the book gave symbolic differentiation as an
example. Not having a Prolog interpreter, I rewrote the thing in Pascal, and
then wrote an expression parser for it. Debugging my Pascal translation really
hammered home the rules for me at the time (and subsequently writing an
expression parser for it was what got me interested in compilers).
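
The exercise boils down to a handful of rewrite rules. Here's a sketch in
Python rather than Prolog or Pascal (the tuple encoding of expressions is my
own, covering only sums and products):

```python
def d(expr, var):
    """Symbolic derivative over a tiny expression language:
    numbers, variable names, ('+', a, b), and ('*', a, b)."""
    if isinstance(expr, (int, float)):
        return 0                      # d/dx c = 0
    if isinstance(expr, str):
        return 1 if expr == var else 0  # d/dx x = 1, d/dx y = 0
    op, a, b = expr
    if op == '+':                     # sum rule
        return ('+', d(a, var), d(b, var))
    if op == '*':                     # product rule
        return ('+', ('*', d(a, var), b), ('*', a, d(b, var)))
    raise ValueError("unknown operator: %r" % op)

# d/dx (x * x) -> x' * x + x * x', left unsimplified:
assert d(('*', 'x', 'x'), 'x') == ('+', ('*', 1, 'x'), ('*', 'x', 1))
```

Writing the simplifier (collapsing `('*', 1, 'x')` to `'x'` and so on) is
where most of the debugging, and most of the learning, tends to happen.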

------
chipsy
It speaks to how finite we are around "knowledge." At the moment we reach
understanding, we experience a sophomoric feeling of confidence. But as it
fades farther and farther from our working memory, we become less fluent and
more hesitant. The emerging pattern becomes one of "I can understand these
interesting concepts, but it takes a lot of work and they don't last, so I
have to choose to understand practically and situationally." And then in the
end our bodies and personalities turn out to control our minds more than we
might want to believe, as we turn away from one problem and towards a
different one on some whim, never able to view the whole.

As I recognize this more in myself, I am more inclined to become a bit of a
librarian and develop better methods of personal note-taking and information
retrieval, so that I lose less each time my mind flutters. At the moment
that's turned into a fascination with mind maps - every time I need to
critically-think through a problem I start mapping it. In the future I might
look into ways of searching through those maps.

~~~
timroy
You might also check out spaced repetition software. I've started putting
anything I want to remember for the long haul into org-drill, an SRS package
for Emacs, though Anki and Mnemosyne are better known. I even schedule
articles that I want to return to.
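
For the curious, the scheduling rule underneath all three tools descends from
SuperMemo's SM-2 algorithm. Roughly, it looks like this (my simplified sketch,
not org-drill's or Anki's exact code):

```python
def sm2_update(ef, interval, reps, quality):
    """One review step of the classic SM-2 spaced-repetition algorithm.

    ef: "easiness factor" of the card; interval: days until next review;
    reps: consecutive successful reviews; quality: 0-5 self-graded recall.
    """
    if quality < 3:                  # failed recall: start the card over
        return ef, 1, 0
    # Easy answers nudge ef up, hard ones push it down (floor of 1.3).
    ef = max(1.3, ef + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    reps += 1
    if reps == 1:
        interval = 1                 # first success: see it again tomorrow
    elif reps == 2:
        interval = 6                 # second success: six days
    else:
        interval = round(interval * ef)  # then grow geometrically
    return ef, interval, reps

ef, interval, reps = 2.5, 0, 0       # a brand-new card
for q in (5, 5, 4):                  # three successful reviews
    ef, interval, reps = sm2_update(ef, interval, reps, q)
# intervals grow: 1 day, then 6, then roughly 6 * ef
```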

------
jimbokun
From when I was taking machine learning courses and reading machine learning
textbooks a few years ago, I have fond recollections of the derivations in
Tom Mitchell's textbook.

[http://www.cs.cmu.edu/~tom/mlbook.html](http://www.cs.cmu.edu/~tom/mlbook.html)

Where other textbooks tended to jump two or three steps ahead with a comment
about the steps being "obvious" or "trivial", Mitchell would just include each
little step.

Yes, you could argue it was my responsibility to remember all of my calculus
and linear algebra. But it is kind to the reader to spell out the little
steps, for those of us who maybe forgot some of our calculus tricks, or maybe
don't even have all of the expected prerequisites but are trying to press on
anyway. Or who actually know how to perform the steps but have to stop and
puzzle through which particular combination of steps is being described as
"obvious" in this particular instance.

I just remember how nice it was to have those extra steps spelled out, and how
much more pleasant it made reading Tom's book.

So thanks, Dr. Mitchell!

------
ikeboy
> I have attempted to deliver [these lectures] in a spirit that should be
> recommended to all students embarking on the writing of their PhD theses:
> imagine that you are explaining your ideas to your former smart, but
> ignorant, self, at the beginning of your studies!

-Richard Feynman

~~~
j2kun
After spending the last five years trying to explain math to a general
technical audience, I'd agree with this, but with one minor change:

s/smart/dumb

:)

~~~
repsilat
I find it gets harder and harder to do as you progressively become more
affected by the topic. I like to think I'm pretty good at explaining concepts
in physics to lay audiences, but the more real physics I do the more I think
like a physicist, and the less I can see what an explanation looks like to the
layman.

Quantum mechanics is the worst. I dislike a lot of the "popular science"
language and analogues used to describe it, but the real (academic)
pedagogical material is completely inappropriate for regular people. I'm
worried that I'll be an inscrutable physicist before I grok it well enough to
explain it to a high schooler, though.

------
nanis
First, I am not sure _Functional Analysis_ is as obscure as some other areas.
But, second, this just shows, once again, that one ought never to use
"clearly," "obviously," etc. in proofs.

It is the same principle as writing programs so they are easier for the next
programmer to read. That person may be you.

~~~
vidarh
I don't read maths papers, but in CS papers too, words like "clearly" and
"obviously" serve as red flags to me that scream "hand-waving or big gaps
coming up".

Often these are understandable short-cuts, but often it also turns out that
the author has left out very substantial chunks of knowledge, or sometimes
clearly doesn't understand why they got the results they did.

In CS papers there's an additional red flag: maths. Outside of a few
maths-heavy areas of CS where it is justified, if a CS paper is full of
equations, it's a good sign they'll have glossed over a lot of essential
information, such as parameters that often turn out to be essential for
replicating their results. Not always, but often enough for me to be wary.

I'm guessing it is because in the instances that include pseudo-code or
working code, it is instantly obvious to both the author and the reviewers
when something is missing. When it's obscured in equations, it takes more
effort to identify the same flaw: so many steps are legitimately left out by
convention that it's non-trivial for someone not steeped in the notation to
determine which bits should be defined and which are genuinely unnecessary.
I'm sure most of the time it's not intentional, but taking that shortcut
seems to make it a lot easier to forget which additional information is
actually necessary. And the irony is that I've seen plenty of examples where
the equations have taken up just as much space as pseudo-code or even a
working-but-naive implementation would have.

~~~
reikonomusha
Wait, let me get this straight. It's a red flag when a computer science paper
has math in it? Computer science (in the asymptotic limit) _is_ math. And the
papers ideally should read like math papers. Otherwise it's not CS.

------
jedberg
> Beyond scarcely stretching the boundaries of obscure mathematical knowledge,
> what tangible difference has a PhD made to my life?

The same thing a bachelors degree does for everyone else. You've proven that
you can start, stick with, and complete a task that takes multiple years and a
complicated set of steps.

~~~
tluyben2
Which feels like a 'life is suffering' weirdness. I did it and never needed
my diplomas for anything. Maybe I benefited somehow, but I was a coder and
entrepreneur before that; I had a software company before uni. It was over 20
years ago and in hindsight I find it pretty pointless and a waste of time.
Maybe I became a better problem solver on some level, but unless you are going
into research or are not a self-starter I would not recommend it.

~~~
vidarh
I went into taking my MSc explicitly to be able to add the letters to my
resume - I started not long after the dot-com bubble burst, as a precaution. I
don't regret it; I learned some interesting things during my thesis (the rest
of it was regurgitating stuff I already knew; but it was distance learning so
I didn't have to put in much effort), but similar experience - I don't really
use it much. If I'd done it full time, it would have been a tremendous waste
of time, though.

But a lot of the reason for this experience for me at least was that I started
uni after having spent about 15 years learning to program already, and by the
time I picked up again and did my masters, I'd had another 10 years of
commercial software development experience.

These things are not really geared for people like us who came to them with a
lot of pre-existing knowledge, but at people like my classmates the first time
around, who had hardly touched a computer before, and who _did_ need a lot of
hard work to come out with a good understanding of the problems.

As a hiring manager this is why I rarely care whether someone has a degree or
not if they can demonstrate experience. And on the other side of the table, I
only took that degree because in the UK there are still sufficiently many
employers that have an obsession with degrees regardless of experience...

------
admirethemeyer
I had several exceptional Math teachers throughout my education, but the piece
of advice that stuck with me the most is:

"If you're not sure what you want to do with your life, study Math. Why?
Because Math teaches you how to think."

The skills I learned studying Mathematics have been invaluable, the Math that
I currently am able to recall is abysmal.

The author did a great job calling this out succinctly: Mathematics is an
excellent proxy for problem-solving.
------
hnarayanan
As a PhD in applied math, I must say I concur wholeheartedly with the author.
The true value of a PhD in a quantitative field is less about specific domain
knowledge, and more about the set of general problem-solving skills you pick
up.

~~~
forrestthewoods
This raises an interesting question. Is the PhD process the best way to
acquire those general problem solving skills? Or is there a better way to
learn them?

~~~
Silhouette
From my experience of both academia and life in general, I'd say traditional
tertiary education -- meaning lectures and research work from undergraduate
upwards in a university setting -- is actually quite a poor way to learn
anything.

I'd say the ideal way to develop knowledge, understanding and skill in almost
any field is a combination of systematic practice and receiving personalised
guidance and training from someone who thoroughly understands the field itself
to at least the level you are trying to reach _and_ is able to share that
understanding effectively based on your current level of understanding.

Sadly, this is usually hopelessly unrealistic, because there are nowhere near
enough suitable trainers around to give everyone close to 1:1 training in that
format. But the further we drift from it, the more impersonal and generic
training becomes, the more isolated individual practice becomes, the less
immediate and detailed feedback becomes, the less effective the training
regime as a whole will be.

Given that neither undergraduate-style mass lectures nor postgraduate-style
research are particularly efficient at conveying useful information, guiding
practice, or promoting rapid and actionable feedback, I personally don't rate
either particularly highly on an idealised scale. Perhaps a more practically
useful question would be whether there are ways to improve university-level
training that are realistic given the time and money constraints and, beyond a
certain point, the lack of many if any people who actually are more
knowledgeable or skillful in increasingly specialised fields than the research
student who is dedicated to exploring them.

~~~
vidarh
Both the CS and maths undergrad courses at my uni were structured with
voluntary lectures coupled with compulsory group study with 10-15 students led
by a post-grad TA for many of the larger courses. I skipped most of the
lectures, and focused on the group study, and it was far more rewarding.

~~~
Silhouette
That sounds like an all-too-familiar story.

On the subject of lectures, it does slightly surprise me that in 2016 we still
have researchers with neither much interest in teaching nor the presentation
skills to do it well being asked/compelled to deliver undergraduate lecture
courses at individual universities. You'd think with the easy access to video
presentations and supporting materials now offered by the Internet,
universities might have collaborated by now to build the personal elements of
tuition around video lectures given by academics who do have the interest and
are gifted presenters.

On the other hand, I suppose that would expose how little personal attention
many students actually receive in return for the fees and debts they take on,
and universities don't want to encourage potential students to question how
much real value they provide. Surely it would be more reliable and efficient
as an education method if they focused their efforts on small group tuition
and individual guidance, though.

------
option_greek
Math is the shadow universe of physics. Most theorems may not look like they
are useful for anything real-world till someone is able to peg all the
variables to the real world. And then, as if by magic, we realize we already
know how the real world behaves. Till someone does this pegging, the theorems
sit idle waiting for problems to solve. I believe this is actually a good
thing. We are letting people find solutions before someone finds problems to
use them for.

~~~
daniel-levin
I disagree. A mathematical result doesn't have to have direct application to
the real world for it to be useful. Drawing analogies between the real world
and mathematical concepts is very powerful. I have a good example:

Number theorists (amongst others) are seemingly obsessed with bounding things.
There are entire books written about obtaining and then refining bounds -
which appear to be nothing more than inequalities. There is great real-world
value to be derived from seeing 'inequalities' as tools to leverage. Brian
Kernighan once commented that controlling software complexity is the essence
of programming [0]. I believe similar thinking applies to other aspects of
software engineering, and product and business development. If you can take a
hard problem, and _bound its complexity_ , then you can say "the problem is no
more complex than this". This is very useful. The chief value proposition of
many SaaS businesses is the trivialisation of the upper bounds of complexity
of hard problems. For instance, for many developers, Heroku makes the
complexity of deployment very low.

[0]
[http://quotes.cat-v.org/programming/](http://quotes.cat-v.org/programming/)

------
ck2
Reminds me of this (well without the forgetting part, but I do that with old
code all the time)

[http://matt.might.net/articles/phd-school-in-pictures/](http://matt.might.net/articles/phd-school-in-pictures/)

------
KKKKkkkk1
If you find yourself saying that you gained nothing from your education other
than soft skills, maybe you should have passed over the functional analysis
part and put the effort directly into learning said soft skills. I'm in the
same boat, and I can see how it can be hard to admit this.

------
analog31
My PhD is in physics, from 20+ years ago, and I would not be able to explain
or defend it today without studying it for a while. I've even forgotten the
language (Pascal) that I wrote my experimental control and analysis code in.

My experiment formed the basis of a fairly productive new research program for
my thesis advisor, so at least it lived on in somebody's brain, but not in
mine. ;-)

------
sbardle
A PhD isn't so much a test of intelligence as it is of perseverance.

~~~
analog31
I think it's hard to generalize about PhDs because of the huge diversity of
experiences. A PhD student should have a lot of freedom to define for
themselves what they get out of their education. They are responsible adults
and if they wanted a "marketable skill," they would have finished with a BS or
MS. Predictably, the flexibility of PhD education doesn't always happen, and
even when it does, it's both a blessing and a curse.

------
z3t4
I've also forgotten basically all high level math from school. And have to re-
learn when the occasion comes to use some of it. But one thing that occurred
to me is that in school I just learned how to make the calculations, so I
never got a deep understanding on how things worked anyway. And that's fine.

~~~
qb45
That's fine if you don't mind the time that could have been spent on something
you actually cared about.

Sometimes it happened that school attempted to teach me something I was
interested in and I ended up understanding it. At other times, however, it all
went to /dev/null.

~~~
z3t4
It's good that we forget stuff, or everyday tasks would be like querying from
a fully saturated disk. But we might have a tiny fraction of it in CPU cache,
which will let us make heuristic decisions.

------
kazinator
If someone doesn't understand his own work five years later to this extent,
that is a strong indication that the work is actually garbage, and the prior
understanding five years ago was only a delusion brought on by the
circumstances: the late nights, the pressure, and so on.

Perhaps it doesn't make sense today because it never did, and the self-
deception has long worn off, not because the author has gone daft.

Several weeks ago, on the last work day before going on vacation, I submitted
fixes for nine issues I found in one USB host controller driver. The last time
I looked at the code was more than a year ago. I had refactored it and really
improved its quality. Looking at some of the code now, I couldn't understand
it that well. But that's because it wasn't as good as I thought it was. I was
still relying on the fictitious story of how I thought certain aspects of the
code worked really well thanks to me, and _it wasn't meshing with the reality
emanating from freshly reading it with more critical eyes_. And, of course, I
was also confronted by a reproducible crash. As I'm reading the code, I'm
forced to throw away the false delusions and replace them with reality. This
is because I'm smarter and fresher today, not because I've forgotten things
and gotten dumber! It's taking effort because something is actually being
_done_.

Perhaps a similar problem is here: he's reading the paper with more critical
eyes and seeing aspects that don't match the fake memory of how great that
paper was, which was formed by clouded judgment at the time of writing. Maybe
that obscure notation that he can't understand is actually incorrect garbage.
His brain is reeling because it's actually digging into the material and
trying to do proper _work_, perhaps for the first time.

If you can show that your five year old work is incorrect garbage, that
suggests you're actually superior today to your former self from five years
ago. So that could be the thing to do. Don't read the paper assuming that it's
right, and you've gone daft. Catch where you went wrong.

By the way, I never have this problem with good code. I can go back a decade
and everything is wonderful. Let's just say there is a suspicious smell if you
can't decipher your old work.

Good work is clear, and based on a correct understanding which matches that
work. There is a durable, robust relationship between the latent memory of
that work and the actual work, making it easy to jog your memory.

------
jeena
It would have been cool if the original was linked here
[http://fjmubeen.com/2016/02/14/202/](http://fjmubeen.com/2016/02/14/202/) and
not the medium repost. But still interesting.

------
danieltillett
This post inspired me to re-read my thesis (well browse through it). Although
it has been 16 years since I last looked at it, I didn’t have any problem
understanding it and I didn’t even really cringe reading it. I guess it
depends on your field how bad this effect is.

~~~
jackmaney
I was about to write much the same thing... Last year, I found a hard copy of
my dissertation (which I defended in 2004). I skimmed through it and had
absolutely no problem whatsoever understanding what I had written 11 years
prior. And I've been out of academia since 2008.

------
pervycreeper
The dissemination of knowledge is at least as important as its discovery.
Accessibility (i.e. clarity of exposition, availability to the public, etc.)
needs to become a cardinal virtue in research.

------
fiatjaf
The author almost reached the much more important conclusion of the experience
he describes. He shouldn't conclude the article by asking "what is the purpose
of studying maths?" and then giving three stupid answers.

He should have asked: is this actually "knowledge" as they say academia brings
to society? Is the money researchers earn being well spent? Did I actually
deserve to be remunerated by this piece of work no one understands -- and, in
fact, no one has read except for maybe three people?

~~~
argonaut
Considering that PhDs get paid next to nothing, I actually think they're
getting paid pretty adequately.

~~~
fiatjaf
Nothing would be better, considering this article.

~~~
argonaut
Except that some % of PhDs go on to be professors (who do a real service), and
every once in a while you get a PhD student whose work probably contributes
hundreds of millions of value to society. And every couple years or so, you
have a PhD student who contributes billions to society in value.

Not to mention PhDs usually have to TA (teach students), which accounts for
some of their pay.

~~~
fiatjaf
There's no example of a PhD work that contributed hundreds of millions of
value to society.

(I could say there's no way to measure "value to society" -- in fact this
concept means nothing -- but I agree to settle on "an enormous amount of
productive capital to someone, not exactly society").

~~~
argonaut
A one minute google search reveals:

Radioactivity.
[http://www.nobelprize.org/nobel_prizes/themes/physics/curie/](http://www.nobelprize.org/nobel_prizes/themes/physics/curie/)

Nash equilibrium. [http://rbsc.princeton.edu/sites/default/files/Non-
Cooperativ...](http://rbsc.princeton.edu/sites/default/files/Non-
Cooperative_Games_Nash.pdf)

Emergence of digital circuit design.
[https://en.wikipedia.org/wiki/A_Symbolic_Analysis_of_Relay_a...](https://en.wikipedia.org/wiki/A_Symbolic_Analysis_of_Relay_and_Switching_Circuits)

A pretty ridiculous thing to claim.

------
amelius
This is what happens to most programmers as well, when they try to read code
that they wrote a while ago.

------
bambax
> _what is the purpose of studying maths? Mathematics is an excellent proxy
> for problem-solving / Mathematics embeds character in students / Mathematics
> is fun_

Those may be reasons to study maths (although, studying anything seriously
probably yields comparable benefits) but doing a PhD and writing a thesis is
not only about yourself: it's supposed to _advance the field_. It's something
you do for the general community.

------
aldanor
As someone who left academia after a PhD in math (been working as a quant in
HFT for the last few years, which mostly involves coding in one form or
another), I can totally relate! Back then, all those stochastic integrals and
measures made much more sense. However, it doesn't seem totally alien -- I'm
pretty sure I could get back to hacking math if required, but it would take at
least several months to get into the flow.

~~~
thearn4
I also left academia after a math PhD. Though in my case, my area was
numerical linear algebra, and I entered the aerospace engineering field. A lot
of what I was working on was immediately relevant, but I would say only a
fraction of what became my dissertation would be counted among it.

------
mirimir
> Mathematics is an excellent proxy for problem-solving

In my experience, earning a PhD in [redacted] was excellent training in
problem solving. And in developing working expertise in new areas. I suspect
that the choice of field is indeed irrelevant.

> Mathematics embeds character in students

I'd say that _actually finishing_ a PhD does that.

> Mathematics is fun

Whatever you pick for your dissertation topic had better be fun ;)

------
smonff
I experience the same problem with "trivial regular expressions" in two-year-
old Perl programs.

------
wodenokoto
Let us give some credit to the professors who read the dissertation and
understood what was going on.

------
musesum
I had a similar problem with a crypto presentation. Basically, I was angling
for a free ticket to an expensive conference. The trick was to propose
something that is plausible, but too arcane for practical use. The consolation
was a free ticket. Problem was that they accepted the talk. Damn!

So, I started to read crypto journals. Basically anything co-authored by
Chaum, Micali, Goldreich, Wigderson. After a few weeks, I started to get the
hang of it. Sort of like learning a new language. So, I gave the presentation
and then forgot about it.

A few years later, I decided to show my powerpoint to someone and describe the
process. WTF? How did this lead to that? Didn't understand half of it. Was
really embarrassing.

------
peterbraden
Could part of it just be because mathematical notation is just so bad? It's
more of a shorthand than an actual tool for conveying meaning. So much context
goes into establishing what a notated equation means - and that context is now
gone.

~~~
stuxnet79
I always complain about this. While I'd say I have an affinity for math, it
was much more difficult to get caught up with my peers in university than in,
say, programming, because there was just so much context and assumed knowledge
required to make sense of what the professor was stating that it took me
forever to understand even the most trivial things. To become mathematically
mature, constant exposure is imperative. As to how to improve mathematical
notation so that it is not so subjective or context-dependent, I haven't a
clue -- it seems to me like almost everyone is alright with the status quo.
Not a good thing in my opinion.

------
brudgers
_The Sheetrock was the last step. I myself would do the exterior and interior
painting. I told Ted I wanted to do at least that much, or he would have done
that, too. When he himself had finished, and he had taken all the scraps I
didn't want for kindling to the dump, he had me stand next to him outside and
look at my new ell from thirty feet away._

_And then he asked it: "How the hell did I_ do _that?"_ --Kurt Vonnegut,
_Timequake_

I find the experience common when I look back on things I write or design or
build. As Bill Gates said, “Most people overestimate what they can do in one
year and underestimate what they can do in ten years.”

------
znpy
It seems to me that most commenters are ignoring the fact that the author is a
guy who basically left high-level mathematics after completing his PhD.

So basically he went off to do other stuff not related to functional analysis,
and his functional analysis got rusty.

It seems quite reasonable to me. Call it old code syndrome, call it "my math
got rusty", it seems quite normal to me.

Also: according to [http://fjmubeen.com/about/](http://fjmubeen.com/about/),
the author got his PhD in 2007. It's 2016.

Almost ten years. What... What are we talking about?

~~~
refrigerator
He actually finished the PhD in 2011, but started in 2007. Not sure whether
that's a significant enough difference to change the point that you're making
though.

~~~
znpy
You're right.

I still think my point is valid though.

I guess he still has a strong mathematical background, a solid problem solving
attitude, but details of the topic of his doctoral thesis are missing.

It's been many years, it isn't strange at all.

------
egonschiele
This happens to me all the time. I have a very popular illustrated post on
Monads titled "Functors, Applicatives, and Monads in Pictures"[1]. When I
wrote it I thought it was the best monad guide ever. Now, reading back, I can
see that some parts are confusing. I still see a lot of people liking it, but
three years later I wish some parts of it were better.

[1]
[http://adit.io/posts/2013-04-17-functors,_applicatives,_and_...](http://adit.io/posts/2013-04-17-functors,_applicatives,_and_monads_in_pictures.html)

~~~
throweway
It's quite good, actually. The problem with monads is that no tutorial will
make them less confusing. But writing code that does useful stuff with them
builds the intuition.

I now question the need to fully understand up front -- sometimes, why not use
libraries as per the examples and cargo-cult a little to build up intuition,
then later get a more formal understanding?
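As a toy sketch of that intuition-by-code idea (my own hypothetical example in
Python rather than Haskell; `Maybe` and `safe_div` here are illustrative names,
not from the linked guide), chaining operations that short-circuit on a missing
value shows what `map` and `bind` actually buy you:

```python
class Maybe:
    """A toy Maybe-style container: computations chain only while
    a value is present, short-circuiting on None."""

    def __init__(self, value):
        self.value = value

    def map(self, f):
        # Functor-style map: apply f to the wrapped value, if any
        return Maybe(None) if self.value is None else Maybe(f(self.value))

    def bind(self, f):
        # Monad-style bind: f itself returns a Maybe
        return Maybe(None) if self.value is None else f(self.value)


def safe_div(a, b):
    """Division that returns an empty Maybe instead of raising."""
    return Maybe(None) if b == 0 else Maybe(a / b)


print(Maybe(10).bind(lambda x: safe_div(x, 2)).map(lambda x: x + 1).value)  # 6.0
print(Maybe(10).bind(lambda x: safe_div(x, 0)).map(lambda x: x + 1).value)  # None
```

Nothing in the pipeline has to check for the failure case explicitly; that's
the bit of intuition tutorials struggle to convey in prose.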

------
lkrubner
He writes:

"Mathematics is an excellent proxy for problem-solving... Mathematics, by its
concise and logical nature, lends itself to problem-solving (it is not unique
in this regard)."

But how can we be sure this is true if he is unable to read what he wrote?

Maybe I'm thinking about the way Clojure programmers tend to use the word
"concise" \-- concise is meaningful only if it contributes to readability.
Otherwise the more accurate description is "terse". And terse does not lend
itself to problem solving.

------
calibraxis
Reminds me to get around to Piper Harron's thesis. Which was made to be
_seriously_ readable.

Math seems to have a culture of systematically bad writing (as Bill Thurston
discussed).

------
chx
> Mathematics is an excellent proxy for problem-solving

I went to a special, maths focused high school class and this rings true on
that lower level too. I am a reasonably successful programmer/architect today
and I have -- repeatedly -- attributed my successful attitude toward solving
my problems to the 1200 or so problems we solved during those four years. Our
maths education was literally nothing else but solving one problem after the
other.

------
bbcbasic
It is not like riding a bike!

I am reading up on stats after 15 years away from the subject and even the
very basic stuff I have forgotten. Although the 'muscle memory' is there, so
perhaps it is a bit easier than when it was totally new.

What I also find is I am more interested in the application/intuition behind
something now rather than the mechanics of the formulas. Maybe that has to do
with a different aim, i.e. usefulness vs. passing an exam.

------
thaw13579
To make an admittedly bad metaphor, it's likely a lot of that knowledge has
been moved from main memory to cold storage, and it would take some time to
bring it back. It certainly makes the case for why we write things down!
Although the part about having to dig for the main result makes me think the
abstract could be improved...

------
JulianMorrison
Math is twiddling with formal systems and discovering how they behave. Some
of it has uses, some doesn't, and some of what presently doesn't will in the
fullness of time result in further islands of usefulness, as yet not even
imagined. But ultimately, it needs no more _justification_ than orchestral
music.

------
fiatjaf
How many people have read this and understand it?

How much money was spent on the production of this dissertation?

------
lin0
This wrapper[1] helps me edit LaTeX fragment in Vim/Emacs...

[1]:
[https://github.com/linktohack/lyxit](https://github.com/linktohack/lyxit)

------
late2part
That's okay. I never understood it, at least you did for a while.

------
ISL
It takes about three months before a paper we ship becomes the best reference
we have on the subject, exceeding our own recollection.

Our brains can only hold so much, especially tiny details.

------
riprowan
We are clearly approaching the point where unassisted human intelligence is
becoming insufficient to continue to master even specific domain expertise.

------
juped
Reading mathematical papers is an acquired skill that needs practice. Writing
them also is, but the skills exist somewhat independently.

------
Houshalter
I took a course on formal logic. I put all of the questions and exercises into
Anki, a spaced repetition program. This ensures I will always remember it and
get it into my head at an intuitive level.

Basically it's like flash cards whose review intervals grow exponentially. The
first review is in one day, the second in two days, then four days, and so on.
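That doubling schedule can be sketched in a few lines (a toy illustration
only; Anki's real scheduler also adapts intervals to per-card performance, and
`review_schedule` is a name I've made up for the sketch):

```python
def review_schedule(first_interval_days=1, reviews=6):
    """Day offsets for each review under a simple doubling schedule:
    1 day after learning, then 2 more days, then 4, and so on."""
    days, offset, interval = [], 0, first_interval_days
    for _ in range(reviews):
        offset += interval       # next review lands this far out
        days.append(offset)
        interval *= 2            # double the gap after each success
    return days

print(review_schedule())  # [1, 3, 7, 15, 31, 63]
```

Six reviews cover roughly two months, which is why a small daily review load
can keep a large deck alive.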

------
omginternets
Me neither. My understanding is that this is the rule, not the exception.

------
CurtMonash
Much the same is true of me, and of my best friend in grad school.

------
bitL
So, you basically ended up being better at coming up with new algorithms?

------
nowprovision
This reminds me of revisiting Bash after a period of absence

------
rackforms
One may be tempted to call this Hodge Theaters syndrome.

