
Why There Is No Hitchhiker’s Guide to Mathematics for Programmers - iamwil
http://jeremykun.com/2013/02/08/why-there-is-no-hitchhikers-guide-to-mathematics-for-programmers/
======
atemerev
All right, so this article can probably be condensed to these two points:

1\. "Mathematics is hard and inconsistent, but so is programming".

Well, yes, but this is exactly the problem! I pity the poor souls who started
their journey into programming with, say, C++ — the champion of consistency
and elegance. But there are other ways and other languages. And, with some
effort, it _is_ possible to provide a gentle introduction into the programming
world. The same should apply to mathematics.

2\. "You can't learn mathematics on your own, but it's the same with
programming".

And here I deeply disagree. Programming as an activity is _the_ perfect
learning environment. Find some coding sample, compile and run it (or paste it
into a REPL), see if it gets you the same result. Try changing some variables
in the code, see how the behaviour changes. Can you run it without this
function? What does this method do? (A quick look at the documentation reveals
everything, including additional arguments to try.) You can tinker and hack
and play and learn with every language and environment out there; it's like a
perfect experimental setup where experiments are cheap and everything is
perfectly reproducible, at least on your own machine. ;)

With math, there are no docs, no playground, no explanations, no nothing.
Either you are part of the mathematical community and can hope for
explanations and knowledge transfer, or you are not. In programmers' parlance,
it's like being part of a huge legacy project with long compile times and
cryptic documentation, where the only way you can learn is to ask your senior
colleagues questions like "what does this class do?" and get answers like "oh,
we are not doing _that_ anymore, this is an artefact from like 15 years ago,
still widely referred to in the docs, though. Here is the right way instead".
Sound familiar?

This is why learning math is way harder than it should be.

~~~
choosername
> With math, there are no docs, no playground, no explanations, no nothing.
> Either you are part of the mathematical community and can hope for
> explanations and knowledge transfer, or you are not

What's the difference with CS here? Where did you get that coding sample you
mentioned? The playground is your mind, pebbles, or pen and paper; maths is a
projection of the real world.

Mathematics is not inconsistent. Isn't that the premise of rigor? Math has
mathematical consistency, and that's not the same as literal consistency, but
maybe that's a problem of semiotics and language. The language and the
literature are inconsistent.

~~~
atemerev
> where did you get that coding sample you mentioned

I learned programming by reading books and trying the samples from them, then,
step by step, beginning to understand how the parts play together and why some
things are faster than others. I could _feel_ loops and recursion, tinker
with them, before trying to understand them theoretically.

I can't do that when learning math (yet; Wolfram Alpha is changing this).
Either I understand the exact meaning of the symbols, or I don't. There are no
intermediary steps.

And being rigorous is not the same as being consistent. (There isn't much
consistency in programming, either, but that is a minor problem.)

------
gtrubetskoy
Applied science, i.e. taking something abstract and general from a textbook to
something specific, is always extremely difficult. It's one thing to read
about a nuclear reaction but a whole different thing to actually set one off.

To implement something you need to have a level of understanding of it that is
way deeper than what is required to get a passing grade in your math class.
The problem isn't mathematical notation or the programming language (I am
quite competent in and have no quarrel with either).

I recently set out to implement Triple Exponential Smoothing (aka the Holt-
Winters method), but I didn't want to use someone else's code; I wanted to
understand the algorithm well enough to compare various implementations and/or
write my own. It took _months_. There is a lot of material on the subject, but
it all disagrees ever so slightly on one thing or another, or skips over
something essential. I don't think anyone or anything is to blame for this -
that's just life.

You can read about it here, btw:
[http://grisha.org/blog/2016/01/29/triple-exponential-smoothing-forecasting/](http://grisha.org/blog/2016/01/29/triple-exponential-smoothing-forecasting/)
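
For readers who want the shape of the algorithm before diving into the long
version: a minimal sketch of additive Holt-Winters, written by me for
illustration rather than taken from the post. The initialisation below is
deliberately naive; picking initial level/trend/seasonals is exactly the part
where sources disagree, as noted above.

```python
def holt_winters_additive(series, m, alpha, beta, gamma, horizon):
    """Additive triple exponential smoothing (Holt-Winters).
    series: observations (at least two full seasons), m: season length,
    alpha/beta/gamma: smoothing factors in [0, 1].
    Returns `horizon` forecasts past the end of the series."""
    # Naive initial state: level = mean of first season, trend = average
    # per-step change between the first two seasons, seasonals =
    # deviations of the first season from its mean.
    season_mean = sum(series[:m]) / m
    level = season_mean
    trend = (sum(series[m:2 * m]) - sum(series[:m])) / (m * m)
    seasonals = [series[i] - season_mean for i in range(m)]

    for t, y in enumerate(series):
        last_level = level
        # Level: deseasonalised observation vs. previous level + trend.
        level = alpha * (y - seasonals[t % m]) + (1 - alpha) * (level + trend)
        # Trend: smoothed change in level.
        trend = beta * (level - last_level) + (1 - beta) * trend
        # Seasonal: smoothed deviation of observation from new level.
        seasonals[t % m] = gamma * (y - level) + (1 - gamma) * seasonals[t % m]

    n = len(series)
    return [level + (h + 1) * trend + seasonals[(n + h) % m]
            for h in range(horizon)]
```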

~~~
j2kun
> There is a lot of material on the subject, but it all disagrees ever so
> slightly on one thing or another, or skips over something essential. I don't
> think anyone or anything is to blame for this - that's just life.

Isn't this precisely what you said _wasn't_ the problem? The resources out
there all differ slightly on one thing or another. I suppose you mean small
semantic differences? I think when I wrote this article originally, I intended
superficial semantic differences in the math to count as well as notational
differences. Doing math for too long has made me think of these as the same
thing, since the true underlying insight doesn't depend on either.

~~~
gtrubetskoy
Yes, every text used its own conventions for variable names, different words
to say the same thing, etc, but no, that wasn't a problem (for me at least).

I think the point I was trying to make (not so well) is that perhaps it's not
that programmers do not like math, but rather that when you learn to view
everything through a programmer's eyes, i.e. "how am I going to actually build
this damn thing", you tend to get very wary of abstract or generic approaches
which is what scientific books are made of.

For example, if I were given a book with a chapter on Holt-Winters and told
that there'd be a quiz later, I could probably have "mastered" it in a night.
But when I had to write the code to actually do it, I found that one book does
not mention how you come up with initial values, while Wikipedia does;
Wikipedia cites a NIST publication, but who wrote that, and is it any good?
Then I came across a 2014 paper saying that the forecast can be improved with
a slight change in one of the formulas, only when I tried it, my results were
worse. No book mentioned how to find the best alpha, beta, and gamma, because
for that you need some kind of gradient descent or the Nelder-Mead method, and
on and on. _That's_ what took months (and it was challenging and fun too), and
I think we agree that there isn't a solution to this; it's a dilemma, not a
problem. (As someone once said: avoid trying to solve dilemmas and avoid
managing problems.)
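
The parameter-fitting step mentioned above can be sketched as follows. In
practice you would hand the error function to a Nelder-Mead optimiser (e.g.
`scipy.optimize.minimize(..., method="Nelder-Mead")`); to stay dependency-free
this sketch uses a crude grid search, and simple rather than triple smoothing
for brevity. It is an illustration of the idea, not the real fitting code.

```python
def sse(alpha, series):
    """One-step-ahead sum of squared errors for simple exponential
    smoothing -- a stand-in for the full Holt-Winters error surface."""
    level, total = series[0], 0.0
    for y in series[1:]:
        total += (y - level) ** 2   # forecast error before updating
        level = alpha * y + (1 - alpha) * level
    return total

def fit_alpha(series, steps=100):
    """Crude grid search over alpha in (0, 1]. A real implementation
    would minimise sse with Nelder-Mead instead of a grid."""
    candidates = [(i + 1) / steps for i in range(steps)]
    return min(candidates, key=lambda a: sse(a, series))
```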

~~~
yummyfajitas
I think that's much the same situation as with programming.

If I were given the postgres docs, I could "master" the syntax pretty quickly.
But when I had to write code that actually uses postgres in production, I
realized that the docs don't carefully explain 3NF, how to make 3NF fast and
user-comprehensible, good practices for writing complex queries, etc.

Is that really a math-specific problem?

------
adrusi
I think a big barrier to understanding math for a lot of people, and
definitely for me, is names. Things get named after people instead of what
they are: "the Kronecker delta function". You came up with some genius model
and proved it, and after all that work you decided that your name and some
Greek letter are what people need to think of when they hear about it, and not
anything about what it is? Fuck you, Kronecker!

The problem extends beyond mathematics, to essentially everything except
programming. And in fairness, out of all the fields with shitty names,
mathematics has the best excuse: unlike the natural sciences and engineering,
math is completely abstract. But programming is also pretty abstract, and yet
we don't call stacks Hawk-elbower's double-u structure! No, we notice a
semantic similarity between our abstract concept and something more familiar,
and we come up with a monosyllabic, five-letter word that conveys a lot of
information about the construct to someone completely uninitiated. This makes
it both more approachable and easier to learn, serving as a mnemonic device.
And once you've internalized it, it makes no difference whatsoever.

I hear people whine that naming abstract concepts after concrete objects will
only give people a harder time thinking abstractly. Bullshit, I say: have you
ever heard of a novice programmer thinking that stacks are only good for
representing stacks of paper, or dictionaries only for looking up definitions
of human words?

I understand that naming things nicely in math is _harder_ than in
programming, and that in some more cutting-edge branches there might simply be
no good analogies to draw, but I don't think it's impossible for more mundane
topics. Math professors teach by drawing analogies and thinking of
applications all the time. In any case we can do a lot better than
"Kronecker's delta function".

~~~
OscarCunningham
What would you call the Kronecker delta?

~~~
mrob
"equality operator"

~~~
exgrv
Except it is not the equality operator, but a special case of an indicator
function (cf. the comment by pash).
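
Whatever you choose to call it, the disputed object is tiny in code; a sketch
of the Kronecker delta as an indicator of equality:

```python
def kronecker_delta(i, j):
    """Kronecker delta: 1 when i == j, else 0 -- an indicator function
    for equality, as discussed in the comments above."""
    return 1 if i == j else 0

# Its classic use: the entries of an identity matrix.
identity_3x3 = [[kronecker_delta(i, j) for j in range(3)] for i in range(3)]
```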

------
contingencies
Just yesterday I found myself reverse engineering the checksum algorithms in
various domestic bank account systems around the world:
[https://github.com/globalcitizen/php-iban/issues/39](https://github.com/globalcitizen/php-iban/issues/39)
I went looking for an explanation of some of the newer ones I hadn't heard of
before (Damm) and it was painful. (The code was only a few lines, though!)

While some algorithms had a pre-existing implementation, rather than rewriting
them I simply nabbed and evaluated them. Most of the implementations were
unreadable/undocumented and of dubious origin, and many of the implementations
claiming to provide the same algorithm delivered different results. Some of
these work for some countries, some for others. No idea where the differences
lie, and not enough interest to bother finding out.

Then of course there's the origin of these 'standards', ISO, who insist on
charging 88 CHF for a 5-page document, so we can't actually read the damn
standard. I wound up porting a family of algorithms from Java using code
generation.

Why we don't yet have a library delivering smackdown-documented functions in
arbitrary languages for named algorithms escapes me...
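
For the curious: the Damm algorithm really is only a few lines once you have
the operation table. The table below is the commonly published one (built from
a totally antisymmetric quasigroup); treat the exact table as an assumption
and verify it against your reference before trusting it.

```python
# Standard published operation table for the Damm check-digit algorithm.
DAMM_TABLE = [
    [0, 3, 1, 7, 5, 9, 8, 6, 4, 2],
    [7, 0, 9, 2, 1, 5, 4, 8, 6, 3],
    [4, 2, 0, 6, 8, 7, 1, 3, 5, 9],
    [1, 7, 5, 0, 9, 8, 3, 4, 2, 6],
    [6, 1, 2, 3, 0, 4, 5, 9, 7, 8],
    [3, 6, 7, 4, 2, 0, 9, 5, 8, 1],
    [5, 8, 6, 9, 7, 2, 0, 1, 3, 4],
    [8, 9, 4, 5, 3, 6, 2, 0, 1, 7],
    [9, 4, 3, 8, 6, 1, 7, 2, 0, 5],
    [2, 5, 8, 1, 4, 3, 6, 7, 9, 0],
]

def damm_check_digit(number: str) -> int:
    """Fold the digits through the table; the final interim digit is
    the check digit (append it to make the string validate)."""
    interim = 0
    for ch in number:
        interim = DAMM_TABLE[interim][int(ch)]
    return interim

def damm_validate(number: str) -> bool:
    """A string with its check digit appended folds down to 0."""
    return damm_check_digit(number) == 0
```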

------
delinka
"Doing mathematics" and "using math in my code" are so far apart. In my
programs that Really Utilize Math, I'm taking advantage of some existing
mathematical concept, but I'm not advancing the state of the art in
mathematics.

However, the few times I've really wanted to delve into math and truly
understand the concepts underpinning a formula I'm using, I've needed a
machine to iterate on. My computer isn't ideal because the software that's
available is too expensive for 'hobby' use; there's no library that lets me
use code in a language I know to manipulate symbols and equations. The TI
series of calculators are not ideal because A) they're intentionally crippled
(to prevent student cheating) and B) they have a bit of a learning curve when
you don't use them regularly.

Can I get a touch-device app for mathematical symbol/equation manipulation?
That'd be ideal.

~~~
iheartmemcache
[http://detexify.kirelabs.org/classify.html](http://detexify.kirelabs.org/classify.html)
that will yield a TeX symbol which can trivially be transformed into something
acceptable into most of the engines (e.g. Sage, Octave, etc) for evaluation.
It's not "evaluate definite integral (0,5] for .." but it'll get you half the
way there.

PS. Cheating on TI's is trivial. If you're a math professor, adopt a no
graphing calculator rule. (You can still cheat on your standard secretary's
20$ calculator, especially in the age of low-power Cortex M0+'s, but it's
somewhat less trivial. I'd imagine memorizing 5 or 6 rules on taking the
derivative would be easier for someone capable of doing that. (Though if those
tools had been around when I was in uni, I might have just done it for the
sake of doing it.))

------
skybrian
Well sure, mathematicians are so nutty about notation, it's amazing any
mathematics gets done at all.

Not at all convinced it's a feature. More like an unfortunate side-effect of
tradition.

~~~
OscarCunningham
I feel a need to stand up for mathematical notation. The main complaint seems
to be that it is too minimalist, using the minimal number of symbols (or
fewer) needed to be unambiguous.

But mathematics notation needs to be extremely succinct because the notation
is not just for reading. When you're working something out you often need to
write pages and pages of formulae and diagrams by hand. So using a notation
more verbose than absolutely necessary would be unnecessarily painful.

Programming is different because you can use a text editor with copy, paste,
autocomplete etc. But I imagine that if I wasn't allowed to use any of those
then my variable names would shrink down to one letter when I was programming
too.

Also, I think that making formulas smaller makes it easier for the eye to see
the whole formula together. This makes it easier to quickly understand the
meaning of the formula, at least once you've got used to the notation for that
particular area of mathematics.

~~~
skybrian
Yes, that's the tradition part. I certainly don't object to using whatever
notation you like when you're at a whiteboard or scribbling on paper. But math
papers are carefully formatted and published using LaTeX, not by taking
pictures of whiteboards. This isn't because it's easiest. It's supposedly to
benefit the reader, and yet it fails in practice.

It seems like coming up with a format that at least allows definitions and
usages to be hyperlinked might be pretty nice, instead of a format whose
primary benefit is looking pretty when you print it on paper.

~~~
shasta
Go to definition (for symbols or notation) would certainly be helpful in
reading a mathematical paper, but how is that related to choosing good
notation? Do you have an example of mathematical notation that "fails in
practice"? In my experience, mathematical notation becomes established
_because_ it works well enough in practice.

~~~
skybrian
I don't have any specific example, but I've read that many (most?) math papers
are incomprehensible to outsiders, even mathematicians in other fields. I also
seem to remember an article about a proof that's gone unchecked for years.

So, from the outside, it seems like there are big problems with communication
in the mathematics community, and it's simply accepted that it has to be that
way because math is hard.

~~~
kaitai
Good notation certainly aids understanding, but math _is_ a vast endeavor
where the details matter. In switching from one programming language to
another, often a rough understanding of corresponding concepts is enough: how
to print has different details in different languages but it's not that
different, and the details are easy to look up. On the other hand, just
switching from topology to symplectic geometry involves asking entirely
different questions -- having different values and pursuing different goals.
Maybe it's more like switching from web dev to scientific computing as well as
switching languages.

Yes, a lot of mathematical papers are incomprehensible to others in the field.
But it's not just because of notation. You wouldn't ask a content marketer who
is great at A/B testing to deal with tweaking something in the linux kernel
because it's all "computers".

------
sklogic
There is One Book to rule them all (sorry, could not find a link to the
original 1973 edition):

[http://www.amazon.co.uk/Mathematical-Handbook-Scientists-Engineers-Definitions/dp/0486411478](http://www.amazon.co.uk/Mathematical-Handbook-Scientists-Engineers-Definitions/dp/0486411478)

------
awalGarg

> With a program, you can always write test cases and run them to ensure they
> all pass. If your tests are solid and plentiful, the computer will catch
> your mistakes and you can go fix them.
>
> There is no corresponding “proof checker” for mathematics. There is no
> compiler to tell you that it’s nonsensical to construct the set of all sets,
> or that it’s a type error to quotient a set by something that’s not an
> equivalence relation.

This is something that I have been noticing for quite a while. As I got out of
high school and discovered programming on my own, something I gradually
realized is that programmers have built an infrastructure of tooling to help
themselves. They made their own things to help themselves. Compiler compilers
(and cross-compilers!) simply blew my mind.

I joined college after a gap of a year, and as I faced Physics and Mathematics
again, that feeling strengthened. I now sometimes think of a very strict
compiler for mathematics that catches errors from the math lingua. It could
possibly be extended to Physics too. I don't even know whether that is
feasible, but I find the idea intriguing.

~~~
irremediable
> I now sometimes think of a very strict compiler for mathematics that catches
> errors from the math lingua.

Is that what Coq
([https://en.wikipedia.org/wiki/Coq](https://en.wikipedia.org/wiki/Coq)) is?

~~~
Joof
Coq is either too immature or too strict. Writing a proof in Coq is far too
difficult for what it does.

I agree that we need a proof 'compiler' though.

~~~
irremediable
Thanks for the insight on this, by the way; I'm more of an applied math
person, so I have no firsthand experience of things like computer-verified
proofs, Coq, etc.

What do you think are the specific ways in which a proof compiler would be
different?
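
For a taste of what such a proof "compiler" looks like today, here is a
two-line sketch (in Lean 4 syntax; Coq is similar): a true statement
type-checks, while a false one is rejected at "compile time", much like a type
error.

```lean
-- A true statement "compiles": `rfl` proves it by computation.
theorem two_plus_two : 2 + 2 = 4 := rfl

-- Changing 4 to 5 makes `rfl` fail to type-check: the proof
-- "compiler" rejects the bogus claim before anyone reads it.
```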

------
Ended
I hate how programming is so inconsistent. I mean, in vim I press 'w' to move
one word, whereas in Notepad I press Ctrl-Right. They're not even close! I
hate it when I use someone else's editor and I have to re-learn all the
commands. I wish programmers could standardise this stuff. Maybe there could
be a committee whose job it is to standardise editor commands, then we could
program in any editor without having to worry about it.

[I'm joking of course, but only half-joking. There's a serious point to be
made, which is that mathematics exists completely independently of notation.
When mathematicians don't care about notation, it's because they care about
the actual mathematics instead. The reason mathematics notation is non-
standardised is because standardising it wouldn't really help mathematicians
that much. Unfortunately this means that non-mathematicians who use
mathematics suffer.]

------
alfiedotwtf
There's "The Hitchhiker's Guide to Calculus" by Spivak, and it's actually a
really easy read while giving you an excellent mental model of Calculus.

------
2sk21
The difference is this: When I write code that has an error, I usually get
either a compile or run time error. Not so with math!

~~~
Smaug123
One can, however, catch some of these with unit tests, in maths just as well
as in code. The unit tests for maths are tiny, trivial cases of the main
theorem. For instance, if I wrote down Fermat's Last Theorem as "x^n + y^n =
z^n has no solutions for n > 1", I would run my unit tests by checking the
boundary case n=2, and discovering the easy counterexample of (3,4,5).
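
That boundary-case check is easy to automate; a brute-force sketch (the search
bound of 20 is an arbitrary choice):

```python
def fermat_holds(n, limit=20):
    """'Unit test' for the claim that x^n + y^n = z^n has no positive
    solutions: brute-force search for a counterexample up to `limit`."""
    for x in range(1, limit):
        for y in range(1, limit):
            for z in range(1, limit):
                if x ** n + y ** n == z ** n:
                    return False  # counterexample found
    return True

# n = 2 is the boundary case: (3, 4, 5) falsifies the mis-statement,
# while n = 3 survives this (tiny) search range.
print(fermat_holds(2))  # False
print(fermat_holds(3))  # True
```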

------
joolze
[http://www.amazon.com/Numerical-Recipes-Scientific-Computing-Second/dp/0521431085](http://www.amazon.com/Numerical-Recipes-Scientific-Computing-Second/dp/0521431085)

This or a variation is on almost every physicist's shelf I know...

~~~
Silhouette
_This or a variation is on almost every physicist's shelf I know..._

Unfortunately, that is a significant part of the problem.

 _Numerical Recipes_ was an OK book for its time, and certainly a popular one
given the limited material available in the field. However, neither the
recipes themselves nor the book's general presentation are ideal today, and in
many cases, someone interested in implementing robust, efficient algorithms
for various mathematical constructs would do better now to read other sources.

For example, substantial linear algebra computations are probably going to use
some variation of BLAS/LAPACK today. There is also a lot of background
material available about the algorithms used within these libraries and the
underlying mathematical foundations, for those who need to implement something
a little different or who are simply curious.

In most fields, and excluding those who are actually writing this kind of
mathematical library, a programmer will do better to use the tools that are
already freely available today rather than trying to implement their own code
based on ideas from _Numerical Recipes_.

~~~
bigger_cheese
I'm an engineer; I have NR sitting on my desk shelf and I refer to it often. A
lot of my peers use it as well.

NR is hugely helpful. One example: a few years ago I needed to implement a
technique from a math paper in some C code.

The paper included classic math terms like "Tridiagonal Matrix" and "Cholesky
Decomposition". I'm sure they mean a lot to someone with a math background,
but they baffled me at the time.

It wasn't until I pulled out my handy copy of NR that I was able to even
slightly wrap my head around just what the hell a "Cholesky Decomposition"
even was, let alone how I would code it. And even then I had to dredge up my
old linear algebra textbook from uni to refamiliarise myself with basic matrix
operations.
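
For reference, the decomposition itself is short once demystified; a plain,
unoptimised sketch of the textbook recurrence (not the NR version, and shown
in Python rather than C for brevity):

```python
import math

def cholesky(A):
    """Decompose a symmetric positive-definite matrix A into L * L^T,
    where L is lower-triangular. Returns L as a list of lists."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            # Dot product of the partial rows i and j of L.
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)   # diagonal entry
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]  # below the diagonal
    return L
```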

The code I wrote ended up being full of comments like this:

"/* Newton's method to compute positive root f(p)^2 = (u^T)(Q^T)(D^2)Qu and
F(dF/dp) = (u^T)(Q^T)(D^2)Q(du/dp) */ "

Which is basically me trying to wrap my head around just what the hell the
mathematical paper is talking about while writing the algorithm
programmatically.

Even now, opening the .c file some years later, it took me a few minutes to
make sense of that comment; I had to recall that u^T was shorthand for the
transpose of the matrix u.

Numerical techniques are hugely complicated, especially when you don't deal
with them every day. I have yet to find a book better than NR at breaking them
down and presenting them in a way someone like me (with a Science/Eng
background) can understand.

I'll gladly take any recommendations you have for updated references. As far
as I know NR and the GSL (GNU scientific library) documentation are considered
the gold standard by all my peers.

~~~
Silhouette
It's been a few years since I did serious computational linear algebra, but at
that time _Matrix Computations_ by Golub and van Loan seemed to be the
standard textbook in the circles I moved in. Rather than trying to provide
ready-made production-quality implementations as _Numerical Recipes_ does, it
instead concentrates on presenting the relevant mathematics, often in more
depth and with more discussion than _NR_ because it's a specialised text, with
outlines of the relevant algorithms. For production use it refers to relevant
BLAS/LAPACK functions so you know what to look for. If you decide to get a
copy, make sure it's at least the third edition, as I think earlier editions
referred to predecessor numerical libraries.

BLAS and LAPACK themselves are constantly evolving to add new or better
production-quality algorithms as the field develops, and I highly recommend
using the functionality they provide instead of trying to reimplement any of
the basic algorithms in-house. There are heavily optimised versions of these
libraries available for almost every platform you can imagine, their own
documentation is pretty good, and often recent research in the field also gets
written up as background papers and incorporated fairly quickly (the magic
words to search for are "LAPACK working note").

If memory serves, the GSL actually depends on having a BLAS implementation
available for some of its functionality, so if you're not limited by the
licensing you might already be using the same sort of code under the hood
anyway. :-)

------
mamcx
I've been reading a lot of papers about programming recently, and the weird
kind of math people use there is a huge barrier. So not even the math related
to programming is readable by programmers!

------
ludamad
The opinions about what the subscript could mean are very coloured by Python
:P They didn't mention the fact that this could be a type, etc.

------
hydandata
I love programming, but I never got the hang of mathematics, for various
reasons. I failed a big project as a solo programmer a long time ago, when I
was 17, because I could not create the necessary mathematical model and no
help was available, even though everything else was completed: infrastructure,
web site/interface, database, etc. Soon afterwards I set off on a different
career path, but I always kept an intimate relationship with programming,
doing some both at work and for fun at home.

Recently, after watching R.W. Hamming's lectures and reading his book "The Art
of Doing Science and Engineering" [1] [2], I decided that I would try learning
mathematics again.

I found it really hard to do. There are so many implicit assumptions, so much
ground that is not covered. I was about to give up again when I found a book
[3] by a retired computer science professor who seemed to have written it
specifically for me! It really felt that way.

Now I recommend it to almost everyone, especially tech-savvy people. I am no
longer afraid of mathematics; with the help of this book, and great advice
from Hamming, I am finally "getting it". If you find learning new programming
languages easy but struggle with mathematics, check it out!

The book is not aimed at teaching you mathematics; rather, it teaches you the
missing pieces that almost every other book either assumes you already know or
describes vaguely and badly, and it gives you context and valuable advice. To
me it almost feels like it is teaching you a functional programming language:
a strange and very flexible programming language, warts and all. You learn the
language, and then you can go on to explore the programs written in it:
algorithms, data structures, whatnot.

My ultimate goal is to, at least, learn enough so that I can really understand
books by Hamming, especially "Methods of Mathematics Applied to Calculus,
Probability, and Statistics" [4]. And, of course, to do interesting projects
along the way, now that I am starting to understand some of the research
papers.

[1] [http://worrydream.com/refs/Hamming-TheArtOfDoingScienceAndEngineering.pdf](http://worrydream.com/refs/Hamming-TheArtOfDoingScienceAndEngineering.pdf)

[2] [https://www.youtube.com/watch?v=AD4b-52jtos&list=PL2FF649D0C4407B30](https://www.youtube.com/watch?v=AD4b-52jtos&list=PL2FF649D0C4407B30)

[3] [http://www.amazon.com/The-Language-Mathematics-Utilizing-Practice/dp/0470878894](http://www.amazon.com/The-Language-Mathematics-Utilizing-Practice/dp/0470878894)

[4] [http://www.amazon.com/Methods-Mathematics-Calculus-Probability-Statistics/dp/0486439453](http://www.amazon.com/Methods-Mathematics-Calculus-Probability-Statistics/dp/0486439453)

------
Jugurtha
I think this goes beyond programmers and those who want to learn
"Mathematics": it can be frustrating to talk with someone who confuses
equivalence and implication in everyday conversations; this inability to use
proper, basic, logic seems to be too widespread.

I can relate to the funny part about notation. I've been fortunate enough to
have an older sister who's an algebraist, and I remember when I was in primary
school and had a question, she'd explain on paper using different symbols than
the ones I was using, just to make sure I wasn't a slave to notation.

She insisted I had to think in an "abstract" way (that's precisely the word
she used) and not be tied to what the letters are called, only looking at the
relations between them and the context. This one piece of advice served me
well.

It's also funny that the author mentions Fermat's Last Theorem, because the
first time I heard of it, it was still called the "Conjecture de Fermat", and
I had read a piece in "Science & Vie" on Andrew Wiles' proof and the fact that
there was still work to be done to see whether it was correct or not. I was 7
at the time, and it seemed strange and amazing how something that can be
stated in terms so deceptively simple that even I could think I understood it
can be so difficult to prove.

I studied Electronics Engineering in college and we had really cool maths
courses. (Again, in the spirit of different notation, I noticed a peculiarity:
in the U.S. the term "Calculus" is used, whilst we used "Analysis":
"Introduction to Mathematical Analysis", "Differential Analysis", etc. Perhaps
the influence of the French and Soviet mathematicians who taught at my
university. The books are also either Smirnov, Demidovich, Pontryagin,
Piskunov, or Dieudonné, so the style taught is kind of different.)

I digress. For me, it doesn't matter. I love mathematics, though not the way
many people say they love it. It is at the top of my list of things to do, and
I am currently self-studying. Why? First, an emotional reason: in retrospect,
the happiest years of college were in the "Common Core", the two years of core
knowledge (mainly Physics and Maths) every engineering student goes through
before choosing a specialty (except Computer Science students, who only do the
freshman year and then CS). Maths (and Physics) bring me happiness in a way
that is hard to describe.

The second reason is practical: when my maths skills were somewhat sharp, I
had no problem understanding other things, because I had the necessary tools
to strip things down and go at them where needed (for example, modulation with
a transistor, using Taylor series for the current). I then turned utterly
incompetent, let everything rust, and I'm paying the price right now. I have
graduated, but the blade is dull and I can't live with myself like that. I've
come a long way, especially because I was clueless about time management,
focusing my efforts, and getting things done, and I had to learn how to
mitigate my proclivity to be all over the place. The one sure thing is that
I'll never let go. I may not become a great mathematician, but every day I am
less incompetent than the day before. The point is, as in everything else:
people don't learn because they lack the drive to do so, to face the reality
of sucking, to learn the basics, and to fight that embarrassing feeling of the
inner voice that says "shouldn't you already know that stuff?".

~~~
bigger_cheese
I studied Engineering like you. I graduated six years ago and the rust has
definitely set in for me. At my university our math courses were split into
three streams: Statistics, Calculus, and Linear Algebra. Coincidentally, I had
a Russian lecturer as well.

It was after those core subjects that I fell in love. Thermodynamics and Fluid
Dynamics were the 'aha' moment for me. They made all that core stuff make
sense when I finally got a sense of the applications. Bernoulli, Euler, Fermi,
Dirac - the shoulders of giants...

~~~
Jugurtha
Yeah. Whenever I had trouble with a topic, it was because my core knowledge
was weak and rusty. I was in Instrumentation and Control. The stuff in Control
Theory was all Pontryagin, Lyapunov, Bellman, Evans, etc. We were fortunate
enough to have a very good professor who taught a state-of-the-art course
(continuous control in 3rd year and discrete control systems in 4th year;
optimal control and RST controllers).

Applying the stuff is sort of easy. But I know that I'll never appreciate its
beauty until I understand the concept of stability from a "Calculus"
standpoint and how it relates to systems.

Anyhow, I'm also learning Russian. Many Russian books have been translated,
but a lot have not.

------
DrScump
A Hitchhiker’s Guide to Mathematics would probably just be the number 42 over
and over again.

~~~
ccvannorman
With alternating pages printed "What is 6 x 7"

~~~
DrScump
No! Remember the book:

    
    
      "What do you get if you multiply six by nine?"

~~~
colejohnson66
And because everything is base 10, 6 times 9 is 42. In base "A", it's 54, but
we're using base 10, not base "A"

~~~
jaybosamiya
All bases are base 10

