
The Mathematical Hacker (2012) - noch
https://www.evanmiller.org/mathematical-hacker.html
======
jacobolus
> _But rarely in these discussions will you find relevant mathematical
> considerations. If the goal is to compute a Fibonacci number or a factorial,
> the proper solution is not a recursive function, but rather knowledge of
> mathematics._

If you want precise results for large sizes, then the integer-arithmetic
recurrence is faster than computing the square root of 5 to very high
precision and raising it to the nth power.

The author’s “proper solution” might better be named the “naive undergraduate
introductory linear algebra student solution”. Or maybe “18th century analytic
solution ignorant of computer number formats”.

> _They run in constant [...] time_

If we are going to artificially limit ourselves to numbers smaller than the
precision of e.g. double precision floats, then the integer algorithms might
just as well be considered “constant time”, where the constant is just
whatever it takes to compute the largest value we are considering.

Or alternately we can just precompute and cache the entire set of them if we
want. There are only 78.

Here’s a JavaScript implementation:

      const fibcache = [
        0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233,
        377, 610, 987, 1597, 2584, 4181, 6765, 10946, 17711,
        28657, 46368, 75025, 121393, 196418, 317811, 514229,
        832040, 1346269, 2178309, 3524578, 5702887, 9227465,
        14930352, 24157817, 39088169, 63245986, 102334155, 
        165580141, 267914296, 433494437, 701408733, 1134903170,
        1836311903, 2971215073, 4807526976, 7778742049, 
        12586269025, 20365011074, 32951280099, 53316291173, 
        86267571272, 139583862445, 225851433717, 365435296162, 
        591286729879, 956722026041, 1548008755920, 
        2504730781961, 4052739537881, 6557470319842, 
        10610209857723, 17167680177565, 27777890035288, 
        44945570212853, 72723460248141, 117669030460994,
        190392490709135, 308061521170129, 498454011879264, 
        806515533049393, 1304969544928657, 2111485077978050, 
        3416454622906707, 5527939700884757, 8944394323791464];
    
      const fibonacci = (n) => fibcache[n];

~~~
0815test
AIUI, you don't need to "compute" sqrt5 to high precision in order to use that
closed form! Just "adjoin" it to Q and compute symbolically over Q(sqrt5).

~~~
jacobolus
Yes, and that symbolic computation is the same arithmetic as the original
algorithm the author was complaining about.

Personally I like the version better where you explicitly work in terms of
integers like _a + bφ_ or rational numbers like _(a + bφ)/c_.
[https://observablehq.com/@jrus/zome-arithmetic](https://observablehq.com/@jrus/zome-arithmetic) is very handy when
working with the symmetry system of the icosahedron.
[https://observablehq.com/d/cae3fea57d7cf2d4](https://observablehq.com/d/cae3fea57d7cf2d4)
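
A minimal sketch of that representation, assuming JavaScript BigInt (exact integer arithmetic, nothing floating-point): a pair [a, b] stands for a + bφ with φ² = φ + 1, and since φⁿ = F(n−1) + F(n)·φ, exponentiating φ by repeated squaring produces Fibonacci numbers well past the 79-entry cache:

```javascript
// Exact arithmetic in Z[φ]: a pair [a, b] of BigInts represents a + b·φ,
// where φ² = φ + 1, so (a + bφ)(c + dφ) = (ac + bd) + (ad + bc + bd)·φ.
const mul = ([a, b], [c, d]) => [a * c + b * d, a * d + b * c + b * d];

// φⁿ = F(n−1) + F(n)·φ, so the φ-coefficient of φⁿ is the n-th Fibonacci
// number. Repeated squaring needs only O(log n) multiplications.
const fib = (n) => {
  let result = [1n, 0n]; // 1 + 0·φ
  let power = [0n, 1n];  // φ
  for (let k = BigInt(n); k > 0n; k >>= 1n) {
    if (k & 1n) result = mul(result, power);
    power = mul(power, power);
  }
  return result[1];
};
```

For example, `fib(100)` returns `354224848179261915075n`, far beyond where the double-precision Binet formula breaks down.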

~~~
irchans
Nice. Thanks for that.

------
kccqzy
Let's just accept that there are different kinds of math and people are
interested in different kinds of math. The lisp school may especially like the
beauty of recursion (recursion theory, aka computability theory _is_ a branch
of math), and the fortran school may like the beauty of calculus and numeric
computing. This isn't necessarily exclusive. I can write a naïve recursive
function for Fibonacci numbers, or I can derive the "constant-time" formula
using generating functions on the fly. Or I can find a method better than both
using matrix exponentiation. These are all math. There's no need to denigrate
one school over another, just like there's no need to denigrate one branch of
math over another.

------
nextos
I agree with his main point.

Programming will become true engineering soon. In a few decades, we will be
building mainstream software with formal guarantees. And for that, there's
tons of abstract algebra needed as a foundation, once you try to digest [1-6]
and whatever replaces them in the future.

It's tragic that most CS programs do not emphasize this more.

[1] [https://www.elsevier.com/books/lectures-on-the-curry-howard-isomorphism/sorensen/978-0-444-52077-7](https://www.elsevier.com/books/lectures-on-the-curry-howard-isomorphism/sorensen/978-0-444-52077-7)

[2]
[https://softwarefoundations.cis.upenn.edu/](https://softwarefoundations.cis.upenn.edu/)

[3]
[https://www.cis.upenn.edu/~bcpierce/tapl/](https://www.cis.upenn.edu/~bcpierce/tapl/)

[4]
[https://www.springer.com/gb/book/9783540654100](https://www.springer.com/gb/book/9783540654100)

[5] [http://www.concrete-semantics.org/](http://www.concrete-semantics.org/)

[6] [http://adam.chlipala.net/frap/](http://adam.chlipala.net/frap/)

~~~
throwawaymath
In the future, I really don't think the vast majority of software engineers
will need to know any abstract algebra in order to use or write software with
strong formal guarantees. The history of software engineering (and most
engineering disciplines in general) is a narrative in which practitioners need
to know less theory as time goes on.

I expect that formal methods will become another specialization that most
engineers don't need to directly work with, alongside things like cryptography
and low level arithmetic.

~~~
kovrik
Yes.

Nowadays it is mostly about tools/frameworks. You don't even need to master
the programming language that you are using.

I work mainly with the Java stack and see so many devs with 10+ years of
experience who know nothing about Java internals, who know nothing about how
Java's HashMap works... heck, many of them can't even explain what some
_keywords_ do: volatile, transient.

Nevertheless, they 'successfully' close their JIRA tickets and deliver.

~~~
schwurb
> Nevertheless, they 'successfully' close their JIRA tickets and deliver.

Leave out those quotes and I agree. If they know enough Java to finish those
tickets in a way that the bug is actually fixed, and the fix itself does not
impose more technical debt than needed, then those developers know exactly
enough Java and close their tickets successfully indeed.

I wrote my own compiler for a subset of Java (including inheritance and
overriding) to MIPS in uni, among the top in class, and am currently looking
to compile Haskell to a very neat kind of calculus (check out "Formality-
Core"). It was a really fun experience, but I feel that compilers are
immensely overhyped. I guess that is because it is a hard topic and, for many
people, the investment is huge, so they exaggerate the benefits they perceive
to reap.

Being proud of understanding something or admitting that a topic is hard is
all fine, but I am absolutely allergic to elitism where it isn't just. And no,
in 2019, knowing all about parsing and register allocation won't unlock
superpowers anymore.

------
dkarl
I think the greatest link is aesthetic, the instinct for power and simplicity.
Mathematics and programming are both the creation of weightless structures
that are limited only by their tractability by human intellect.

Of course that perspective ignores two important things: physical constraints
on performance, and the relationship of data to facts in the real world. From
these constraints programming also takes on the characteristics of natural
science and engineering. These considerations come into play as well, but so
often the limiting resource is human time and intellectual bandwidth,
requiring a simpler, easier solution.

In mathematics, the scale can be much grander than anything yet produced in
computing. Calculus is just breathtaking -- the signature accomplishment of
the greatest mathematicians of their time, refined over hundreds of years into
a tool that can be wielded by ordinary teenagers. It might seem shallow to
compare such a sweeping achievement to mundane programming tasks such as
writing enterprise business logic. Yet, if you ignore scale and universality,
the intellectual task is the same: to take something that requires careful
thought and close familiarity with the product requirements today, and boil it
down to something that can be understood at a glance six months from now by
someone who was not involved in the initial development. To distill patterns
of thought internalized via close familiarity with the motivating problems
into easily defined and digested concepts.

------
anaphor
Leslie Lamport (of Paxos, LaTeX, and TLA+ fame) has been making this point for
years. Basically that in order to design a piece of software, you need to be
able to create a spec, and the best way of doing that is through mathematics.

E.g.

- [https://lamport.azurewebsites.net/tla/math-knowledge.html](https://lamport.azurewebsites.net/tla/math-knowledge.html)

- [https://youtu.be/-4Yp3j_jk8Q](https://youtu.be/-4Yp3j_jk8Q)

~~~
pmiller2
I agree in principle, but there’s a rather fundamental problem here: writing
the spec has to be easier than writing the program. Otherwise, we should just
write a compiler for the spec language and save ourselves a step, or blunder
on by writing the program without a spec, as is usually done.

~~~
anaphor
I think if you're doing your spec as a combination of high level set theory or
algebra and some prose, then it should be easier than writing the program
itself (assuming you know a bit about math and how to write a spec).

The question is how much time is it worth putting into writing a detailed
spec. I think the answer is that it really depends on the size and complexity
of your software.

I think the idea is that programmers have to be comfortable with writing
_something_ down, even if it's not perfect. A half-baked spec is better than
no spec, basically. Otherwise you will likely end up with spaghetti code that
needs to be rewritten or scrapped.

~~~
pmiller2
I already write my specs as a combination of math and prose. It’s just that
the math portion is empty. /s

Seriously, though, that prose portion is where the vast majority of the bugs
will be.

------
bloaf
This question of the relationship between math and CS always boggles my mind.
Computers are, in a very real sense, _automatic math machines._ Programmers
are, in a very real sense, telling the automatic math machines _what math to
do._

Now we've built ourselves a large number of layers of abstraction in an
attempt to forget these things. And with those abstraction layers, programmers
can be reasonably productive without having to think about their problems in
terms of the math that the underlying math machine is doing. But in my mind,
if someone wants to call themselves a _computer scientist,_ they are
necessarily concerned with the underlying math being done.

~~~
29athrowaway
A fruit picker is doing math all the time: calculating distance to fruit,
using inverse kinematics to move limbs in order to pick a fruit and put it in
a basket in a way that does not deform or damage the fruit... and the list
goes on and on. There's math in everything.

The fruit picker does not have to think about that math or understand it, just
like most programmers do not have to understand what happens at the low level.

~~~
antonvs
Except the programs that programmers write very directly correspond to
mathematics. The formal semantics of programming languages maps expressions in
programming languages to mathematical expressions.

As such, all ordinary programmers are demonstrably writing mathematics - just
with syntax, semantics, and applications that are unlike those of more
traditional math.

~~~
runeks
> The formal semantics of programming languages maps expressions in
> programming languages to mathematical expressions.

Which mathematical expressions do _memcpy_, _fork_, and _putchar_ map to?

~~~
antonvs
memcpy is trivially modeled as an operation on an underlying store. Such
stores are a basic feature of any programming language semantics.

Putchar is easily modeled in terms of something like a state monad. In earlier
semantics, such as the original denotational semantics, it was just modeled
directly as a store that was threaded through a program, i.e. passed to every
function.

Fork is modeled by process calculi.

If you think programming languages somehow can't be modeled by mathematics,
you're essentially taking the position that they're doing something
supernatural.
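
To make that first claim concrete, here is a toy model (purely illustrative, not any standard formal semantics): treat the store as an immutable array of byte values, so that memcpy denotes a pure function from stores to stores.

```javascript
// A toy denotational model of memcpy: a store is an immutable array of
// bytes, and memcpy is a pure function Store → Store. Nothing is mutated;
// "copying" just describes which new store the old one maps to.
const memcpy = (store, dest, src, n) =>
  store.map((byte, addr) =>
    addr >= dest && addr < dest + n ? store[src + addr - dest] : byte);
```

For example, `memcpy([1, 2, 3, 0, 0], 3, 0, 2)` evaluates to `[1, 2, 3, 1, 2]` without touching the original array.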

------
iainmerrick
Besides the question of whether the Fibonacci sequence is a good or bad
example, here’s a different objection to the author’s thesis:

If software development is ripe for a mathematics-driven leap forward, just
like logistics and baseball, why hasn’t it happened yet?

If software developers and baseball coaches were both sitting next to low-
hanging fruit that merely required a bit of mathematical knowledge and
spreadsheet-wrangling to exploit, I would have expected the people already
using computers -- the software developers -- to find it first. What’s
different, and why?

It could be that software is inherently less amenable to statistical and
numeric approaches than baseball, and that’s just the way it is. When you look
at baseball the right way, you have a great mass of data with fairly simple
and consistent rules, ideal for a statistical approach. When you look at a
typical software system, you have a mass of special-case business logic, with
no real simplifying assumption to be found.

In cases where “big data” is available, successful companies _are_ using
statistics to exploit regularities in the system.

So maybe what the author is arguing for is already happening, albeit on a
somewhat smaller and slower scale, and maybe this is inherent in the nature of
software.

Applied mathematics, basically calculus, is great but it can’t do everything.
The author seems to be ignoring discrete mathematics (which a lot of the “Lisp
hacks” he dismisses would fall under). Then there are sociological issues,
such as documentation, unit testing, debugging, modularity, maintenance of
legacy systems.

Plenty of researchers have tried applying a data-driven approach to testing
and debugging, and I think it’s safe to say no-one has completely cracked it
yet. If another recent paper posted here is to be believed, we don’t even know
if there’s a significant difference in bug rates between static and dynamic
typing.

------
karmakaze
Article misses the point on recursion. It isn't about solving the study
questions. They were only chosen as self-contained examples. A better example
is navigating tree structures or divide and conquer algorithms but that
requires more context. Math is useful because it's a common language we expect
to have been exposed to before programming.

> Lisp-school essayists [...] spend their time worrying about how to make code
> more abstract. This kind of thinking may lead to compact, powerful code
> bases, but in the language of economics, there is an opportunity cost. If
> you’re thinking about your code, you’re not thinking about the world
> outside, and the equations that might best describe it.

Another way of saying the same thing in a positive light is to say that these
things _have already been thought about_ and can be applied to take advantage
of opportunities where applicable.

> Although the early years in the twenty-first century seem to be favoring the
> Lisp-school philosophy, I predict the balance of the century will belong to
> the Fortran-school programmers

This part does seem to be happening. Just look at Apache Storm v2 port from
Clojure to Java:

_"While Storm’s Clojure implementation served it well for many years, it was
often cited as a barrier for entry to new contributors. Storm’s codebase is
now more accessible to developers who don’t want to learn Clojure in order to
contribute."_ [0]

[0] [https://www.dm4r.com/tech/apache-storm-v2-0-porting-from-clojure-to-pure-java-performance-upgrades-more/](https://www.dm4r.com/tech/apache-storm-v2-0-porting-from-clojure-to-pure-java-performance-upgrades-more/)

------
Gondolin
The Fibonacci example is interesting. If you want arbitrary precision, then
the naive recursive method is exponential in $n$ when computing $F_n$. The loop
method (the recursive method with a cache) is $O(n^2)$: there are only $n$
steps in the loop, but at step $k$ you add $F_k$ and $F_{k+1}$, which have
$O(k)$ bits, so the total cost is $O(n^2)$. Finally, the matrix / formal
$\sqrt{5}$ algorithm costs $O(n \log n)$.

Indeed, using fast exponentiation, you only need $O(\log n)$ matrix
multiplications. But the dominant cost is actually the last step, where you
multiply two numbers of $O(n)$ bits for a total cost of $O(n \log n)$.
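
The $O(\log n)$ multiplication count is easy to see from the fast-doubling identities $F_{2k} = F_k(2F_{k+1} - F_k)$ and $F_{2k+1} = F_k^2 + F_{k+1}^2$; a sketch with JavaScript BigInt (arbitrary precision, so the bit-size costs above apply):

```javascript
// Fast doubling: F(2k) = F(k)·(2·F(k+1) − F(k)), F(2k+1) = F(k)² + F(k+1)².
// fibPair(n) returns [F(n), F(n+1)] in O(log n) BigInt multiplications.
const fibPair = (n) => {
  if (n === 0n) return [0n, 1n];
  const [a, b] = fibPair(n >> 1n); // a = F(k), b = F(k+1), k = ⌊n/2⌋
  const c = a * (2n * b - a);      // F(2k)
  const d = a * a + b * b;         // F(2k+1)
  return (n & 1n) ? [d, c + d] : [c, d];
};

const fib = (n) => fibPair(BigInt(n))[0];
```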

Speaking of mathematics in programming: I find it remarkable that the recent
convergence of async programming into the use of promises and async/await (in
python, js, rust...) is a weak form of the continuation monad. And the
continuation monad is a form of the 'not not' Lawvere–Tierney topology!

------
Ragib_Zaman
Not to say anything about the central thesis of the post, but the two examples
of how deeper mathematical knowledge can lead to better programming are just
terrible examples. As someone whose background was originally mathematics and
starting programming relatively late, I can very much relate to thinking
Binet's formula for Fibonacci numbers or gamma function for factorials might
be faster than recursive methods. But when you actually learn how ints,
floats, and their arithmetic are implemented at a low level, how cache and
memory work, etc., you realise that recursive methods for those two problems
are actually your best bet, both for speed and accuracy. So ironically, these
are examples where knowing much about low level coding and computing
architecture helps you more than knowing more advanced mathematics.

~~~
posix_me_less
Is recursion really that good? It seems calculating those once, storing the
results in an array, and then only accessing the array would be best.

~~~
Ragib_Zaman
Sure, if you need to continually access the same values then you'd want to
save them but whether you need to reuse the same values or just do a one off
calculation, you still need to compute the value the first time in some way.

For Fibonacci numbers, expressing terms of the sequence as powers of a 2x2
matrix and exponentiating by repeated squaring lets you get the
n-th Fibonacci number in O(log n) multiplications. It's straightforward: if
you take an input n and handle arbitrary-precision integer arithmetic, you can
calculate F_1000 quickly and without much hassle. Compare that to Binet's
formula: How many digits of precision will you need to compute the golden
ratio to ensure your result from Binet's formula is correct? And how will you
compute it to that many digits? And that's not even worrying about the fact
that floating point multiplication doesn't correspond exactly to
multiplication of real numbers, so you may need to switch to arbitrary
precision floating point arithmetic.
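
A sketch of that matrix method in JavaScript, using BigInt for the arbitrary-precision arithmetic (the 2x2 matrices are flattened to four-element arrays purely for brevity):

```javascript
// [[1,1],[1,0]]ⁿ = [[F(n+1), F(n)], [F(n), F(n−1)]], so repeated squaring
// yields F(n) in O(log n) 2×2 matrix multiplications. Matrices are stored
// row-major as [a, b, c, d] = [[a, b], [c, d]].
const matMul = ([a, b, c, d], [e, f, g, h]) =>
  [a * e + b * g, a * f + b * h, c * e + d * g, c * f + d * h];

const fib = (n) => {
  let result = [1n, 0n, 0n, 1n]; // identity matrix
  let base = [1n, 1n, 1n, 0n];   // [[1, 1], [1, 0]]
  for (let k = BigInt(n); k > 0n; k >>= 1n) {
    if (k & 1n) result = matMul(result, base);
    base = matMul(base, base);
  }
  return result[1]; // top-right entry is F(n)
};
```

Computing F_1000 this way is indeed instant; `fib(1000)` is a 209-digit integer.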

~~~
irchans
I don't think it's too hard to get sqrt(5) to a high enough degree of
precision. (Probably could use Newton-Raphson.) On my computer it takes about
0.05 seconds to compute sqrt(5) to one million digits. It takes about 0.01
seconds to multiply two 500,000-digit integers. If you want to compute F_n
using Binet's formula, I think you need about (log(F_n)/log(10) + 10) digits
for sqrt(5).
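
For illustration, one way to do it with only integer arithmetic (a Newton iteration on BigInts; not necessarily the method used above): isqrt(5·10^(2d)) is ⌊√5·10^d⌋, i.e. √5 to d digits, and each iteration roughly doubles the number of correct digits.

```javascript
// Integer Newton–Raphson square root: converges from an over-estimate down
// to ⌊√n⌋, roughly doubling the number of correct digits per iteration.
const isqrt = (n) => {
  if (n < 2n) return n;
  // Initial guess: a power of two guaranteed to be ≥ √n.
  let x = 1n << (BigInt(n.toString(2).length) / 2n + 1n);
  for (;;) {
    const y = (x + n / x) >> 1n;
    if (y >= x) return x; // stopped decreasing: x = ⌊√n⌋
    x = y;
  }
};

// √5 scaled to d decimal digits, as an integer.
const sqrt5Digits = (d) => isqrt(5n * 10n ** (2n * BigInt(d)));
```

For example, `sqrt5Digits(10)` returns `22360679774n`, matching √5 ≈ 2.2360679774.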

------
User23
No mention of Hoare triples, predicate transformers, loop invariants, or any
of the other formal machinery that was discovered 50 years ago and thoroughly
solves the problem. The author means well, but this is completely
uninformative.

Dijkstra was trained as a Mathematical Engineer for example. Fact is, if
astronomers had the mindset of most programmers, they'd call their field
Telescope Science.
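
For readers who haven't met that machinery: a loop invariant is an assertion that holds before and after every iteration, from which the postcondition follows at loop exit. A small sketch in the Hoare/Dijkstra style (exponentiation by repeated squaring; the annotations are my illustration, not from the article):

```javascript
// Invariant: result · base^k === x^n at every loop test.
// Initially 1 · x^n = x^n; each branch preserves it; at exit k = 0,
// so result = x^n.
const power = (x, n) => {
  let result = 1n, base = x, k = n;
  while (k > 0n) {
    if (k & 1n) { result *= base; k -= 1n; }  // result·base^k unchanged
    else        { base *= base;   k >>= 1n; } // (base²)^(k/2) = base^k
  }
  return result; // postcondition: result === x^n
};
```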

~~~
kgwgk
> Dijkstra was trained as a Mathematical Engineer for example.

"Mathematical engineering" looks like a modern invention. In any case, his
undergraduate degree was in physics.

~~~
User23
Dijkstra appears to have thought otherwise: "I worked at the time at the
Department of Mathematics of the Eindhoven University of Technology in the
Netherlands, and told at that conference that the official academic title our
graduates earned was 'Mathematical Engineer', and most of the Americans began
to laugh, because for them it sounded as a contradiction in terms, mathematics
being sophisticated and unpractical, engineering being straightforward and
practical."[1]

Before quibbling, note I said he was trained as a Mathematical Engineer, not
that he had a degree in Mathematical Engineering.

[1] [https://www.cs.utexas.edu/users/EWD/transcriptions/EWD11xx/EWD1175.html](https://www.cs.utexas.edu/users/EWD/transcriptions/EWD11xx/EWD1175.html)

~~~
kgwgk
Not so modern an invention then. Thanks for the pointer.

(Anyway, participating in the training of "mathematical engineers" and being
trained as one are not necessarily the same thing.)

------
dang
2013:
[https://news.ycombinator.com/item?id=6953774](https://news.ycombinator.com/item?id=6953774)

2012:
[https://news.ycombinator.com/item?id=4915328](https://news.ycombinator.com/item?id=4915328)

------
wwweston
> the Lisp hacker is more likely to ask the irrelevant question: how can I
> reduce these two functions down to one function?

Except... mathematicians are _very_ frequently going to be doing the same
thing: asking if there is some generalization that covers two seemingly
independent cases. Otherwise, the author should/would probably be talking
about Stirling's approximation instead of the log gamma function in that
piece.

More knowledge of the mathematics relevant to the domain an application is
written for might often be a good thing -- I think that's an overarching point
worth examining.

But if you're really going to look to mathematics as a guiding light, it's not
going to be super useful in criticizing the drive for abstraction.

------
cdbyr
While we’re talking about metaphors: this reminds me of the distinction
between craft and art. Where one is the creation of something elegant and
functional (craft, lisp), the other treats the ideas being put into practice
as more important (art, fortran).

I think the article’s artificially limiting its point to mathematics - there
are plenty of other disciplines where we can find well-tread useful metaphors,
etc.

~~~
vageli
> While we’re talking about metaphors: this reminds me of the distinction
> between craft and art. Where one is the creation of something elegant and
> functional (craft, lisp), the other treats the ideas being put into
> practice as more important (art, fortran).

> I think the article’s artificially limiting its point to mathematics - there
> are plenty of other disciplines where we can find well-tread useful
> metaphors, etc.

I feel as though you have it backwards. Would you care to explain your
rationale for fortran as art compared to lisp as craft?

------
webdva
This is an excellent article. It motivated me to adopt a more Fortran
morality, where my assumptions are grounded first in underlying scientific
principles when I search for practical problems to solve.

------
throwawaymath
Unfortunately I have to strongly disagree with the author's central thesis.

 _> But I think that most programmers who are serious about what they do
should know calculus (the real kind), linear algebra, and statistics. The
reason has nothing to do with programming per se — compilers, data structures,
and all that — but rather the role of programming in the economy...One way to
read the history of business in the twentieth century is a series of
transformations whereby industries that “didn’t need math” suddenly found
themselves critically depending on it._

There are two ways to interpret the claim that more programmers should know
advanced math. One of them, which the author preempts, is the idea that
programmers will improve their own work through significant mathematical
maturity. It seems the author and I already agree that _isn't_ the case, so I
won't address this interpretation. But I still consider this interpretation to
be more defensible than the other one.

The other way to interpret it, which the author seems to be explicitly
pushing, is that programmers with a strong mathematical background will be
better situated to proactively find fertile areas of the economy which can be
improved through a combination of computing and advanced math. I don't believe
this is realistic. None of the author's specific examples - control theory,
optimization, linear programming, econometrics, or Black-Scholes - were
pioneered by generalist engineers with a strong grounding in mathematics. They
were just applied mathematicians (and in a few of those cases, repurposed pure
mathematicians).

Likewise the bar is constantly rising. Knowing calculus, linear algebra and
statistics won't empower you to make significant, groundbreaking improvements
to any major field. That knowledge is necessary but insufficient. What's
considered low hanging fruit these days is something that could be conceivably
noticed and tackled by a postdoc. Mathematics itself is ossifying as a field
of specialists, to the point where prominent researchers within subfields like
algebraic geometry can be unfamiliar with each other's principal work.

This is really why I find the appeal to economic advancement less than
convincing - every advancement the author mentions has an implicit foundation
in specialization. Historically, the most notable advancements in technology
have come about through division of labor and cross-pollination. You would be
better served by having two collaborating teams of mathematicians and
engineers than by a team of engineers who know undergraduate math. It's
already common for applied mathematicians to be able to program competently;
they dominate the authorship of fast, low level math libraries. If you need
them, hire them.

Conversely, I would argue we should actually _reduce_ the education for most
developers, not add even more material to it. Most developers already don't
need to learn a significant amount of the material in a standard computer
science degree. Piling on math courses isn't going to be a fundamental
improvement for the vast majority of engineers working on real world software.

~~~
jamessb
> None of the author's specific examples - control theory, optimization,
> linear programming, econometrics, or Black-Scholes - were pioneered by
> generalist engineers with a strong grounding in mathematics. They were just
> applied mathematicians (and in a few of those cases, repurposed pure
> mathematicians).

With regards to control theory, whilst some key people like Bode [0] had
mathematics degrees, others such as Kalman [1], Nyquist [2], and Black [3] had
electrical engineering degrees. Claude Shannon had a bachelor's degree in both
electrical engineering and mathematics.

[0]:
[https://en.wikipedia.org/wiki/Hendrik_Wade_Bode](https://en.wikipedia.org/wiki/Hendrik_Wade_Bode)

[1]:
[https://en.wikipedia.org/wiki/Rudolf_E._K%C3%A1lm%C3%A1n](https://en.wikipedia.org/wiki/Rudolf_E._K%C3%A1lm%C3%A1n)

[2]:
[https://en.wikipedia.org/wiki/Harry_Nyquist](https://en.wikipedia.org/wiki/Harry_Nyquist)

[3]:
[https://en.wikipedia.org/wiki/Harold_Stephen_Black](https://en.wikipedia.org/wiki/Harold_Stephen_Black)

------
zakeria
Fast forward a few years: now, with the demand for learning-based software,
there is no doubt that mathematics is indeed a requirement for data-driven
software.

Although you may not need to know the math to build such software, given the
tools that we have available, you would still need to know the math in order
to understand the limitations and pros/cons of a particular ML algorithm for a
given application (at the very least).

------
btrettel
Theoretical fluid dynamicist here. I'm fairly decent with math, or at least at
solving differential equations. Related to the essay, I've always found it odd
how many people with a CS background tend to "compute first, ask questions
later." My experience has been that analyzing a problem first is generally a
good idea, but CS folks typically don't. Exact or good approximate solutions
to problems often exist, perhaps only for relatively simple problems, but
still.

Sometimes computation is necessary, but that doesn't mean you'll save time by
avoiding analysis entirely. When mathematical analysis succeeds, the time
saved can be considerable. And you can combine both, e.g., recently I sped up
a non-linear algebraic equation solver for my work by using an ansatz that I
derived using the perturbation method. (Also probably helps the solver
converge to the right solution.)

One possible explanation:
[https://en.wiktionary.org/wiki/if_all_you_have_is_a_hammer,_...](https://en.wiktionary.org/wiki/if_all_you_have_is_a_hammer,_everything_looks_like_a_nail)

------
valw
A huge 3rd category of programmers is entirely missing: those that don't care
about _abstraction_ at all, be it mathematical or linguistic. I think the
priority for our industry is to get those on board, then we can discuss what
tools for abstraction they should specialize in.

------
ltr_
Just double-clicking on the Fibonacci solution: hypothetically speaking, can a
compiler take the recursive solution and optimize it to Binet's formula in
machine code? And obviously apply that technique to other similar problems?

~~~
jacobolus
In the general case (for values of n bigger than double precision floats
support), the Binet formula is slower.

The author’s proposed solution only works up to n = 78. If you want to speed
up the calculation for small values, just cache them all.

~~~
ltr_
Of course, but is it possible for a compiler to be smart enough to "discover"
this? And to use similar heuristics for other similar problems?

~~~
jacobolus
In this particular case, it would not be useful.

Personally I don’t ever want a compiler trying to replace straight-forward
integer algorithms with floating point formulas. YMMV.

------
justinmeiners
Just wrote an article about the same topic:

[https://gist.github.com/justinmeiners/f7d08f99e7e9b8b84f0183...](https://gist.github.com/justinmeiners/f7d08f99e7e9b8b84f0183ddaa7e830c)

------
abalaji
I agree with the point that the author brings up but I think there is a
broader point to discuss with regards to a mathematical focus in education.
China, for example, is putting a large focus on applying a mathematical
modeling perspective to their K-12 education, just look at the results of the
MCM of the past few years. [1] I was lucky to be exposed to that frame of
thinking in high school and I think it will become ever the more important in
all areas, not just hacking, for people to have some sort of training in that
respect.

[1]
[https://www.comap.com/undergraduate/contests/mcm/contests/20...](https://www.comap.com/undergraduate/contests/mcm/contests/2019/results/)

------
haecceity
Knuth is one of the few authors who shows the derivation of Binet's
formula for Fibonacci numbers.

~~~
n4r9
Do you mean "one of the few computer science authors"?

We did this whilst studying difference equations in first-year
undergraduate mathematics. I imagine most books dealing with the subject will
include the Fibonacci equation as an example.

