The Mathematical Hacker (2012) (evanmiller.org)
199 points by andsoitis on Dec 23, 2022 | 174 comments



In Steve Yegge’s linked post:

> Math is a lot easier to pick up after you know how to program. In fact, if you're a halfway decent programmer, you'll find it's almost a snap.

This couldn’t be more wrong. Mathematics is the hardest thing I have ever done. I’m sorry, but mathematics is orders of magnitude more intensive and difficult than most programming. A simple fact that shows this is the number of programmers who have no formal training in engineering or computer science but were able to self-teach the concepts. The same cannot be said of mathematics, which requires deep, dedicated study. Most programmers I know know very little mathematics, and it’s not like I’d claim I know a lot either. I’ve forgotten more than I know.

He even mentions how little math he took, so I’m not sure he’s an authority on the subject. Most of his post is just surface-level platitudes. I’m generally confused as to why I see his posts referenced so frequently.

To be clear, this isn't some attempt at gatekeeping. It's just that mathematics is a very deep, difficult, and misunderstood subject. I think maybe only true philosophy is harder because there, it's usually not even clear what the questions are.


> mathematics is orders of magnitude more intensive and difficult than most programming

But what level of programming and mathematics are you comparing here, though? Because college-level algebra and calculus are really not that hard, IMHO (once it "clicks" for you, but it's the same for programming). If we are comparing math as in what you see in a BSc/MSc in Mathematics (or at research level), then I agree it's hard, but you have to compare it against an equivalent level of programming.

> simple fact that shows this is the number of programmers who have no formal training in engineering or computer science but were able to self-teach the concepts. The same cannot be said of mathematics

I would blame it more on the fact that programming is a very useful tool for people outside computer science: it has very direct applications and you can monetize it very easily, so it's very likely that they might want to learn it. However, you rarely see someone deciding to take calculus just for the sake of it if they never bothered with it in college.

Overall I agree with you, but I personally find math more difficult. Most people probably do, but I don't think it's inherently more difficult; it's just that people are less used to studying it.


I would hard disagree that undergrad level Analysis or even just the trickier corners of vector calculus are within the bounds of what programmers can easily pick up without dedicated and guided study. Everybody's gangster until they have to parameterize some bullshit helical structure in R3.

Comparable levels of programming, what we expect of CS juniors, are regularly picked up by "the guy who is good with Excel" in office settings as it's mostly a function of experience and exposure, not theory.

And now my worthless anecdotal evidence: I self taught myself into professional programming and it was a simple matter of banging my head against a wall until shit started working. The feedback loop, "did the thing crash or not", permitted me to learn on my own. I wouldn't even begin to understand how to self-teach myself Stokes Theorem or some shit, and have zero ability to author the proofs required to reach the conclusions higher level mathematics are built on.


I think you're hitting the nail on the head here. Something about the learning process makes programming much easier to pick up.

What if we had something similar for mathematics?

Rapid feedback, error messages, maybe even linters and highlighting for the "mathematical syntax".

I've thought about this before, and I think tools like this could unlock math for a lot of people, and also increase the effectiveness of professional mathematicians.

When learning math or watching others learn math, I've noticed that simple errors such as typos often slow down or hinder understanding of the subject.


Take a look at the Natural Number Game! [1] It does exactly that: "Rapid feedback, error messages, maybe even linters and highlighting for the "mathematical syntax"."

After you get the hang of the system, you can play with the interactive theorem prover behind it: Lean [2]. There are also plenty of other interactive theorem provers (Coq, Isabelle, HOL, Mizar, Metamath, ...), but Lean has a lot of traction amongst mathematicians at the moment.

There are no limits to the math you can do with this. There is mathlib [3], the main mathematical library. It covers a lot of undergraduate material [4], and plenty of stuff beyond that [5]. The community has even covered some state-of-the-art research math in Lean [6a, 6b].

You are very welcome to hang out on the leanprover Zulip [7] and ask questions about the Natural Number Game or anything else that is Lean-related.

[1]: https://wwwf.imperial.ac.uk/~buzzard/xena/natural_number_gam...
[2]: https://leanprover-community.github.io/
[3]: https://github.com/leanprover-community/mathlib
[4]: https://leanprover-community.github.io/undergrad.html
[5]: https://leanprover-community.github.io/mathlib-overview.html
[6a]: https://github.com/leanprover-community/lean-liquid
[6b]: https://www.nature.com/articles/d41586-021-01627-2
[7]: https://leanprover.zulipchat.com/
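If you want a taste of what that feedback loop feels like before installing anything, here is a minimal sketch in Lean 4 in the spirit of the Natural Number Game (a toy lemma written here for illustration; it is not taken from mathlib):

    -- Prove that 0 + n = n for every natural number n.
    -- Each tactic either advances the goal or Lean complains, which is the
    -- "rapid feedback / error messages" loop described above.
    theorem zero_add_example (n : Nat) : 0 + n = n := by
      induction n with
      | zero => rfl                         -- base case: 0 + 0 reduces to 0 by computation
      | succ k ih => rw [Nat.add_succ, ih]  -- rewrite 0 + (k+1) to succ (0 + k), then apply the hypothesis

Delete the ih rewrite and the error message tells you exactly which goal is left unproved, much like a failing test.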


Is it the learning process, or the subject itself?

Programming works with manmade abstractions, carefully designed to have very few interactions, keep mutable state contained, and to have all the parts structured in a hierarchy without recursion.

In math you have systems of equations that all reference each other. And they all happen at the same time, because there are no steps, no time, no lines of code; just five equations that all interact.

The atomic pieces are much larger and tied together and it inherently seems to require being able to fit nontrivial ideas in your head.

Programming lets you design a complex architecture one tiny piece at a time without considering other pieces.


Isn’t math just manmade abstractions as well? I’m not saying they’re equally hard (I think maths is harder too), but you seem to be overly simplifying programming, under the assumption that the system you are working with and building on is actually well designed, rather than what is actually more likely (a munge of SOME clean design with many layers of hacks on top).

I think Maths is harder because the abstractions are higher level, have less intuitive bases, the feedback loop is longer and doesn’t have robust testing. I don’t think the abstractions themselves are that much harder in general, but getting an intuition and doing anything useful (correctly) at a really high level of abstraction is quite difficult.


My view is that math is man made. It's a mash of notation for various ideas, more or less unambiguous, more or less rigorous. Some are more well designed than others. It's closer to natural language than programming languages.

Not every programming language has a concept of time. Sometimes that's a good thing; it depends on what you're used to. Math arguably has less mutable state than most typical programming languages.


I'd love to play around with such tools, but I think they'd only get you so far before they'd start to become a hindrance.

The linter in mathematics is whether the other mathematician (whoever you're proving to) knows what you mean. If you're locked into a rigidly defined syntax, an obvious line of questioning is: what's not expressible in this syntax?

I fear that by the time the tooling was agreed on, built, and taught in schools, you'd have something like APL, which might be cool to code in, but from which the mathematical conversation would have moved on a while ago. Efforts like that, after all, are how math becomes engineering.

Consider, for instance, Russell's theory of types, which was interesting math at the time and now strikes the student with an engineering background as "pretty much just Java" (or any "normal" statically typed language).


If you're learning math for career reasons rather than just pure curiosity, engineering is the main/possibly only place you'd use it besides statistical analysis.


Yeah that sounds about right. I was more musing about the philosophical boundary between math and engineering (math being a creative pursuit and engineering being about outcomes).

I can't quite pin it down, like with a definition, but I'm tempted to say that if it has a linter it's not math anymore, even if it once was.


I think this is the goal of proof assistants based on the Curry-Howard isomorphism, which the original author thought to denigrate for some reason.


What a creative, delightful solution. I encourage you to pursue this!


> I wouldn't even begin to understand how to self-teach myself Stokes Theorem or some shit

Input it into a proof assistant, and rely on the same sort of feedback "does the computer accept your proof, or get stuck". The hard job of formalizing stuff for this purpose has seen significant progress, e.g. by the Lean mathlib project.


> Input it into a proof assistant, and rely on the same sort of feedback "does the computer accept your proof, or get stuck". The hard job of formalizing stuff for this purpose has seen significant progress, e.g. by the Lean mathlib project.

As a math teacher who disagrees with the premise of the GGP post ("This couldn’t be more wrong. … I’m sorry, but mathematics is orders of magnitude more intensive and difficult than most programming"), and thinks that any good programmer can learn mathematics—of course there are code wodgers out there who don't really understand their craft of programming, and so can't translate that knowledge to facilitate an understanding of mathematics—I think I also disagree with this. I've never tried it, but I can't imagine someone learning about Stokes's theorem in anything like this way. One of the many axes along which I imagine this failing are that the state of human readability in proof assistants is, well, let's say it's less well developed than the, cough, stellar state of the art in compiler error messages.

But, more importantly, you can, at least in principle, know every single step in a proof of Stokes's theorem without understanding in any real sense why it's true—and a proof assistant in particular will force you into the weeds of minutiae that absolutely do not help to build any intuitive picture—and, even if you manage in the process to piece together that understanding of why it's true, you will never thereby gain an understanding of why it's interesting (e.g., among other things, its connections to physics and the entrée it offers to differential geometry).


The flip side of a proof assistant's 'minutiae' (and there's plenty of room to disagree wrt. whether paying attention to those minutiae helps with gaining a better, more accurate intuition!) is its ease of refactoring a proof. A proof assistant can instantly tell you whether a seemingly nicer, better-abstracted proof B really manages to prove the same thing as proof A, something that's very hard to do without the use of precise formalized statements and automated checking.


Self-teach Stokes' Theorem by inputting it into a proof assistant? Are you serious? That is very inefficient; the OP was talking about learning the kind of vector calculus taught in a first calculus sequence. I think just watching a short YouTube video and doing a few exercises will work and is a proven method. Proofs of theorems are very often much more complicated than applying them (understatement intended).


If you don't understand the proof of any theorem, you haven't really "learned" it in any real sense. Wrt. doing computational exercises in vector calculus, that requires knowing the "rules of the game" which is also something that you can test precisely in a proof assistant.


First, of course you can understand a theorem without knowing the proof! Uniqueness of prime factorization, for instance, is notoriously tricky to prove, but it would be a stretch to say people who haven't majored in maths haven't learned it in any real sense.

Second, even if you wanted to understand the proof of a theorem, doing it with a proof assistant is an atrocious way to go about it.


Many proofs of theorems in calculus would require topology to understand. You’re suggesting that you cannot be competent in vector calculus without knowing the proofs at a professional level. I think I, along with probably everyone, will have to disagree with that.


I would quibble with whether this is exactly equivalent.

In programming I knew I needed to sort a list or find a most efficient path because some practical problem I was trying to solve demanded that I do that. Frequently I had a basically crap but working independent solution before I learned the names "EWD" or "A*". I independently discovered that I needed virtual interfaces (before I knew them by that name, "I wish pointers to parent classes could call implementations in subclasses") and then discovered language facilities for polymorphism and OOP.

Without formal or at least guided instruction I would never think to move towards or discover "I wonder if there's a relationship that makes these double integrals of curls of vector fields easier to solve for".

Programming has a high coupling between necessity, experience, and theory. In mathematics that coupling is much, much, much looser. Self learners in programming regularly re-discover and re-implement, typically less efficiently, all sorts of fundamentals of CS. The equivalent in mathematics rarely happens post-algebra.


I think your comparison is a bit unfair. Essentially, CS is as hard as mathematics because it is mathematics.

For example, take any good static analyzer that implements abstract interpretation. It generally works using Galois connections, which is just abstract algebra.

Dijkstra's algorithm or A* came pretty early in the history of CS. It would be fair to compare their difficulty to something similar in mathematics, say some basic results in Euclidean geometry.


CS may be mathematics but programming certainly isn't

If you're discussing pure CS, the thing you can write down in a book and for which a computer is a largely theoretical device, sure CS is mathematics.

If we're talking about the practical reality that CS majors in America today are trying to achieve, and their undergraduate programs are trying to prepare them for, that's becoming a working programmer and has very little relation to mathematics.


I wish we'd just start issuing software engineering degrees so it would be easier to get an actual computer science education.


A* is already intermediate level programming. CS is math because it's what we call the parts of programming that are math.

But so much of programming isn't. It doesn't require deep understanding, static analysers are advanced level things that are way beyond what many working programmers ever encounter.

I can't say I've ever seen "Real math" myself.


Do you have more information on this approach? Sounds very interesting. I've read and toyed a little with things like Lean and I'm interested in that field but the barrier seems a bit high (without pre-existing knowledge) to just "input" a theorem and toy with it.


College-level means undergraduate-level? If so, how is algebra/calculus not that hard? Abstract algebra is some of the hardest stuff I've come across. Calculus? Do you think it's not that hard to prove convergence/bounds/limits of random series and sequences... I agree, though, that calculus is not that hard compared to the rest.

Programming is child's play compared to undergraduate mathematics taught in math departments. It's important that you take a module from the math department, not from a physical science or engineering department if you want to experience what it is like.


> Programming is child's play compared to undergraduate mathematics taught in math departments.

One thing you might learn in math is to avoid making overgeneralized statements that you can’t support.

A valid substitution in your statement for “programming” is writing a compiler. And for “undergraduate mathematics taught in math departments”, basic differential calculus.

Yet we regularly teach smart high school students and first-year undergraduates calculus, and almost never try to teach them to write a compiler, contradicting your proposition.

But what do I know? I’m just a dumb programmer. I can’t read your mind, so maybe you had something a little more specific you wanted to say.


> "Yet we regularly teach smart high school students and first-year undergraduates calculus..."

High school students are taught plug-and-chug calculus where one uses rules and formulae without any real understanding of the underlying subtleties that make calculus work.


Bulletproof counter argument, you sure showed me.


For practical purposes, a fair comparison would be "a useful amount of programming" vs "A useful amount of math".

You can get hired after a brief boot camp, although it's not common.

A useful amount of math is like, ordinary differential equations in engineering school, since apps have taken over most use cases for simpler math.

The only direct use is to learn to access the "New way of thinking" math people talk about, and even that seems harder than making detailed to do lists.


Calculus is significantly more difficult and requires many times more studying than algebra.


Yes.


I'd agree: there's a common misconception that math is "objective." Even if one agrees on the axioms (Axiom of Choice [1]), one must reach consensus on definitions, which seem to be consciously chosen to allow generalization of theorems to more mathematical objects; that is, building connections between previously disparate fields of math, e.g. algebra and geometry, calculus and geometry, etc. Why have many domain-specific theorems when we can have one?

One could have a valid proof to a theorem, but there's the human element of having people understand and accept the result.

That said, I think when people speak of math, they speak of its application to the real-world, not the proofs.

[1] - From the Wikipedia page for the axiom, Jerry Bona has an amusing quote: "The axiom of choice is obviously true, the well-ordering principle obviously false, and who can tell about Zorn's lemma?"


> That said, I think when people speak of math, they speak of its application to the real-world, not the proofs.

I think it very much depends on who the people are. I'm a math teacher, and, when I speak of math, I definitely don't just mean its applications to the real world. I definitely think programming—in the sense of thinking about the craft, not just cudgelling the computer into doing what one wants—is good preparation for learning the proof-theoretic arts of mathematics.


Math hard. Can confirm. Starwind have math degree. Starwind much better at programming than Starwind ever was at math.


I spent 5 years doing mathematics and still can't wrap my head around a lot of maths. If you really want to see the difficulty in computer science, you'll have to go explore the theoretical stuff, which at the end is just maths.


Same here, I remember putting so much effort into a course on Riemannian geometry and still not really fully grasping the content on an intuitive level. That was by far the hardest thing I've ever tried to learn.

The only areas in CS that come close to that in terms of difficulty are discrete math/algorithms related ones and the CS chairs that are involved in such research also tend more towards the applied side of those topics. Really theoretical discrete math stuff resides in math departments most of the time.

Still, having also taken some discrete math classes, in terms of difficulty those somewhat paled in comparison to differential geometry/topology/abstract algebra kind of stuff. I can't even imagine how difficult it must be to be doing research in those areas.


A copy of the classic "Mathematics Made Difficult" (which, by the way, is perhaps the pinnacle of human literature) is saved to my desktop as mathhard.pdf.


I learned Haskell before I learned to write proofs. And yes it did help.

But no I don't think experience doing boot camps and churning out React apps and gluing together APIs would help with learning mathematics.

Go learn a language that seriously challenges you like Haskell.


What do you mean by “learning Mathematics?” It should be noted that significantly less than 50% of the US population even has to learn calculus, so getting into the stuff that most people would consider pretty advanced is not so hard, right?

I’d be curious what people’s coolest math tricks are that they’ve used at work. I did like one Taylor Series expansion and felt cool for a week.


Never mind calculus; a non-trivial fraction of the U.S. population has trouble learning elementary algebra. The fact that the U.S. K-12 educational system is notoriously a failure compared to otherwise similar countries should not be used to draw inferences about the inherent worth of any particular subject. Not least because that same system also often fails to teach functional literacy, or any amount of basic facts about society that may elsewhere be assumed to be known by any educated adult.


Then why do all New England states outcompete most European countries?

Why is most STEM research produced in Eastern Europe, the US, or China?

This really depends on what metrics you are using and has infinite room for gaming.

US undergrads (both native and foreign born) absolutely crush all of these nondescript places you are suggesting are better, and graduate programs make the gap even wider.

A non trivial fraction of those are educated K12 in the US system.


New England is mostly sanely governed, it shouldn’t be used as a stand in for the rest of the country.

The US was a neat idea we had but it went a little off-kilter around the second half of Pennsylvania.


Texas A&M is ranked 67, Urbana Champaign 41. (USNWR)

After you pull out HPYSM, these are not ridiculous rankings either.

Georgia Institute of Technology is 38th in the world by research impact, with the top ten or twenty being your standard prestige universities. (Elsevier)

If the US is so bad, why is a poorer southern state competitive with far more prestigious institutions from other countries you would perceive as better?

This is ahead of University of Tokyo, Urbana Champaign, etc.


I should have been more explicit; I was being a bit flip, trying to inject a little lightheartedness.

To specifically respond to just that one point, New England is in many ways an unusual region so I don’t think we should use the scores from here to come to any particular conclusions about the country in general.

This shouldn’t be taken as a response to your other points, most of which seem reasonable enough or at least I don’t know anything in particular about them.


Yeah sorry the issue that I run into frequently on this topic is "US bad because those poorer, more conservative states are terrible and if we just governed them like the utopia of New York City, we'd do much better in rankings", and while I think red states have their own failure modes - the stats show that at least some of them are doing just fine.

Sorry for being a bit reactionary there, sometimes a certain opinion is common enough that you respond like it's being stated due to similar comments being a prelude to it.


U.S. undergrads have to complete "general educational requirements" that are taken care of in high school in practically every other developed country. Why does that happen? Because U.S. colleges don't trust K-12 to provide a satisfactory education.


We do not trust the average student, however access to universities in the US is less gated on ability or achievement than most of the nations you're going to be comparing it to.


or that there's value in a college level comprehensive education as well


All other things being equal (including quality), high school is actually better than college at doing the "comprehensive education" thing. College level gen-eds are almost universally reviled as a pointless box-ticking exercise that gets in the way of specialized education. This particular dysfunction has effects even further out; U.S. college education pushes things out to the grad school level that are elsewhere part of the later years of undergrad.


You are right about the 50%, but you might as well have said “fewer than 99%”. I mean, the real number is far, far less. A very small percentage of US residents would recognize any calculus or be able to solve a simple linear equation.


Apparently around 15% of highschoolers take calculus. I dunno, I think “significantly less than 50%” has a generally right-ish feeling, sort of like, something that most people don’t do but not like super-esoteric. But it is subjective.

https://www.edweek.org/teaching-learning/calculus-is-the-pea...

It is possible that the majority of students forget it though, I tutored calculus for a bit and it didn’t always seem like their hearts were in it.


I think Yegge is actually completely correct. I started to learn to program when I was 14, and once I understood the concept of functions, I found it much easier to do my calculus and physics work. Fundamentally I understood how to break things down into computable steps.

Granted, I get the impression we might be overloading the term "mathematics".


Mathematics is not just functions and calculus.

And it seems to me that you learned the concept of functions through programming first, but there's no evidence that you couldn't have learned it from mathematics first. Functions are a pretty easy concept, so I think it's pretty easy to introduce them from a variety of points of view. So I'm not sure the anecdote backs up any argument that learning programming makes mathematics easy.

I am of the opinion that programming can be used to explore and learn mathematical ideas and am a big proponent of that, but that is something different than "I know how to program so mathematics will be easy now".


I always wonder about this. Beyond the foundations of maths and the function-and-calculus view, my question is about the data part of maths, not the process. Yes, one can view a matrix as a function, but could it also be data? Can one view the real number line, the complex numbers, or the integers as data?


For most mathematicians, calculus as often taught to typical undergrads is not "true" mathematics. It's just a tool for computation. For them, calculus is analysis (the theorems/proofs that are used to build up calculus).

So my question is: Did you study analysis and would you credit programming in helping you get good at it?


+1 for this comment. Calculus, i.e., calculating with mostly finite numbers and (usually) a known set of well-defined rules, is only a part of mathematics.

As a student currently learning Analysis and linear algebra, I find it far more complex and abstract than calculus. It is not necessarily harder to learn, but different. And adapting to this paradigm takes time (and effort).

It is similar to learning Assembly as a Python developer. Knowing Python will help with Assembly. But the levels of abstraction are obviously different and will require a lot of learning.


I don't understand this comment. Calculus as taught to mathematics undergrads is analysis, right?

Where is calculus "often taught" without "theorems/proofs that are used to build up calculus"?


In most US universities, the courses called "calculus" are mostly about computing integrals and derivatives. Yes, they'll have some theorems (Fundamental Theorem, Mean Value Theorem, etc.), but most of the problems are about computation rather than proving.

Unfortunately, since engineering students outnumber math majors by a large margin, the departments cater to them and not the math majors. The latter study analysis in later years - usually third or fourth.[1] Universities with strong math programs may offer it in the first or second year.

[1] I just checked my undergrad's program. They begin taking analysis in their 4th year, and it's offered only one semester a year!


That's just sad.

I was taught Calculus in high school, years 11 and 12 (two optional senior years for those that want more education prior to university, etc rather than transfer to a trade apprenticeship or leave school altogether).

That was a mixture of basic theory and proof and some computation (eg: derive an expression for the volume of intersection of two pipes at right angles, etc).

University went straight into heavy analysis and foundations (for Math 100 - math for the serious (math, physics, hard chemistry)), with separate streams for "casual math" - engineering, business, law, etc.

Engineering math prepped people for calculating dynamics and kinetics with varying loads, masses, thrusts, harmonic forces, network propagation, mesh computations, etc.

This was Australia in 1980.


> They begin taking analysis in their 4th year, and it's offered only one semester a year!

This is insane to me. In the UK about ten years ago, on literally the first day of my degree, my first class was real analysis. Yes, it was the easy stuff like proving sequences and series converge, various things about continuous functions, but we learnt how to prove it all and the exam was all about proving various things. And we built up to harder stuff as the year went on.

What is even happening if math majors aren't studying analysis until their fourth year?

Sorry I'm so incredulous, it's just that I literally don't know what I would've been studying if analysis had been delayed so much.


Not US, but in the UK's A-level Further Maths equivalent, calculus primarily involved applying intermediate rules (chain rule, trig identities, etc.) to evaluate derivatives/integrals of elementary functions, plus a bit on geometric and arithmetic series.

Analysis discusses less well behaved functions and spaces than these.


You can do the first part without even relying on analysis in a mathematical sense. You simply define a differential algebra, by introducing a derivation function that just happens to respect the correct rules. Then "calculus" is the topic of how to perform computations in such an algebra. Note however that you do need analysis to rigorously address other parts of a typical "calculus course", especially those dealing with infinities, sequences and etc.
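For what it's worth, here is a rough sketch (my own toy representation, not from any textbook) of what "a derivation function that just happens to respect the correct rules" can look like in code: expressions are small tuples, and the derivative is defined purely by formal rules, with no limits or analysis anywhere.

    # Expressions: numbers, the symbol "x", or tuples ("+", a, b), ("*", a, b), ("^", a, n).
    def d(e):
        """Formal derivative with respect to x, defined rule by rule (no limits involved)."""
        if isinstance(e, (int, float)):   # constants differentiate to 0
            return 0
        if e == "x":                      # the generator differentiates to 1
            return 1
        op, a, b = e
        if op == "+":                     # linearity
            return ("+", d(a), d(b))
        if op == "*":                     # Leibniz product rule
            return ("+", ("*", d(a), b), ("*", a, d(b)))
        if op == "^":                     # power rule, assuming b is a constant exponent
            return ("*", b, ("*", ("^", a, b - 1), d(a)))
        raise ValueError(f"unknown operator {op!r}")

    # d/dx of x^2 + 3*x, returned as an unsimplified expression tree
    print(d(("+", ("^", "x", 2), ("*", 3, "x"))))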


Not them but I def would. I built a fintech/econometric-lite system in python years back and it was mostly just taking in a ton of “obvious” knowledge and realizing the cool stuff you could do in practice


functions in traditional programming and functions in math are two different things entirely. They should have been called procedures (I think there's historical debate about this but I can't find the reference). Obviously, functional programming is an attempt to address this.


I grew up programming (from 10 or so) and couldn’t agree more. I have almost no formal education yet I’ve picked up enough math to do ML and (mediocre) cryptography and keep up with people who have scary sounding degrees. in the process I’ve fallen in love with math and think as programmers we haven’t been meeting mathematicians half way.


> get the impression we might be overloading the term "mathematics".

We are; the challenge is there is a near infinite difficulty in both "programming" and "mathematics" and at a certain point, the skill sets diverge greatly.


Implementing numerical solutions for math concepts can give you insight into the math, e.g. the Newton-Raphson method or the FFT. I would not sell yourself short on being able to understand. You may not be skilled at doing all the symbol manipulation required for what people consider typical "math", but that does not mean you cannot understand it.


I do agree that programming is very useful as a medium of exploration, a la what you said. I really like how the book Turtle Geometry approaches this. But that is really something different than what the quoted text is attempting to get at.

> I would not sell yourself short on being able to understand.

I have a master's in mathematics and have continued to take courses while working full-time. It's just that I'm more than aware of the amount of mathematics I do not know compared to an active graduate student in mathematics and above. Or maybe you meant the royal you.


I am more of a recreational mathematician. There seems to be a lot of unnecessary math phobia lurking about. I simply like to encourage people that sound like they fall into that category - obviously you are not one of them :)


The only thing comparable is maybe drawing or playing an instrument. Math can never be learned from a textbook.

Anything that's just algorithmic steps would be done by a computer, so the useful math people want to learn requires some kind of insight or new mode of thought, or advanced methods that have not been made into an app yet.

I suspect when people say math is easy, they mean arithmetic and pre-algebra, and they're not concerned with practical applications at all, just the idea of general education and new ways of thought, and who knows if the level of math they are talking about is even enough to indirectly do much in daily life.


I agree, most modern academic mathematics is essentially a codebase with too much abstraction, impressive, but insane. It doesn't have to be harder, and programming is the out. So, in my opinion, you are just learning the wrong kind of math. Ideally, programming is exactly as hard as math.

Closer to the right kind of math: https://sites.math.rutgers.edu/~zeilberg/GT.html


For me the opposite was true. My mathematical education made it incredibly easy to pick up programming and programming languages, everything was somewhat familiar and the concepts just came naturally.


How did you feel when you first came across a global variable, or even a pointer? It seems to me that math-first people would probably find C to be an abomination.


I believe one thing which came really naturally was to think in virtual machines, to see a programming language as something which acts upon a fictitious environment, where certain instructions map to certain consequences. Of course programming and mathematics are very different activities, but one core principle that I always relied on was thinking in abstractions. What options do I have to manipulate the environment, and what invariants are there? How are complex things constructed out of others?

>It seems to me that math-first people would probably find C to be an abomination.

I certainly don't. You might do so if you wanted programming to be an expression of pure mathematics, but I do not think that is the right approach. C does well for what it is: an abstraction over an underlying, real machine, and thinking of it as such an abstraction is the right thing.


Technically, that's the converse, not the opposite ;-)

Someone may be led into thinking you're countering the OP.


Most definitely. For me, I have never used any specific mathematical concept in programming aside from some side projects for using programming to explore mathematical ideas. But my so-called training in mathematics taught me both highly abstract thinking and deep, concrete, in the weeds thinking, and that's what really comes in handy in programming.


I disagree because Math is programming. All those symbols you see map to a set of steps (a program.). It’s just knowing what subroutine every esoteric symbol stands for that’s hard.

I will agree it’s been more difficult learning math than programming for myself as well: but that’s because math is geared and targeted for people who like doing symbolic logic by hand. Math people think we’re a level below them (we are in some ways from a working perspective) so they tend to write off complaints like this as us just wanting to make math more like programming.

The fact we don’t have a nice and intuitive way of writing math via a keyboard is proof that these two fields, which should be tightly coupled, are not on the same page.


>All those symbols you see map to a set of steps (a program.)

Simply not true. Most mathematical statements, e.g. proofs of existence have no relation to a "program".

>math is geared and targeted for people who like doing symbolic logic by hand.

No, it is not. There is absolutely nothing interesting about symbol manipulation, it is always the least interesting part of a proof. It usually is the part the author handwaves away, while focusing on the actual interesting parts, the idea behind the proof and the possible intuitions for them. Textbook might include them, but as training and because you need to be more explicit when teaching.


>Simply not true.

Okay. What is a proof other than a step-by-step explanation for why something’s true? You’re getting caught up in “program” when it’s objectively the case that all math follows a series of steps. A lot of those steps are “handwavy”, I’ll give you that. That’s not relevant to what I said though.

>there is nothing interesting about symbol manipulation

Cool. You missed my entire point again. Math education is geared towards a certain set of people who pick up on (and gain an interest in) the language of math. Only once you get to higher level math do you even start to get alternative visualizations etc (at which point you’ve weeded out a ton of people who would have benefited from e.g. visualizing numbers as groups of shapes). There are a million ways to teach math and we’re leaving a lot of people behind. That was my point


>Okay. What is a proof other than a step by step explanation for why something’s true?

Cooking instructions are a series of steps as well. Would you also claim cooking, mathematics and programming really are all the same thing?


The first part of your answer is incorrect about programming in a general sense (not the particular software programming most of HN does). I push it so far back as to call it programming in a computer science sense, which is very simply just proof theory encoded into a system.

There is a reason Turing is considered one of the greatest minds to ever live. He didn’t just invent a concept. He invented a completely new branch of science. We would’ve gotten there eventually but his idea to solve the Entscheidungsproblem using his machine was such a step ahead of the times that we christened him the father of an entire science.


Errr, Alonzo Church solved it first, Turing followed .. and both of their independent methods were heavily based upon similar earlier work by Kurt Gödel, with Church also incorporating ideas from Stephen Kleene.

There is no doubt that Turing was bright, very bright indeed, but next you'll be claiming he cracked the Enigma Code or something.


A constructive proof of existence is exactly a program. (It might not always be a program for a Turing machine, because "constructive" and "computable" are not exactly the same - but that's beside the point.) Even a non-constructive proof of existence for x can be significant in a programmatic context; it tells you that you can posit an oracle for x (e.g. asking for it to be input by the user, introducing further assumptions based on some special case, etc.) without thereby crashing the program or causing it to behave incorrectly.


>but that's beside the point

It seems to me that it is an important point though.

> Even a non-constructive proof of existence for x can be significant in a programmatic context

Which has no relation whatsoever to proofs and programs being the same thing.


A type-level program is still a program. Non-constructive proofs are addressing the question "will this program crash or go wrong if I extend it to do X, regardless of how I achieve that?", which is exactly the domain of type-level programming.


Programs and proofs are the same thing though according to the Curry Howard Correspondence, if I'm not wrong.


> Math is programming.

That doesn't make any sense. What do you mean by that?

> I agree it’s been more difficult learning math than programming for myself as well: but that’s because math is geared and targeted for people who like doing symbolic logic by hand.

Mathematics is not about symbolic logic. Mathematics is the study of idealized structures, their properties, and their relationships. Symbols are just a convenient shorthand. They are not the mathematics in and of themselves.

> The fact we don’t have a nice and intuitive way of writing math via a keyboard is proof that these two fields are not on the same page.

What does that have to do with anything? Although LaTeX and the like are pretty decent at it, there's a lot of things we can't do easily via a keyboard. Why is that a constraint on anything or relevant?

Not trying to be provocative, but I honestly have little idea what you're talking about.


I think the math you've been exposed to is mostly on the computational side (compute an integral, solve an equation, etc).

Much (most?) of math is quite different from it. Proving that there is a well ordering of the reals, and simultaneously proving that it is impossible to show you such an ordering: Very different from skills needed in programming.


I disagree. You’re only able to prove that ordering of the reals (and that it’s impossible to show) because you are computing the abstract structure underlying the reals (which is based on some lower level ideas etc.) Just because a problem is computationally hard with a step by step CPU doesn’t mean it isn’t computation.

This is actually a problem of interest to me, so I’ve definitely been exposed to it and the limits of modern computation. But I’m not speaking strictly about the modern day CPU.


> You’re only able to prove that ordering of the reals (and that it’s impossible to show) because you are computing the abstract structure underlying the reals (which is based on some lower level ideas etc.) Just because a problem is computationally hard with a step by step CPU doesn’t mean it isn’t computation.

The set of computable real numbers is countable and thus has measure zero. In other words, almost all real numbers are non-computable, and almost all has an exact definition.

So you can’t compute the real numbers unless you’re meaning something else by computing.


If you want to get technical then programming is also math, provably so.


He might be describing one bifurcation in the world of math, where constructive proofs reign, but overlooking nonconstructive "existence" proofs.


I wrote a book based on this premise: www.pimbook.org.

The ebook is pay what you want.


If I may, I would like to add that the statement is not even wrong …


Yes.


I used to work on these problems in school all the time, but once I decided to join industry I encountered a bunch of these Lisp-style engineers who love computation in the abstract but not the application thereof. Scala seemed to attract a lot of folks like this. Maybe if I had gone into scientific computing I would have found more.

I get to use these skills every so often though. When I was designing one of our early two-level caches at $WORK, I wrote an analysis of chosen TTLs and jitter values by looking at microservice latency distributions. I've debugged a nasty concurrency issue by modeling it in TLA+ and then fixing the bug (full disclosure: once I thought through the TLA+ model, the bug was fairly obvious to me, but the modelling itself was the valuable exercise). I've used MILP solvers for capacity planning our boxes. I've designed queue backpressure through probabilistic analysis. Each of these things has been crucial in designs I've made, but I only encounter a problem like this once every few years. (And these days I often write more docs than code.)


I quite frequently encounter code that can be made much faster (at no significant loss of value generated) by sampling or using other statistical techniques to reduce precision.

Also flows and buffers in the software that can be improved with queueing theory. Logic that can be simplified with boolean algebra.


"Writing software" = "writing down math".

Consider:

* "Writing software" means arranging symbols (bits in a machine language, UTF or ASCII characters in a high-level language) in certain permitted ways, to transform an input sequence of symbols (e.g., a stream of byte values representing user actions from a video game controller) into an output sequence of symbols (e.g., byte values representing pixel RGB colors for display on a screen).

* "Writing down math" (e.g., to prove something, or to solve a problem) means arranging symbols ('x', '2', '+', etc.) in certain permitted ways, to transform an input sequence of symbols (representing mathematical notions) into an output sequence of symbols (representing other mathematical notions). Turing, Church, Curry, Gödel, and many others realized this in the 1920's and 1930's.

That said, most self-described "software developers" are typically working to solve relatively simple problems for immediate practical application, whereas most self-described "mathematicians" are typically working to solve highly complex problems, often highly abstract in nature, without regard for practical application.


I fear that's a very narrow view of mathematics. Nevertheless, most of the time you can see the output of what you created while programming, check if it's the correct result, etc. I don't think it's like that in math, where you deal in a much more abstract realm.


"Mathematics" is not the same as "writing down math."


I stand by what I said, applicable even to proofs, but I take your point.


:-)


Mathematics is not about computation. Certain means of arranging computation are entirely irrelevant to mathematics, computation is a mathematical tool, not an end.


I never said that mathematics "is about computation."

What I did say is that writing down mathematics is equivalent to writing software.

Both require the use of a formal system.


>I never said that mathematics "is about computation."

But programming is about computation. If mathematics isn't about computation as well, then they are not alike.

>What I did say is that writing down mathematics is equivalent to writing software.

It is not. That is plainly false. E.g. mathematics considers objects which are not computable and makes non-computable calculations with those objects. I am aware that you can encode certain formal mathematics into certain software, but that is like saying cooking is like programming because you can encode recipes as a program.

>Both require the use of a formal system.

So what?


>> What I did say is that writing down mathematics is equivalent to writing software.

> It is not. That is plainly false.

Whether the two look the same or not, maths and computer programs are in fact fundamentally isomorphic. [1]

[1] https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspon...


>isomorphic

Certainly not every computation is a proof. What statement does:

void f(){ printf("?"); }

prove?


The fact that you have found a function of type () -> () is a proof that true implies true. This is the Curry-Howard isomorphism.

Not a particularly interesting proof.
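For the curious, here is roughly what that looks like written out in Lean (a toy example of my own): the proposition is a type, and the program inhabiting that type is the proof.

    -- Curry-Howard in one line: the identity function on evidence for p
    -- is simultaneously a program and a proof that p implies p.
    theorem p_implies_p (p : Prop) : p → p :=
      fun hp => hp   -- "given a proof of p, return it"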


But surely there are uncomputable objects which you can make mathematical statements about. How do you encode them into a type signature?


There are no closed form solutions to uncomputable problems. You can encode them in math. You can also encode them in a program. The result is the same.

In fact, the absolute limit of computability is so interesting that people have explored it in the abstract sense. For reference, see busy beaver numbers on Wikipedia or Scott Aaronson’s awesome essay on finding the bigger number.


> But surely there are uncomputable objects which you can make mathematical statements about.

Absolutely, one famous example is the halting function that has type (program description, program input) returns boolean.

A symbolic formal system can "reason about reason" and manipulate descriptions of objects that are themselves uncomputable, countably infinite, or even uncountably infinite.


It probably doesn't prove anything more than ∫₀¹ x dx.

Also, just like there are trivial, incorrect or incomplete proofs, there can be trivial, incorrect or incomplete programs.


The suggested "efficient" solutions for fibonacci and factorial only work on small inputs as they return the result as a long int. For these small instances the use of floating point functions like pow, sqrt, and exp is likely less efficient than a simple iterative solution. For larger instances using bignums as output, floating point computations do not even offer an alternative.

It's true though that mathematics offers faster integer only (e.g. matrix based) methods of computing fibonacci.


Indeed, I kind of stopped reading the article there.

The other issue is his picking on Lisp programmers with this example. If you open books on programming in C, they will almost always have either the iterative or the recursive solution - not the closed-form one. So why is he picking on Lisp programmers in particular?


The matrix based method for Fib can be understood as applying the closed-form solution over Q(sqrt5). OP seems to miss this point.


My thought at this point in the original article was:

Well - if you want to demonstrate how mathematics helps here, then you should mention that the n-th Fibonacci number can be computed via the n-th power of the matrix [[1,1],[1,0]], and that the most efficient way to compute powers (in any associative domain) is the repeated-squaring algorithm.
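A rough sketch of that suggestion in Python (my own code, written for illustration): raise the Fibonacci matrix [[1,1],[1,0]] to the n-th power by repeated squaring, which needs only O(log n) matrix multiplications on arbitrary-precision integers.

    def mat_mul(a, b):
        """Multiply two 2x2 integer matrices."""
        return [[a[0][0] * b[0][0] + a[0][1] * b[1][0], a[0][0] * b[0][1] + a[0][1] * b[1][1]],
                [a[1][0] * b[0][0] + a[1][1] * b[1][0], a[1][0] * b[0][1] + a[1][1] * b[1][1]]]

    def mat_pow(m, n):
        """Raise a 2x2 matrix to the n-th power by repeated squaring."""
        result = [[1, 0], [0, 1]]  # identity matrix
        while n:
            if n & 1:
                result = mat_mul(result, m)
            m = mat_mul(m, m)
            n >>= 1
        return result

    def fib(n):
        """F(0)=0, F(1)=1; uses [[1,1],[1,0]]^n = [[F(n+1), F(n)], [F(n), F(n-1)]]."""
        return mat_pow([[1, 1], [1, 0]], n)[0][1]

    print(fib(10))                      # 55
    print(fib(1_000_000).bit_length())  # roughly 694,000 bits, still quick to compute

The O(log n) count is for matrix multiplications; as noted below, the integers themselves grow to O(n) bits, so the true bit complexity is larger.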


There are many ways to obtain the formula mentioned in the article. One of them is by diagonalizing this matrix and then applying exponentiation. How can another algorithm be better than constant time in the most general case?


No - there isn't really an O(1) solution. And the reason is that the Fibonacci numbers grow without limit. So there can't be an O(1) algorithm - even just writing down the answer takes O(N) - because the answer has O(N) bits.

Concretely Fibonacci(1480) is the largest Fibonacci number that fits into a double. So for higher Fibonacci numbers you need to compute this with arbitrary size integers (or floats). And then look at it in terms of bit complexity (i.e. the time to compute a product grows with the size of the integers).

Put differently. The "constant time" algorithm only works for N up to 1480. And if you limit N, then it doesn't really make sense to talk about the asymptotic complexity.

I believe the optimal algorithm is to compute the Fibonacci number by computing the N-th power of the matrix [[1,1],[1,0]], and compute the power using the repeated squaring algorithm. There are alternative formulations of that using identities for Fibonacci or Lucas numbers, but those identities basically correspond to squaring the matrix.

If you really want the asymptotically fastest algorithm you must use the fastest algorithm for integer multiplication (Harvey and van der Hoeven’s).

But even with the naive multiplication for arbitrary size integers you can compute Fibonacci(1000000) - which is a number with more than 200'000 digits - in less than a second. ;-)


Well, with Scheme, the difference between computing fib(n) recursively vs. iteratively shows up really fast.


I basically hate math discussions on HN. Why? Because it seems like there's two (at least) "camps" in terms of how they interpret what "math" even means. And the various camps constantly talk around each other, with neither camp seeming to realize that they're arguing about completely different things. So they keep arguing, nobody gets anywhere, and the whole thing is largely a cluster-fuck.

So what do I mean by "camps"? Well, the most obvious dividing line is between people who see "math" as something that everybody does, and that includes everything from elementary school addition up to, well whatever, and has a primary goal of computing something. Then you get the camp who see "math" as the exclusive domain of mathematicians, look down on any maths class at a level lower than abstract algebra or real analysis, and consider the primary goal of math to be more math (eg, proving things that we didn't already consider proven). And the latter often tend to look down their noses at anybody who dares talk about "learning math" but without a goal of becoming a mathematician.

sigh

My position is that "math" includes both of those notions, and that in most conversational contexts you have to explicitly define which one you are talking about up front in order to have even a chance at a productive conversation.


This is not the divide I see, but it's the divide I hear many people say they see, and so I've always wondered if the majority of people actually fall into these two camps. From where I happen to stand, I see four camps or so:

1 · those who see mathematics as a particular subject area, body of knowledge, or set of topics, with limited applicability to programming (which, given that definition of mathematics, I think is true)

2 · those who also see mathematics as a particular subject, but with almost unlimited applicability (which I think is false, except maybe on a technicality)

3 · those who see mathematics as all-encompassing and including every kind of careful thinking or precise action (which I don't think really matches serious use of the term, nor is a useful definition)

4 · those who see mathematics as a particular set of skills which are most commonly applied to the topics that people we call "mathematicians" study but are also applicable to many other problems (this is the camp I happen to be in)

So for me, at least, mathematics is primarily a way of thinking and playing. (More specifically, a collection of methods for problem-posing and problem-solving that are rooted in formal logic and equivalences / transformations of representation.) Mathematical training and experience with proofs helped me cultivate that, but studying at university isn't the only way.

I once asked my first mathematics professor whether I could call myself a mathematician even though I was just a first-year student; they replied that anybody who does mathematics is a mathematician. That idea has really stuck with me. Although it's helpful to have jargon and wonderful to have mathematical tradition, anyone can do mathematics and be a mathematician, regardless of age and background – the capacity is the common heritage of humankind, like the capacity for art or language. And like art, it can be directed toward any goal or be an end unto itself.

[I've never agreed with the opinion that mathematics exists for the sake of physics or real-world problem solving, but enough people I respect hold that opinion that I'm wary of dismissing it.]

Now to the matter at hand – does mathematics (sense 4) help with programming? For me, absolutely and without question. It's perhaps the most important set of skills I use while programming (but not the only one).

edit: I distinguish computer science (a branch of mathematics that professional mathematicians study) from programming (designing correct programs) from coding (communicating programs to a computer so that they can be run), and I assume that programming and (to a lesser degree) coding are mostly what people are talking about here.


> From where I happen to stand, I see four camps or so:

I have no doubt that there are more ways to segment people in terms of how they view math, then the simple model I mentioned. That just happens to be the divide that I feel like I see most commonly and that often leads to fruitless discussions.

> they replied that anybody who does mathematics is a mathematician.

The problem with that is the definition of "doing mathematics". And that's where I see the divide I mentioned above. Some people would consider "finding the derivative of f(x) = x/x^2" to be "doing math", whereas other people say that's "mere computation" and think of math only as things like "proving why the derivative of f(x) = x/x^2 is $WHATEVER".

I suppose one might say the former is "using math" where the latter is "doing math", but in popular usage it seems like people freely intermix those notions, and people who feel strongly about it one way or the other get their knickers all wadded up over it and hilarity ensues.


> They seem to agree on one thing: from a workaday perspective, math is essentially useless.

The creator of Dilbert advocates stacking, namely learning multiple skills and combining them to achieve better results than any single skill can. His advice applies to maths as well. I work on distributed systems as a generalist, yet I find maths, time and time again, career-changing. A few typical examples: queuing theory that helped improve the latency of my services by more than 10x. Statistics to identify patterns in data, which led to a new product. Time series analysis that led to a new system. Data mining and information retrieval in search and recommendation for continuous improvement of my search product. Linear algebra, calculus, and combinatorics as foundations to identify or prove certain properties of my systems for later optimization. And in general, the ability to understand papers (or at least know what to learn to unblock myself) to stay on top of what's going on in exciting fields.

One does not necessarily need maths to build systems, but boy it is satisfying and career-rewarding when I actively look for real problems that scream for some maths. What's most amazing is that we don't even need graduate-level maths. Entry-level college maths works wonders most of the time.
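
To make the queuing-theory point concrete, here is a minimal sketch of the M/M/1 mean-latency formula W = 1/(mu - lambda) in Python (my illustration, not the parent's code; the request rates are made-up numbers):

    def mm1_mean_latency(arrival_rate, service_rate):
        # Mean time in system (waiting + service) for an M/M/1 queue,
        # in the same time units as the rates. Requires arrival_rate < service_rate.
        assert arrival_rate < service_rate, "unstable at or above 100% utilization"
        return 1.0 / (service_rate - arrival_rate)

    service_rate = 1000.0  # hypothetical capacity: 1000 requests/second
    for utilization in (0.5, 0.9, 0.99):
        latency_ms = mm1_mean_latency(utilization * service_rate, service_rate) * 1000
        print(f"{utilization:.0%} utilized -> {latency_ms:.0f} ms mean latency")

Latency blows up as utilization approaches 1, which is exactly the kind of effect queuing theory makes visible before it bites in production.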


> Rather, mathematics is a tool for understanding phenomena in the world: the motion of the planets, the patterns in data, the perception of color, or any of a myriad things in the world that might be understood better by manipulating equations.

OK, so on one hand I love this point, and I'd love for it to be more broadly understood and appreciated, especially since my educational background is Mathematics and my CS has been practical, half self-taught, and frankly patchier than I'd like.

But:

> Fortran-school programmers view the computer as an advanced tool for doing mathematics.

If we're understanding "mathematics" as numerical/analytical work + engineering (maybe applied mathematics) I can see this.

But the Lisp programmer talking in terms of recursive Fibonacci definitions is doing something that can quite adequately be described as mathematics, though it's also mathematics to figure out the closed form and understand why you might or might not use it. I'm not sure it's mathematics to simply know a given closed form, though it's something mathematicians sometimes do.

And I'm skeptical of the claim that Lisp programmers don't do all this stuff: recursive definitions of Fibonacci numbers are usually in textbooks to teach recursion rather than to present an optimal way of computing Fibonacci numbers, because it's simpler than starting with dynamic programming, and you're gonna need recursion to effectively solve some problems.

Not only that, but at a higher level, I think a lot of developers (Lisp included) are already reaching into the very mathematical skill of domain modeling at one level or another -- often, at first, with a different set of tools than a mathematician might bring, but it's a similar kind of work. If Miller's overall point is that we all could do a better job with more of a mathematician's tools, I agree; but then again, as someone who came into industry with a math undergrad's tools, the utility of those might be overemphasized here.

Or maybe I just don't appreciate what I already have.


I think the author is making two points among others:

1. LISP-based texts/books always present the same two clichéd examples

2. They never go beyond (1), given how much they talk about recursion. He mentions the sqrt of 5 in the explicit formula for the Fibonacci sequence and how that could be explored in more detail to find out where it comes from. For that you need to know [0]. That's part of a larger suite of theorems on sequences. This stuff, together with the theory that surrounds the Gamma function, can easily take up a whole book. But there are a ton of books that treat either one really nicely: most textbooks on discrete math and real analysis.

[0] https://ibb.co/TR1f5Gz


I see someone disagreed with me. Not sure about what, but just in case I will show how to derive a formula for Fibonacci using the above. You be the judge if it belongs in a LISP programming textbook, even though this particular result is very elementary.

Recurrence relation for Fibonacci is F_k = F_(k-1) + F_(k-2) for k >= 2, with F_0 = F_1 = 1. Also, t^2 - t - 1 = 0 implies t = (1 + sqrt(5))/2 or (1 - sqrt(5))/2. Both of these facts satisfy the conditions of the linked theorem, and so we have F_n = x((1 + sqrt(5))/2)^n + y((1 - sqrt(5))/2)^n for n >= 0.

Now F_0 = x + y = 1 and F_1 = x(1 + sqrt(5))/2 + y(1 - sqrt(5))/2 = 1, from which it follows that x = (1 + sqrt(5))/(2 sqrt(5)) and y = -(1 - sqrt(5))/(2 sqrt(5)), meaning

F_n = ((1 + sqrt(5))/(2 sqrt(5)))((1 + sqrt(5))/2)^n + ((-1 + sqrt(5))/(2 sqrt(5)))((1 - sqrt(5))/2)^n.
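
A quick numerical check of that derivation in Python (my sketch, using the same F_0 = F_1 = 1 convention as above):

    import math

    SQRT5 = math.sqrt(5.0)
    PHI = (1 + SQRT5) / 2   # (1 + sqrt(5))/2
    PSI = (1 - SQRT5) / 2   # (1 - sqrt(5))/2
    X = (1 + SQRT5) / (2 * SQRT5)
    Y = -(1 - SQRT5) / (2 * SQRT5)

    def fib_closed(n):
        # x*phi^n + y*psi^n from the derivation, rounded back to an integer
        return round(X * PHI ** n + Y * PSI ** n)

    def fib_recurrence(n):
        a, b = 1, 1  # F_0, F_1
        for _ in range(n):
            a, b = b, a + b
        return a

    assert all(fib_closed(n) == fib_recurrence(n) for n in range(40))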

This math (complete with a universe of theorems and their proofs) can obviously be extended in many different directions in such a way that it can take over your whole book.


I agree with you that all that stuff is off topic in a textbook about Lisp. Some off-topic material is necessary in order to connect the programming topic with the real world. Too much of it will just distract from the focus and add bulk to the page count.

That Fibonacci has a closed form is completely irrelevant to teaching recursion as a programming technique. It could be mentioned in a small footnote giving some external reference. More relevant is the fact that the naive Fibonacci is terribly inefficient and can be vastly sped up by memoization. Even that is a problem that's not specific to the language and how to use recursion in that language. It has to do with using recursion well in any language, and that belongs in an advanced chapter of a book which is not mainly about Lisp but about learning programming using Lisp.
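
For instance, in Python (a sketch of the memoization point, not something from the parent comment):

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def fib(n):
        # The naive recursive definition; the cache cuts the call tree from
        # exponential size down to O(n) distinct calls.
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    print(fib(200))  # returns immediately; the uncached version would effectively never finish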

Lisp already has a reputation for being scary, which is unfounded, but there it is. A Lisp book which goes into numerous mathematical rabbit holes will probably just contribute to that meme and have a discouraging effect.


I don't think anyone finds recurrence relations, closed form expressions, or their derivation controversial or disagreeable in any way.

On the other hand, the implication that Lisp textbooks -- or other language texts that use Fibonacci or similar examples to illustrate recursion -- shouldn't be using simple recursive definitions when a closed form exists may well raise some eyebrows. What do you want, Collatz sequences? Additionally, there's an air of attacking the sophistication of devs who choose Lisp specifically, which is the kind of thing people frequently do find disagreeable (even if there's a reasonable case for it).

Fibonacci and other linear recurrence relations are used because they're simple and often familiar intros that can help conceptualize the way recursion can reduce problems to smaller versions of themselves (which is very much mathematical thinking). They're not there because they represent limits of how Lisp devs generally do or should think.


> On the other hand, the implication that Lisp textbooks -- or other language texts that use Fibonacci or similar examples to illustrate recursion -- shouldn't be using simple recursive definitions when a closed form exists may well raise some eyebrows.

I didn't imply that. I think the author laments the lack of math in LISP programming books beyond the couple of clichéd examples he gave, i.e. the "simple recursive definitions". He is not saying you shouldn't use them; he is saying that they are not adequate, that it's too little, and that LISPers should go beyond that. My point was that you can definitely go above and beyond into math territory complete with a coherent body of theory (a bundle of theorems and their proofs), but that will take you way off course, especially given that regular math textbooks contain all the relevant info. I am not even arguing with the rest of what you said.



The idea that "Lisp programmers" are somehow averse to mathematics is historically untenable. Macsyma was written in Lisp. And Macsyma was one of, if not THE, most important applications for the Symbolic Lisp machines (the ancestor of all later Lisp machines). Many later systems (Maple, Mathematica, ...) were written by people who would consider themselves part of the Lisp crowd. For example, all of those systems had automatic memory management with garbage collection, and most had closures/lambdas, etc.


Yes, the numeric language does numeric things. But these examples of closed-form solutions ignore that numbers on computers are not the usual mathematical objects; they have finite size and precision, so it is hard to say immediately whether these solutions, evaluated in floating-point arithmetic, provide the same precision. Of course the precision can be calculated and improved with analysis, but this falls under "it doesn't work but it's fast" territory (unless the limited precision is acceptable; I'd personally keep the result as a float then, to expose that to the client of these functions).
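
As a concrete illustration (my sketch, using the standard F_0 = 0, F_1 = 1 indexing): the closed form evaluated in double precision agrees with the exact integer recurrence only up to a point, and the loop below finds where it first breaks rather than assuming a particular cutoff.

    import math

    SQRT5 = math.sqrt(5.0)
    PHI = (1 + SQRT5) / 2

    def fib_float(n):
        # Binet-style closed form in doubles; the (1 - sqrt(5))/2 term is negligible and dropped.
        return round(PHI ** n / SQRT5)

    def fib_exact(n):
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    n = 0
    while fib_float(n) == fib_exact(n):
        n += 1
    print("double precision first disagrees at n =", n)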

I would be remiss not to mention the following story:

There is a story that Ken Iverson, the inventor of APL, was passing a terminal at which a Fortran programmer had just typed:

    I = I+1 
Ken paused for a moment, muttered “no it doesn’t”, and passed on.


Cool story!

The only way I = I + 1 holds is if I is infinity.


There is some irony in the juxtaposition with the statement about floating-point arithmetic above: I = I + 1 does hold if I is a sufficiently large float.
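
For example, with Python's double-precision floats:

    i = 2.0 ** 53      # beyond this, consecutive doubles are more than 1 apart
    print(i == i + 1)  # True: the added 1 is lost to rounding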


Project Euler's higher-numbered problems often require a combination of analytic solutions to vastly simplify some parts of the search space, and brute force for the rest. Without math or lucky guesses—where you've googled a partial solution you don't understand, or you've naively found a function which seems to get the right answer for test cases, but you can't be sure it generalizes—you're left with brute force methods that won't finish in a reasonable amount of time, possibly not even before the Sun turns into a red giant.

Those kinds of problems are not relevant to most everyday software engineering, but in the few cases when you need math, you really need it or you have to scrap the entire feature for being too slow or impractical.

Also, designing neural nets...


When I was first learning to program, I was shocked at the logical mistakes and errors I was making. I thought these were an initial bout of bad luck that would pass. But no! And I learned the craft of debugging before my programming hobby could really take off.

Programming takes high-precision thinking. Learning mathematics is what has tuned my mind to better, more precise thinking. So I am grateful for this and feel I am a much better programmer for it. It also seems to sharpen up one's debugging skills.

There are other things that help as well. I'd say learning to write well helps a lot, too. Programming is communication (by way of the source code you leave behind for others to read and interpret). A deft hand for exposition sits at the centre of good naming practices for code.


I studied math for several years and got honors and everything in it so I thought I would be among the top math students at my university. Then I took the Putnam and signed up for PhD level classes which quickly changed my view. Still, programming does become much easier if you're comfortable working with these abstract concepts.

Higher level math is at its most fun for me when you're solving difficult problems with peers. I wish there was a startup that could recreate that experience outside of a university setting.


I define a hack as a use of something in a way it was not intended to be used to achieve a goal. This definition is missing from this essay, and the ones it does present are poor IMO. Under my definition, hacking falls entirely in domain of the Lisp programmer. Fortran engineers are the ones who get fired for hacking.

There is nothing inherently wrong with the "Fortran" approach. Sometimes, it is necessary. But it's clear that the Lisp-style solution is always preferable unless proven inadequate for the problem at hand.


"Again, no recursion is required as long as one knows that a factorial is actually a special case of the gamma function. (The implementation of log-gamma is usually a polynomial approximation which requires constant time to evaluate.)"

A prime idea behind using recursive techniques is that for many complex problems you do not have to know the "special case" as you can break complex problems into simpler pieces that can be solved via recursion.


I would be surprised if there is not some kind of loop in the code that is used to evaluate the gamma function. You can also define the factorial as, for example, the number of different permutations of a set of a given size.


Outside of edge cases, C++'s lgamma (log of gamma) uses a Lanczos approximation with 8 iterations, and gamma is just exp(lgamma).
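
The same idea is easy to try from Python's standard library (a sketch; the round trip through exp() is only exact for modest n, because exp amplifies the tiny error in lgamma):

    import math

    def factorial_via_lgamma(n):
        # n! = gamma(n + 1); go through log-gamma to avoid overflow, then round.
        return round(math.exp(math.lgamma(n + 1)))

    for n in range(1, 16):
        assert factorial_via_lgamma(n) == math.factorial(n)
    print(factorial_via_lgamma(15), math.factorial(15))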


Lisp programmers are not ignorant of mathematics. I knew about the closed form of Fibonacci long before I ever wrote my first line of code in Lisp.

I'm sure I've used Fib in examples without always mentioning that it needs to be memoized to avoid a monstrous inefficiency, and that there's a direct way to calculate it.

The blogger is implying that whatever you don't mention must be something you don't know. That when you write on any topic, you must go down all the related rabbit holes you can think of in order to show that you know the topic inside out so as not to appear ignorant to someone who thinks like he does.

The problem is that if you want to write a book which actually teaches somebody a certain topic, like a programming language, that goal is almost completely at odds with the goal of writing in order to publish something which shows how smart you are.


Feels like the "modern" data science & Python hacker/plumber is typically closer to the Fortran and linear algebra side these days?


Mathematics is different from programming as an activity and ontologically.

Activity-wise, mathematics is about intuiting and proving theorems to progress the frontier of knowledge about abstract mathematical objects (groups, probability distributions, triangles, matrices, knots, ...) and their interconnections. This has little to do with programming, but proofs by mathematical induction (German "vollständige Induktion" = "complete(d) induction"), which is actually - like all mathematics - a deductive method, can be better understood after understanding recursion and loops. It is also very hard.

Ontologically, for instance, "x = y" in mathematics means "the unknown value x equals the value y", and that is true always, at any time (until the context changes from one theorem/proof to another, when x and y may be 'recycled' without saying it explicitly). In contrast, "x = y" in (e.g. Python) programming means, "look up the value stored in the memory location labeled y and copy that value to the memory location labeled x." This is rarely said in textbooks and tutorials, so I tend to make a point of stressing this when teaching programming, in particular to non-computer scientists.
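
A tiny Python illustration of that difference (mine, not part of the parent comment):

    # As a mathematical equation, x = x + 1 has no solution.
    # As a program statement, it just stores a new value under the name x.
    x = 3
    y = x        # takes x's current value; it is not a standing equation
    x = x + 1
    print(x, y)  # 4 3 -- y did not follow x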

What the OP refers to is perhaps more applied mathematics than mathematics, though: using mathematics as a modeling tool.

I like the OP, but what is missing from it is the recognition that a lot of what he talks about is already happening in machine learning: any machine learning is mathematically a numeric optimization process, so tools like TensorFlow or PyTorch under the hood use what the OP calls the "FORTRAN" way, often implemented in C++ and sometimes in actual battle-tested FORTRAN libraries (e.g. via NumPy https://numpy.org/doc/stable/user/building.html#accelerated-...).


I really like this article, but there’s something the author is not considering:

https://lee-phillips.org/lispmath/


As pointed out above - the most efficient way to compute large Fibonacci numbers is to compute the matrix power [[1,1],[1,0]]^n using repeated squaring. Or you could use the known identities to compute Lucas numbers, which amounts to the same thing. The lispmath article talks about computing fib(40000) in 100..200 ms; the repeated-squaring approach computes the same number in < 5 ms (on my not very powerful machine).
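
For reference, a minimal Python sketch of the repeated-squaring idea (my code, using the standard Q-matrix [[1,1],[1,0]] and F_0 = 0, F_1 = 1):

    def mat_mul(a, b):
        # 2x2 integer matrix product
        return ((a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]),
                (a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]))

    def mat_pow(m, n):
        result = ((1, 0), (0, 1))  # identity
        while n:
            if n & 1:
                result = mat_mul(result, m)
            m = mat_mul(m, m)
            n >>= 1
        return result

    def fib(n):
        # [[1,1],[1,0]]^n = [[F(n+1), F(n)], [F(n), F(n-1)]]
        return mat_pow(((1, 1), (1, 0)), n)[0][1]

    print(len(str(fib(40000))))  # number of digits of fib(40000); runs in milliseconds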


We're here (computational scientists), quietly working and doing our thing. The only reason for the disconnect is that we don't really swim in the same circles as the lispy programmer types; we tend to be relegated to academic circles, which by their nature are rather insular. There is also a very different approach to computation in general, and so perhaps there is some "laughing", but such is how it is, I guess.


Kind of agree, but my argument would be that solving math problems exercises the same mental muscles that programming uses, and vice versa. Doing one improves the other. Saying that it is necessary or essential is a false dilemma; they are complementary. Is computation mathematically based? That seems obvious. Does programming get done despite programmers' lack of understanding of the math of computation? Obviously quite a bit.


I ain’t one to promote gatekeeping, but this “writing about math” thing- who let the men with only basic calc and linear algebra knowledge in!!!?!? this article was a goddamn mess


This article misses the point. The Fibonacci/factorial calculations are just toy examples to easily illustrate recursion to a beginner. Nobody who actually needs to compute them for a real application would think that these inefficient, merely pedagogical examples would be the way to do it; Lisp hackers would have no problem using the closed-form solutions.


I was a "programmer", and well paid at the time, but I also had a good background in undergrad math plus some graduate work in pure/applied math.

The math helped, was enough help to be a crucial difference and put me way ahead of the pack, that is, everyone else around. The math I did was to solve problems in the job I had. The job was as a programmer or other computer guy title, but I saw the problems and used math to attack them successfully.

So much for some generalizations. Some examples are needed:

Example 1: One afternoon Fred Smith, founder, COB, CEO of FedEx, stumbled out of his office, tired and frustrated, saying "we need a computer". He had just been trying to schedule his fleet of airplanes. Soon the BoD was also concerned, so concerned that crucial, necessary equity funding was seriously at risk.

A guy I knew from an undergrad physics class called me. At the time I was working as a computer guy at Georgetown U. and also teaching courses in computer science. Several of us met in a conference room in the library to discuss what to do about the scheduling. There was lots of noise. Finally I announced that I would design and write the software. I got an account on a time-sharing service offering CP67/CMS (VM/CMS) computing and wrote the software in my favorite language, then and now, PL/I.

Later in Memphis, one evening SVP Roger Frock and I used my software to develop and print out a schedule for the full planned fleet. The next day two representatives of Board Member General Dynamics went over the schedule and announced "It's a little tight in a few places, but it's flyable." The BoD was happy, and the funding came. Smith's remark was that the schedule "solved the most important problem" facing the company.

Role for math? Apparently I was the only one around who could see that calculating great-circle paths was just the law of cosines for spherical triangles, and who could do the vector calculations to handle winds.

Right, not much math was involved, but the math was crucial, and I was the only one around who had it.

Example 2: At Georgetown a computer science prof got some public code for some statistical operations, wrote a main program to call that code, and used the result in teaching a course in statistics. In his testing, three of the public routines had problems. Two of the problems I fixed with just some PL/I tricks with memory and some algorithms from Knuth. For the third, there were numerical problems, and I solved those with a version of orthogonal polynomials. Not much math, but the math solved the problems the prof saw, and I was the only one around who knew that math.

Example 3: I was in a software house, part of KMS (early connection with laser fusion), and bidding on some software the Navy wanted. Part of the work was Nyquist sampling, the fast Fourier transform (FFT), power spectral estimation, digital filtering, etc. I had been working close to all that due to working with the FFT, got the Blackman and Tukey book on the statistics of power spectral estimation, quickly wrote some sample code, in PL/I, illustrating how to do much of what the Navy wanted in the bid, showed the code and its output to one of the Navy engineers, and presto, bingo KMS got coveted "sole source" from the Navy. My work with the math put our software house far ahead of the software houses we were competing with.

Example 4: The BoD at FedEx wanted some revenue projections. People around the office had hopes, intentions, dreams, etc., but nothing rational or convincing. We knew the current revenue and the revenue from the full, planned fleet, so the projections were an interpolation between those two. Argue that revenue would grow by current customers influencing customers-to-be, so that the rate of growth would be proportional to both the number of current customers doing the influencing and the number of customers-to-be being influenced. So, at time t, let the revenue ($ per day) be y(t), the current time t = 0, the current revenue y(0), and the planned revenue b. Then for some constant of proportionality k, we should have

d/dt y(t) = y'(t) = k y(t)(b - y(t))

So, this is a first-order ordinary differential equation initial value problem (the logistic equation). There is a closed solution based on, right, exponentials. It is just calculus to solve the equation. I did that, picked a reasonable k, drew a graph, and, ..., in short this work kept the BoD from collapsing and saved the company.
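
For the record, here is that closed solution as a small Python sketch (my code; the constants are made up purely to show the shape of the curve):

    import math

    def revenue(t, y0, b, k):
        # Closed-form solution of y'(t) = k*y*(b - y) with y(0) = y0: the logistic curve.
        e = math.exp(k * b * t)
        return b * y0 * e / (b - y0 + y0 * e)

    y0, b, k = 1.0, 100.0, 0.001   # current revenue, planned revenue, growth constant
    for t in range(0, 101, 20):
        print(t, round(revenue(t, y0, b, k), 1))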

Calculus. Gee, just calculus. My big advantage was that I remembered calculus (quite well, and way beyond this problem) and was able to formulate the problem as calculus and solve the differential equation.

Right, the solution is a lazy S curve and is related to some models of Covid growth.

There were other examples in 0-1 integer linear programming, some original work in mathematical statistics, continuous time, discrete state space Markov processes, etc.

Here is how that can go: you are working in computing at a company, you see a problem the company has, you see some math that provides a solution, and, by writing some software and using available data, you proceed with some success. Maybe no one else in the company sees the problem because they don't know enough math to see a math formulation and solution.

It appears that, commonly, where people are doing a lot of important work, there are important problems that need solving but are being neglected, and where some math can be the key tool in formulation and solution. A person with that math may be unique in the company.

But don't hold your breath looking for job descriptions that mention such usages of math. Uh, why not? Did I mention that the person with the math might be unique in that organization?

Also draw from the two old Disney movies Snow White and Cinderella and see the possible roles of jealousy and sabotage. In my experience, Disney was 100% correct.


I hate how much emphasis is put on economic value. I love math because of how beautiful it is. I went to uni for math but ended up switching to theatre because of how it was taught as just a means to an end rather than an art form in itself.


CS/programming for me was a good balance between practical application and theory. Spending more time programming and getting better at programming will probably land you a higher salary than if you were to spend your time learning complex math which you might/might not use.


Maybe we could first try to distinguish the two. When I think about it, software development is mostly about the application of mathematics and the transformation of information, whereas mathematics is about describing the world in a formal manner.


Ah, always interesting to see a discussion of a subject based on centuries-old metadata constructs (higher-order formal logic & mechanical/electrical/abstract automata) ignore centuries-old metadata constructs.


Category theory [1] is a bit lighter a read than the application of formal logic to data science/CS [2].

-------

[1] category theory : https://bartoszmilewski.com/2014/10/28/category-theory-for-p...

[2] : https://plato.stanford.edu/entries/types-tokens/#toc

https://plato.stanford.edu/contents.html


Seems like the so-called Lisp hackers are Common Lisp users.

Before Matlab was cool, Lisp was VERY maths-heavy and engineering-heavy: symbolic equation solvers, robot controls, neural nets, etc.

Even Julia, which is very Matlab-like, has roots in Lisp.


As I see it, maths is more about proof, and about discovering new theorems, than about writing "arbitrary, unproven proofs". It's more like 100% unit-test coverage of your codebase (harder than writing the code).


I think that the most important idea of math, programming, physics, etc. is the idea of fixed points, which is predicated on the idea of nilpotence.

Fixed points go by many names like invariance, spectra, diagonalization, embedding, braids etc.

By fixed point I mean something like the "Lawvere's fixed point theorem". https://ncatlab.org/nlab/show/Lawvere%27s+fixed+point+theore...

I have a braindump on this https://github.com/adamnemecek/adjoint

I also have a discord https://discord.gg/mr9TAhpyBW



Fixed.


Point.


Read The Computational Beauty of Nature and compile the associated examples:

https://github.com/gwf/CBofN


As someone who recently decided to forego future software engineering jobs in favor of an Applied Math PhD, this article is giving me some great confirmation bias.


After reading the article, I am confused. I use types and categories frequently, and how is that not math?


I came into a software engineering career from a B.S. in Mathematics. I had no idea how thinking like a mathematician dominated my approach to engineering until I figured out that not everyone develops software the same way - that was 25 years into my career.

To me, the key leap of understanding in a mathematics education is realizing that mathematical proof isn't about clever, convincing arguments that have a high probability of being true. Mathematical proofs are truth preserving operations that require you to understand and document all the assumptions needed to guarantee that truth is preserved throughout. It's about understanding exactly what you know and, more importantly, what you don't know but that you need to be true in order to guarantee truth. And "guarantee" here does not mean "very very very high probability." It means "exactly zero exceptions, given that the explicitly documented assumptions hold." A mathematics education goes on to give lots of practice identifying these assumptions in various ways and in various domains.

This line of thinking has served me well as an engineer. I almost never go as far as formal methods or using full mathematical rigor. But I do try to understand the assumptions I'm making and whether those assumptions are likely to hold in the application domain. For example...

I assume reading from disk repeatedly (vs. preloading into RAM) is fast enough for my use case, but I need to test that assumption. Write a quick test... it's fast enough on my laptop's SSD. But I'm going to deploy to spinning disks and I'm doing a lot of seeking - better test there as well. Nope, too slow. If I build a simple index of the data, is it fast enough? Write a quick test... yep, so I don't need to deal with cache eviction strategies. Oh, but I'm assuming the tests I wrote are representative, so I need to give myself a couple of orders of magnitude of headroom to be pretty sure.
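
In code, the "write a quick test" step usually looks something like this sketch (every name and number here is a hypothetical stand-in, not something from an actual project of mine):

    import time

    def measure(fn, repeats=5):
        # Best-of-N wall-clock timing; the minimum is the least noisy estimate.
        best = float("inf")
        for _ in range(repeats):
            start = time.perf_counter()
            fn()
            best = min(best, time.perf_counter() - start)
        return best

    def load_records():
        # Placeholder for the real read path (disk scan, index lookup, ...).
        return sum(range(1_000_000))

    budget = 0.5     # seconds the feature can afford
    headroom = 100   # a couple of orders of magnitude, as above
    elapsed = measure(load_records)
    print(f"{elapsed:.4f}s measured; want <= {budget / headroom:.4f}s before trusting the assumption")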

This is not mathematical rigor, but it's leveraging a lot of that kind of thinking. My entire design/code/debug lifecycle is constructed around identifying and testing assumptions like this.

When I realized this came from my background in math, I started trying to figure out how some of the people around me were doing things when they had little or no math background (i.e., not enough to have a ton of practice comprehensively identifying assumptions). Some had come to a very similar process, just not through math (many with graduate degrees in biology or experimental physics also had a pretty direct route to similar enough thinking). Some were much more intuitive about things - I don't think I could develop anything that way, but they were very good at their job. I think this is part of why UI/UX work was always so frustrating for me - the assumptions run so deep that simplifying through intuition is constant, and testing (e.g., through user facing experiments) is expensive.

I'd love to see deeper studies into the range of cognitive practices employed by various software engineers and how that differs across educational backgrounds and application domains. How does that differ from graphic designers, accountants, EEs, mechanical engineers, etc?


(2012)



Added. Thanks!


I use math with computers all the time; have since pretty much the beginning of my programming experience 35 years ago. But I don't use it well. I depended a lot on other people to convert a mathematical equation into a program (for example, think of a summation-- that's really just a for loop incrementing an accumulator. And an integration isn't much more than that, just divide by a constant at the end). I learned gravitational simulations that way (amusingly, I was able to do mandelbrot on my own knowing just z = z ** 2 + c and brute forcing myself through the details).
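
Roughly what that conversion looks like in Python (my sketch; for the integral you scale the accumulated sum by the step width at the end, which for a fixed interval cut into n pieces is indeed just dividing by a constant):

    def summation(f, lo, hi):
        # sum of f(k) for k = lo..hi: a for loop incrementing an accumulator
        total = 0
        for k in range(lo, hi + 1):
            total += f(k)
        return total

    def integrate(f, a, b, n=10_000):
        # crude Riemann sum: the same loop, scaled by the step width (b - a)/n
        h = (b - a) / n
        return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

    print(summation(lambda k: k, 1, 100))        # 5050
    print(integrate(lambda x: x * x, 0.0, 1.0))  # ~0.333...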

For me math is more of a received wisdom. I'll have a problem I need to solve, and as part of that, I need to compute some function. But the naive version of the function that I was taught (say, the factorial function) is slow, and might fail because the integer data types go out of range. In comes my professor, who mentions https://en.wikipedia.org/wiki/Stirling%27s_approximation, which allows me to complete my project and graduate on time. Said professor also derived analytic derivatives of our objective function, since at the time (1993-4) we didn't have autodifferentiation.
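
For what it's worth, the approximation is short enough to show here (my sketch, checked against the exact log-factorial from the standard library):

    import math

    def log_factorial_stirling(n):
        # Stirling: ln(n!) ~= n*ln(n) - n + 0.5*ln(2*pi*n)
        return n * math.log(n) - n + 0.5 * math.log(2 * math.pi * n)

    for n in (10, 100, 1000):
        print(n, math.lgamma(n + 1), log_factorial_stirling(n))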

At the time, I didn't really think too much about it. I had a problem and somebody handed me a practical solution. But I got curious... what was this gamma function, and why is it defined over floats (reals!) rather than just integers? And so that led down a rabbit hole of mathematical exploration (most of which was executed using a highly worn copy of Numerical Recipes in C).

Another example is the Mandelbrot set. You can take the raw definition and attempt to compute set members, but your calculation will never complete. Instead, clever math people figured out ways to compute an approximately right answer faster - and in some cases, optimized for the limited hardware of the time (see FRACTINT for an integer-based fractal program for x86 machines predating floating-point hardware). This and many other tricks made fractal exploration on consumer hardware practical (although probably not very useful?)

Over time I've come to be better at math - at understanding concepts - and at the relationship between practical high-performance computing and both the underlying math and physics that are required to do it effectively. I've learned so many different ways to approach problems compared to when I started, much of it because I continued to learn more math and practice it. I see a close relationship between computing theory and the math/physics that enabled it (i.e., transistors, and vacuum tubes before them, and mechanical gears and switches before that).

I've also realized that I can learn some math easily - for example, more or less anything on a Cartesian grid - while other things, like complex symbolics or tree-structured algorithms, take a lot more thinking.

To me it's an endless world of unknown delights that I stumble across and periodically take 20+ years to understand. I am just now solving problems that my smarter grad school friends managed to do in a day, 20 years ago, because they're better at math (and logic, and memory, and more...)


> it is possible to be a productive and well-compensated programmer — even a first-rate hacker — without any knowledge of science or math. But I think that most programmers who are serious about what they do should know calculus (the real kind), linear algebra, and statistics

Not surprisingly, those happen to be branches of mathematics with lots of applications. I would argue that programmers should know them, but only superficially, as in having an understanding of what mathematical properties mean in real life and in knowing how to compute them, without having to understand why and when those computation steps work.

For example, when computing a double integral, changing integration order is only allowed under specific circumstances (https://en.wikipedia.org/wiki/Order_of_integration_(calculus...).

In a math exam, you’d have to show these properties hold before switching integration order. If you do that well and then make a slight mistake in your subsequent computation, you get a tiny point deduction, but still easily pass your exam.

In physics, you’d just switch order, compute the answer, check it with reality, and if it doesn’t look wrong, declare you solved the problem (that may be slightly exaggerated, ;-)).

The check with reality is important even if switching integration order is OK because, if you make a slight mistake in your subsequent computation and design a bridge based on the result, people may die.

The point is: for physicists, it’s not the journey that’s the reward, but the destination. And yes, ideally they’ll know when their tool won’t work, but physicists rarely encounter the weird constructs that mathematicians consider in their job such as functions that are continuous, but nowhere differentiable (https://en.wikipedia.org/wiki/Weierstrass_function), or discontinuous at every rational number, but continuous elsewhere (https://en.wikipedia.org/wiki/Thomae%27s_function), and there’s always the reality check at the end that will catch (most) errors.

> Rather than viewing mathematics as an advanced tool reserved for extremely specialized computer applications, Fortran-school programmers view the computer as an advanced tool for doing mathematics.

Here, I’d say Fortran-school programmers view the computer as an advanced tool for doing computations. That’s not surprising; Fortran-style programmers are physicists. They care about the result. Mathematicians don’t, at least not in the sense of something that’s useful in the physical world.


Yes.



