Think in Math, Write in Code (justinmeiners.github.io)
478 points by selff on July 18, 2019 | 244 comments



There's an unpopular, perhaps contrarian-sounding opinion that I have regarding this, because this isn't the first time I've seen this topic brought up: mathematics and programming are not really all that related to each other, and I think there's an overemphasis on the importance of math in programming for 99% of applications.

Sure, mathematical thinking can be useful, but it's only one type of logical thinking among many types which can be applied to programming.

I've been programming so much for so long now that before I even start writing code my mind launches into an esoteric process of reasoning that I'm not confident would be considered "thinking in math" since I'm not formally skilled in mathematics. It's all just flashes of algorithms, data structures, potential modifications, moving pieces, how they all affect each other and what happens to the entire entangled web when you alter something. Fortunately, my colleagues are often pleased and sometimes even impressed with my code, and yet I'm not so sure I would consider my process "thinking in math."

So, this isn't necessarily a direct refutation of the article. In fact, maybe what I'm talking about is the same thing the article is talking about. But, anyway, my point is that I feel there are more ways to think about problems and solutions than pushing the agenda of applying formal mathematics.

As an aside, I noticed this part of the article:

"Notice that steps 1 and 2 are the ones that take most of our time, ability, and effort. At the same time, these steps don’t lend themselves to programming languages. That doesn’t stop programmers from attempting to solve them in their editor"

Is this really a common thing? How can you try to implement something without first having had thought of the solution?


Math has more aspects than just logical deduction via mechanical rules. Math also has an aesthetic aspect that guides people to find elegant, powerful solutions within the space defined by the mechanical rules. There may be many paths of deduction from point A to point B, which are all mechanically equally valid. But from the human point of view, they have different value. Some will be simple and easy to understand; others will rely on ideas from one or another realm of math, making them friendly to people who understand those ideas. Some will suggest, to the human brain, analogies to other problems. The mathematical validity of the argument is judged according to whether it correctly follows the mechanical rules, but all other aspects are judged by aesthetics and intuition and ultimately by how the solution is received and utilized by other mathematicians.

If the only aspect of mathematics that you bring into programming is logical deduction by mechanical rules, then I doubt it will help, except for rare cases where you prove or disprove the correctness of code. If, on the other hand, you bring over the aesthetic concern, the drive to make painfully difficult ideas more beautiful (ergonomic) for human brains, then it will help you make your code simpler, clearer, and easier for others to work with.

Is this really a common thing? How can you try to implement something without first having had thought of the solution?

It's common, and as you can imagine, it doesn't lead to good outcomes. When people start by coding first, it's so much work they tend to stop at their first solution, no matter how ugly it is. When people start by solving the abstract problem first (at a whiteboard, say) they look at their first solution and think, "I bet I can make this simpler so it's easier to code." The difficulty of coding motivates a bad solution if you start with code and a good solution if you write the code last.


Ah, well, you just described the relative value of math in much the way I'd describe the relative value of... well, just about any intellectual pursuit. Same in philosophy. Or in law. Or in physics.

A lot of people with particular interest in one area -- say, mathematics -- don't realize that much of what is important is much more generally applicable.

It's not that these things are distinctly important for math. It's that they are important for thinking.


That's true to a certain extent, but math and programming share the property of being built up from logical building blocks that are combined in strict logical ways. Law and philosophy are built on language and culture; physics is closer but is empirical. Math and programs are built from logic, and this gives them more of a common aesthetic sense.

For example, in law or philosophy, repeating the same argument multiple times, adapted for different circumstances, can give it weight. In math and programming, the weight of repetition is dead weight that people strive to eliminate. In law and philosophy, arguments are built out of words and shared assumptions that change over time; in math, new definitions can be added, and terms can be confusingly overloaded, but old definitions remain accessible in a way that old cultural assumptions are not accessible to someone writing a legal argument.

In physics, the real world is a given, and we approximate it as best we can. In math and software, reality is chosen from the systems we are able to construct. Think of all the things in our society that would be different if they were not constrained by our ability to construct software. Traffic, for one — there would be no human drivers and almost zero traffic deaths.

Where programming differs from math is that math is limited only by human constraints. Running programs on real hardware imposes additional constraints that interact with the human ones.


Modern computing is empirical. That's why MIT switched their intro to CS class from Scheme theory of computation to Python robot controllers.


It’s possible I am misunderstanding you, but I think I agree with this.

There are kind of two ideas going on here (in this thread in general), I think.

One seems to be of a mindset I’d describe as thinking in math means glomming onto knowing linear algebra.

The other seems to be thinking in interconnections, minimalist definitions, and those abstract concepts that exist in math (and all kinds of things) for connecting discrete ideas into composite ideas.

One thing that bugs me is code with overly specific semantics, where it reads like that’s the only problem the code could solve.

Whereas if it’s broken into concepts and abstractions in the PLANNING stage, the code ends up less verbose, more descriptive of the human problem, and more useful for a variety of problems.

So instead of code to balance a checkbook, I’d write code to add/subtract numbers and input numbers from my checking account.
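
A toy sketch of what I mean (hypothetical names, Python just for illustration):

    # Overly specific: this code can only ever balance a checkbook.
    def balance_checkbook(opening_balance, checks, deposits):
        return opening_balance - sum(checks) + sum(deposits)

    # More general: a running total over signed amounts; the checkbook
    # is just one possible input source among many.
    def running_total(opening_balance, signed_amounts):
        return opening_balance + sum(signed_amounts)

    # Checkbook use is then just wiring up inputs:
    # running_total(500.00, [-25.00, -12.50, 100.00])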

I see a whole lot of code with too much specific semantic meaning. And in practice it ends up that we treat code in one system as highly specific to that system, which minimizes any effort to reuse it.

At least that’s been my experience at work. Ymmv


I see math as the language of thinking. Math doesn’t really have a domain beyond: how do we think, how do we know, and how do we communicate our knowledge. The progression of mathematics has been the systematic removal of domain. Numbers are widely applicable because they are very abstract and devoid of domain, and they are one of the least abstract things in mathematics.

I agree with your gist: there are lots of things where studying that thing is virtuous beyond its direct application. But I’d also contend that thought is the subject of mathematics, and not just a virtuous side effect.


Math, properly done, is rigorous formal thinking. And it lets us think things we normally couldn't. Nobody can visualize a 100 dimensional object, but a mathematician can easily work with one.

And as programmers we work with mathematical objects called state spaces, that have vastly more than 100 dimensions.

That said, one can easily be a competent programmer without much formal mathematical knowledge, much like one can easily be a competent ball player without knowing differential calculus. However, just as modern ball players improve their games with computer-aided mathematical analysis of their swings and so on, a programmer can improve the quality of his output with mathematical analysis, in particular via the predicate calculus and what is, in my opinion, its most useful application: loop analysis.
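
For a flavor of loop analysis, here's a minimal sketch in Python, with the invariant written as runtime assertions rather than formal predicates:

    # Summing a list, where the invariant "total == sum(xs[:i])"
    # holds before every iteration of the loop.
    def list_sum(xs):
        total, i = 0, 0
        while i < len(xs):
            assert total == sum(xs[:i])   # loop invariant
            total += xs[i]
            i += 1
        assert total == sum(xs)           # invariant + exit condition
        return total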


Mathematics and programming are more strongly related than most people think.

http://www.norvig.com/spell-correct.html How did he solve it? Using probability theory and sets.

It's not just games, cryptography, finance, signal processing, compression, optimization, and AI that require mathematics; tons of programming does, most people just don't realize it and brute-force their way to a solution.

Lots of real-world problems can be solved with algebra, calculus, Boolean algebra, linear algebra, geometry, sets, graph theory, combinatorics, probability, and stats. What typically happens is that most programmers are given a problem, and what do they do? They start thinking in code. How did we solve problems before computers?

Apply that kind of thinking, then solve the problem with mathematics. Your code will often be much smaller and denser. Sure, dealing with input and output doesn't require you to write mathematical code, but the core of your problem can often be solved with some mathematics.
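
A trivial sketch of the difference (a toy example of mine, not from Norvig's article):

    # Thinking in code: brute force with nested loops, O(n*m).
    def common_brute(a, b):
        out = []
        for x in a:
            for y in b:
                if x == y and x not in out:
                    out.append(x)
        return out

    # Thinking in sets first: intersection, roughly O(n + m).
    def common_sets(a, b):
        return set(a) & set(b)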


I used to believe as you do, until AlphaZero learned to play Go and chess by itself, discovering new strategies in the process.


By using tensor calculus.


Can you provide a link to a paper explaining the use of tensor calculus in AlphaZero?



> Is this really a common thing? How can you try to implement something without first having had thought of the solution?

Unfortunately, it's incredibly common.

The result is almost always a mess. Functions that are never called, parameters that are never used, because they discovered their mistake as they were coding but never went back and cleaned up the stuff they don't use anymore. Broken logic, poor performance. Functions with a mess of loops and if statements, nested like 10 indents deep.

You can tell by looking at code if they were making it up as they were going versus implementing a solution they had thought through before starting coding. It's painfully obvious.

When you try to solve your problem by coding, I think you are forced to take a myopic view of only subsets of your solution, and it's near impossible to step back at this point and come up with a nicer, more abstract, and probably more concise solution. The solution comes out spiky.


Really? I think the best way to solve a problem is to code it. I can never see all the corner cases and logical inconsistencies before I start typing. I have tried to formally model software before I start writing it and in the end it was largely a waste of time because real understanding of the problem comes with coding the solution.

Of course when doing it like this you write a lot of code which later is unused or bad. But I think that will always happen and it's just a matter of having the discipline to continuously clean up after yourself.


Writing a lot of code you later discard because it isn’t part of the final solution is like throwing clay on the table and then carving away the bits that don’t fit.

Nobody criticizes the sculptor for the clay that ends up on the floor, and clay is heavy. The bits we carve away have no mass and don't need to be swept up; all we have to do is cut them away, revealing the final program.


On the other hand the sculptor has considered the type of clay, the quantity, the tools she'd need to use to carve away those bits, and understands enough about what she wants to create to know what bits to carve away first.

Programming may not be (all) math, but it's not art, either.


"Programming may not be (all) math, but it's not art, either."

Huh? Maybe the common kind of 8-to-5 office programming around business logic is not, but designing any bigger project is definitely an art.


Math can be considered an art, though not a fine art. Programming, by extension, is an art (though again, not a fine art).

https://en.m.wikipedia.org/wiki/Mathematics_and_art


I don't see why it couldn't be a fine art.


Perhaps it could be practiced as part of a fine art, but like carpentry, programming itself isn’t.


> Writing a lot of code you later discard because it isn’t part of the final solution is like throwing clay on the table and then carving away the bits that don’t fit.

How do you make a statue of an elephant?


Planning for realistic levels of waste. Throwing away prototypes. Gradually making changes. Keeping backwards compatibility. Testing. Accommodating users with large collective investment in learning.


This is not a useful analogy; it is more of an excuse for not trying to think things through. Would this be a reasonable analogy for building a bridge?

If you don't have a reasonably detailed idea of what you want and how to achieve it, you are unlikely to get it.

https://dilbert.com/strip/1991-9-6


> Would this be a reasonable analogy for building a bridge?

That is also a useless analogy. Do bridge builders get to test and re-test their bridges in the real, non-simulated world? Can they instantly make a copy of their bridge with a few critical differences and see how the two behave? Can they re-build their bridge in minutes?

Metaphors aside, I think history is ample evidence that "coding your way around a problem" rather than conceptualizing a solution first is a perfectly valid way to approach professional programming. It's not the only way, and it has drawbacks which others have pointed out here. So does the conceptualize-first approach: you might solve the wrong problem, make something inelastic in the face of changing requirements, or fall into the psychological trap of being attached to your mental model even when it turns out that you really didn't think of everything and have to make changes on the fly.

I'm really tired of people being dogmatic about either approach ("move fast and break things/pivot; anyone else isn't really interested in getting stuff done!", "you're just a messy code monkey unless you can hold the solution in your head before you start!"). It's almost always veiled arrogance rather than honest improvement-seeking, in my experience.


Well, yes, it is a useless analogy... Oh, you meant to say that comparing bridge-building to software development would be a useless analogy? It's not an analogy I made - the point is that just because you can make a cute analogy, it doesn't mean it offers any insight.

> I'm really tired of people being dogmatic about either approach

Exactly - and the implication that I am being dogmatic is a straw man. I am simply opposed to arguments that depend on poor analogies.

Furthermore, all of the bad things that you say can happen if you try to think ahead are at least as likely to happen if you don't, especially if you have gone in the wrong direction for some time (I know the latter is a manifestation of the sunk-cost fallacy, but it happens a lot on real projects).


Building software is not even remotely the same thing as building a bridge. It would be more akin to the architect creating the drawings twice for the bridge. Once as an exploratory version and the second one the production version.

Oh wait, that is actually how architects work. In fact, at my work we have multiple CAD designers (not architects, though), and it's not uncommon for them to completely throw away a design and start over. I think code should be mostly the same.


I'll bet the engineering process of the software written for the Apollo 11 lunar lander was much closer to the bridge-building process than you might think. I'll also bet there's a whole host of software projects which use similar processes today. It's just that most of us writing DB skins for "The Enterprise" are rarely, if ever, exposed to real engineering, for the simple fact that quality software is expensive and, typically, our organically grown solutions are good enough.


> I'll bet the engineering process of the software written for the Apollo 11 lunar lander was much closer to the bridge building process than you might think.

Of course, but the Apollo 11 lunar lander was created without the aid of ubiquitous desktop computers. I imagine the SpaceX guidance/control software was written in a way that less resembles bridge-building/Apollo 11 lunar landers and more like the organic processes we see elsewhere in the software industry.

If Neo were to build a bridge in the Matrix, chances are his processes would bear little resemblance to those of the Army Corps of Engineers.


> I imagine the SpaceX guidance/control software was written in a way that less resembles bridge-building/Apollo 11 lunar landers and more like the organic processes we see elsewhere in the software industry.

For the guidance/control systems, I bet you're wrong.


Maybe if we treated the practice of software development more like bridge building, we would have better reliability, fewer outages, fewer zero-day exploits, fewer patches and bugfixes--software that actually works the first time, and every time for years.


I work in aviation/defense. They try to treat it just like building bridges, and it's a disaster. Please don't.

Software is a design practice/process. Not a building process. Any analogy should be to the design phase of other engineering disciplines.


Your designers are working with abstract models. They are thinking about problems at a conceptual level; they are not putting up structures and seeing if they work.


Code is also an abstract model.

The CAD designers absolutely test if things work. Why do you think almost every engineering bureau has 3D printers?


> Code is also an abstract model.

Sure, but it is not the only one. You are allowed to think at other levels, and it can be quite useful, especially on larger systems.


In the analogy with the clay, I believe they are both adding and removing clay.


Really? Yes, it is very common.

The problem with this approach is that it does not scale to large systems. If you don't spend much time thinking in the abstract about how it will work and what might go wrong, then, by the time you have written enough code to find that out, you may have gone a long way down the wrong path, and not all architectural-level mistakes and oversights can be patched over.

No one does this perfectly -- even people using formal methods will overlook things -- but, on a big project, if you don't put much effort into thinking ahead about how it should work, and try to identify the problems before you have coded them, you are likely to end up where many projects do in fact find themselves: with something that is nominally close to completion but very far from working. Those that are not canceled end up looking like legacy code even when brand new.


If you want low-quality solutions that kinda work on the first try, then sure, go for it. Your approach will inevitably lead to insurmountable technical debt that can't be paid off.

Big projects should be cut into smaller pieces where each piece can be relatively easily rewritten.


> Big projects should be cut into smaller pieces where each piece can be relatively easily rewritten.

To come up with the right smaller pieces, you have to think about how they will work together to achieve the big picture. That means interfaces and their contracts, and if you get them wrong, you end up with pieces that don't fit together, and do not, collectively, get the job done.

Big problems cannot be effectively solved in a bottom-up manner, and perhaps the most pervasive fallacy in software development today is the notion that the principle of modularity means you only have to think about code in small pieces.


That's my point: you CANNOT possibly come up with the right smaller pieces until you have a solution that you have verified works.

What do you think other engineering disciplines do? They create a proof of concept, verify it works, and then create the real thing. That is why "real" engineering companies have hundreds of tools to test stuff.

I really don't understand why people want software to be different. If you're writing some shitty throwaway web app, then sure, go ahead and don't prototype anything; just hire a "software architect" who designs something and use that.

But if you want something that actually works, then that is completely useless. Prototype, verify, start over if necessary. That is the way to write quality software.


>That's my point: you CANNOT possibly come up with the right smaller pieces until you have a solution that you have verified works.

That's beside the point. The point is that coding is not the only way to verification, especially at the architectural level.

> I really don't understand why people want software to be different.

It seems to be you who wants to be different. Making prototypes is expensive and time-consuming, so engineers try to look ahead to anticipate problems. Prototyping in software is cheaper, but not so cheap (especially at the architectural level) that thinking ahead isn't beneficial.


In my opinion, the big difference and problem is that space and resources are virtually unlimited, and a "product" can keep changing indefinitely too. And if something fails, in most cases it will fail very differently than in other engineering disciplines. I agree it would be great to write more prototypes and all that, but hey, capitalism: good-enough/shitty makes money, so that's where we are. (Also, CS is rather new; we are still figuring out a lot of things, still getting deeper into the mess.)


I have found that the gap in understanding between thinking you understand something and implementing something as a program is often much smaller than the gap between having programmed something and having proven it.


What do you mean by having proven it? As in you tested it and it works, or do you mean you have formally proven, using math, that your solution always gives the correct result?

If it's the former, then this is part of building it. An implementation without proper testing is incomplete. If it's the latter, I actually agree. Only the most sensitive of applications require that level of sophistication, though.


A mathematical proof or analytical solution. I understand that analytical solutions only apply for very narrow ranges of problems but... consider writing an analytical solution to a differential equation versus applying a numerical solver. A numerical solution rarely leads to as deep an understanding as an analytical solution or a number of approximations in various limits. I feel like we're in the limit of something analogous to writing numerical solvers and claiming understanding from observing the output.
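
To make that concrete with a toy example, take dy/dt = -k*y (Euler is the crudest possible solver, used here only for contrast):

    import math

    # Analytical solution: y(t) = y0 * exp(-k*t). It tells you the
    # behavior directly: exponential decay at rate k, for all inputs.
    def analytic(y0, k, t):
        return y0 * math.exp(-k * t)

    # Numerical solution: it gives you numbers, not understanding.
    def euler(y0, k, t, steps=10000):
        y, dt = y0, t / steps
        for _ in range(steps):
            y += dt * (-k * y)
        return y

    # euler(1.0, 0.5, 2.0) approximates analytic(1.0, 0.5, 2.0) ~ 0.3679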


I often take the compromise technique: write a prototype and build off that.

The prototype is generally a mess, but I throw that out anyway.


Welcome to "Agile".


Rhymes with "fragile".


I’ve always thought that you don’t really understand a problem until you have a solution to it. What I like to do is first write a naive solution to a problem so I can understand it better. Then I throw away the code and start again. I invariably learn a lot more about a problem and the most elegant solution than I would have if I had just tried to write a perfect solution right off the bat.

Code, after all, is cheap (and often totally worthless). More developers should adopt this view. I’ve seen engineers more times than I would care to admit get attached to some piece of code, as if it was some piece of themselves. Code is more akin to dogshit than the limb of a dog.


The problems we're solving when building software happen simultaneously at different levels. Some are best solved by prototyping. Others are best solved on a whiteboard. The trick is correctly identifying the problems, and then knowing which layer they belong to. If you try to code up a prototype for a design problem, you'll most likely waste a lot of time and not reach any useful conclusions in the end (or end up shipping the broken thing). If you try to whiteboard a coding problem, you can get stuck forever drawing things, with no result to show for it.

In my experience, the problem levels go differently than one could naively expect. Data structures, abstractions, module interfaces - all problems dealing directly with code - are best solved first on a whiteboard, where evaluating and iterating through them is cheap and effective. User interfaces, user experience, usefulness of a part of a program - things dealing with business and user needs - are best solved through prototypes, because you can't reasonably think through them on paper, you have to have a working thing to play with.


> It's all just flashes of algorithms, data structures, potential modifications, moving pieces, how they all affect each other and what happens to the entire entangled web when you alter something.

That's what doing math is like too - just substitute axioms, mathematical objects (whether numbers, sets, rings, or whatever is under discussion), potential lemmas and approaches, the bag of mathematical tools (theorems) you can use, and how much closer to a solution you get when you shift terms in your formulae around.

Then you write it all down (if you haven't already), simplify it, and clean it up before showing it to others, just like you would code.

Also, you can map programs to proofs and vice versa: https://en.wikipedia.org/wiki/Curry–Howard_correspondence


That sentence jumped out at me too: the intuitive thinking process, the unverbalized or half-expressed leaps of logic making interconnections. I imagine mathematicians experience a similar rush of thoughts while solving or exploring.

All code boils down to operations that can be described mathematically. Software is applied mathematics (with a sprinkle of art, perhaps). I think the reason why some people feel that programming is not closely related to mathematics, is that programmers are thinking and working on top of so many layers of abstraction, it's almost like working with the "stuff of the mind" itself, with models, processes, flows, transformations, events, composing behaviors.

That said, I relate to what the grandparent commenter is saying. Software allows me to think with visible, malleable and "living" mathematics while building up a system, to ask questions and have a dialogue with it.

>> there are more ways to think about problems and solutions than..applying formal mathematics

I agree with this. Often a "looser" approach is needed to explore a problem space, and formal mathematics may not be the best medium for creative problem-solving. On the other hand, the qualities that are valued in software - types, functional programming, test-driven development, etc. - are all about proofs. Not necessarily mathematically rigorous, but the closer you get, the more reliable the logic.


> That said, I relate to what the grandparent commenter is saying. Software allows me to think with visible, malleable and "living" mathematics while building up a system, to ask questions and have a dialogue with it.

Programming's friendlier to algorithmic thinking (versus equation/identity and proof). The former's really easy for me, and while on paper (aptitude test scores) one might think the latter would be too, it's very, very not. I've only relatively late in life realized I need to reframe any non-trivial math I encounter in terms of algorithms to have any hope of understanding it. It's probably why I bounce off—understand well enough, just strongly dislike—programming languages that try to make code look more like a math paper (more focus on equality/identity and proof-like structures).

And yeah algorithms are math, but lots of math's not really algorithms and when someone writes "think in math" that mostly means "think in proofs" to me. If they mean "think in algorithms" then that's close enough to programming—as I see it—already that it's a pretty fine distinction.


Yeah. It is easy for people with no university-level math background to think math is a deterministic, conscious effort to execute what are basically algorithms, like long division with pen and paper.

Whereas actually ”mathematical thinking”, like coming up with a proof, is an incredibly intuition-guided process, a parallel heuristic search in the solution space, a fundamentally creative endeavour. And as your intuition comes up with promising paths through the search space, you write them down, formalize them, probably discover some corner cases you have to handle, and either continue down that path or realize that it is a dead end and you have to backtrack.

At least to me, this process is incredibly similar to programming effort. You come up with subsolutions, formalize them, fix issues revealed by the formalization, carry on with the next subsolution or realize that approach can’t work after all, and come up with something else.


> Is this really a common thing? How can you try to implement something without first having had thought of the solution?

There appear to be two distinct kinds of programmers that are about equally effective: ones who think through the problem first and then write down the solution, and ones who start with something close and then iteratively refine it into the desired result.

When you’re doing things like writing documentation, this is important to remember as the two kinds of programmer will approach the documentation differently — important information needs to be put where both approaches will find it: http://sigdoc.acm.org/wp-content/uploads/2019/01/CDQ18002_Me...


Fascinating, thank you for the link to the paper.

They group these styles as opportunistic versus systematic approaches to programming. Paraphrasing below:

Opportunistic programmers develop solutions in an exploratory fashion, work in a more intuitive manner and seem to deliberately risk errors. They often try solutions without double-checking in the documentation whether the solutions were correct. They work in a highly task-driven manner; often do not take time to get a general overview of the API before starting; they start with example code from the documentation which they then modify and extend.

Systematic developers write code defensively and try to get a deeper understanding of a technology before using it. These developers took time to explore the API and to prepare the development environment before starting. Interestingly, they seemed to use a similar process to solve each task. Before starting a task, they would form hypotheses about the possible approach and (if necessary) clarify terms they did not fully understand.


I've always thought of those two groups using different labels: the Code Artists and the Engineers. The Artists have a strong need to be creating in order to understand something, whereas the Engineers have a strong need to understand before they can create. And those who believe that programming is not an art fall into the latter group.


A pretty good essay I've recently read that explores this topic.[0]

0. http://www.paulgraham.com/hp.html


Mathematical thinking can be extremely useful for programmers who never touch 3D games, or physics engines, or anything else requiring calculus or matrices. Functions are mathematical objects, and can be combined, using operators that obey mathematical laws, into other functions - thinking of them in this way leads to the functional and concatenative programming paradigms. These combinations can also be rearranged in ways that behave mathematically (i.e. according to simple rules that do not change, and have been explored for millennia), making it much easier to both refactor and optimize code.

They can even be used as foundational abstractions for organizing your code, leading to horizontal rather than vertical abstraction - i.e., using the tower of abstract algebra or category theory types, we can organize our code in a way that anyone who understands that type will immediately grasp, whether or not they understand the internals. Math is everywhere, and you're using it, whether you know it or not. Might as well use it well.
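
To make one of those laws concrete, here's map fusion sketched in Python (arbitrary toy functions, not from any particular codebase):

    # Mapping g and then f equals mapping their composition: a
    # refactoring justified by a law, not by testing every input.
    f = lambda x: x + 1
    g = lambda x: x * 2
    xs = list(range(10))

    lhs = [f(y) for y in [g(x) for x in xs]]   # map f . map g
    rhs = [f(g(x)) for x in xs]                # map (f . g)
    assert lhs == rhs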


I've been thinking about this quite a bit, but coming from a different angle. I've been helping at my kid's school with coding clubs for primary students. When teachers are recruiting for the coding club they always mention the students who are good at math as good candidates. But what I have noticed is that the students who do the best in coding are more often musically inclined or linguistically talented. It seems to me to make some sense. The ones who can parse and understand languages at an early age might also have the aptitude for programming. It's a small sample size, a few dozen students. Still seems kind of interesting.


It's also worth thinking about that you don't really learn math in primary school so much as you learn numbers and computation, so when a teacher says a child is good at math they usually mean good at numbers. Very few teachers understand math well enough to identify who would be good at it and a lot of unfortunate students find this out when they get to college.


Sure, I can see that. But isn't it similar with language and other subjects? You just learn the basics, nothing deep, no turns of phrases, little expressiveness.

Perhaps there is little correlation between those who excel at coding at a young age and those who go on to be good programmers when they get older. I just find it interesting that at this young age I see a correlation between coding skills and language skills more than math (really just arithmetic) skills.

Another observation: we did the Hour of Code activity last December with Year 2 to Year 6 students (equivalent to Grade 1 to Grade 5 in the US). In each group there were one or two students who really stood out, and every one of them was a girl. Small sample size of only about 100 students, so maybe I shouldn't be wondering what is going on here.


Math is somewhat unique in that the high-school and early-college version is not at all representative of the real thing. It's not "just a taste"; it's qualitatively different.

As the other comment above mentioned, I think this has to do with education of the teachers. Very few teachers know what math is either.


You may be surprised that this also applies to math.

High-level math values logical and linguistic skills. This is often a hard stopping point for many students who were good at high school computation like calculus.


I have a math undergrad and didn't program much until my senior year. Then I went to grad school in computer science. Learning data structures and algorithms was very easy for me compared to other students who came from non-math, non-CS backgrounds, because writing math proofs and designing algorithms are very, very similar and use similar thought processes and methods. Udi Manber's Introduction to Algorithms shows how you use mathematical thought processes to design algorithms.


A friend of mine taught high school math until recently. He told me a similar thing about math. The kids who are "good at math" in elementary school don't necessarily become the top math students later on. He felt that his best math students were the ones who were just curious about a lot of things as kids.


>Mathematics and programming are not really all that related to each other and I think there's an overemphasis on the importance of math in programming for 99% of applications.

This is the standard thinking of someone who's not deep into math but deep into programming.

The two are deeply interrelated and in actuality are one and the same. Knowing math provides a deeper understanding of programming. If you want to get better at programming in general, learning every new framework or specific technology is not the path to getting better. Learning math is the path.

I cannot show you the path for you to understand it, you'll have to walk it yourself to know.

Suffice it to say that there is an area of math that improves programming in a way you can understand: type checking. Type checking proves that your program is type correct, and it comes from math. You know it, and probably use it all the time.

To extend this, there's this concept of dependent types which also come from math. Dependent types can prove your entire program correct.

That's right: with math you can write a single proof which is equivalent to billions of unit tests touching the entire domain of test cases, to prove your program 100% correct. It's a powerful feature that comes from math. It's in the upper echelons of programming theory / mathematical theory and thus not trivial to learn. If you're interested, you can check out the languages Coq, Agda, or Idris.
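
You can see the shallow end of this even in Python type hints (a minimal sketch; actual whole-program correctness proofs need the dependently typed languages above):

    # A checker like mypy verifies this for *all* inputs, not a sample:
    # no run of total_cents can ever sum non-integers.
    def total_cents(prices: list[int]) -> int:
        return sum(prices)

    # total_cents(["1.99"])  # flagged by the type checker before running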


> but it's only one type of logical thinking among many types which can be applied to programming.

Could you elaborate on this? Mathematics is abstract/meta enough that I would consider any type of logical thinking as part of math.

> It's all just flashes of algorithms, data structures, potential modifications, moving pieces, how they all affect each other and what happens to the entire entangled web when you alter something.

That for example sounds very much like "think in math" to me.


> Mathematics and programming are not really all that related to each other

I may be wrong but I believe the Curry-Howard correspondence disproves your claim. One can translate between the two and find that they are equivalent.

The difficulty is that some programming languages are hard to model mathematically due to the way they were designed and implemented. Some, like Idris or Agda, make it easy to see the correspondences. Others like C or Javascript are harder.

The key to solving hard problems is being able to think concretely in abstractions. The best language we have for abstraction is pure mathematics.


Curry-Howard maps typical practical tasks on the programming side to completely useless make-work on the math side. Turning a customer's name and address into text to make a mailing label has to happen at some point, but you really don't need to go through all those steps to show that if customers exist, then strings exist.


I'm talking about hard problems. A proof for such a program you suggest is trivial and probably not worth knowing. A program is a proof that there exists a type which satisfies a proposition. Not all propositions are interesting and neither are all programs.


My second job, the project had been run by a couple of vocational developers. They did okay. No version control, lots of corner cases not covered, lots of code that made inferences from incomplete data. But the team needed to grow significantly and none of this stuff was going to survive other people touching the code.

One of the bad patterns in the code was very complex nested boolean logic in places. Often with the same condition in several branches.

So I started using K-maps to untangle these. A few of them were much easier to read, but some of them... some of them it was unclear that all the cases were addressed. So I started putting big block comments above those, but we all know what happens to block comments over time.

Much later, big conditionals like that I would just move to a separate function, and then split em up to look like normal imperative code, instead of like math.
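
The shape of that refactoring, roughly (hypothetical conditions, sketched in Python):

    # Before: if (is_admin and not locked) or (is_admin and override): ...
    # After: name the question, answer it in plain imperative steps.
    def can_edit(is_admin, locked, override):
        if not is_admin:
            return False
        if override:
            return True
        return not locked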

The first rule of teamwork is stop trying to be so goddamned clever all the time. It's like being a ball hog in basketball, football, soccer. Use that big brain to be wise instead. Find ways to make the code say what it means and mean what it says. Watch for human errors and think up ways to avoid them.

Math has very, very little to do with any of that. Psychology is probably a better place to spend your time.


How come math's to blame? That's clearly just some nuts level of incompetence that spawned all of the spaghetti described - a lack of any conscious effort. After all, the mathematical thinking mentioned in the article inevitably includes covering the corner cases of the problem at hand as well as of the proposed solution, not to mention simplification and generalization. That's not trying to be smart; that's outright being dumb. (I'm not even sure if you're trolling at this point.)


Math isn't to blame. But neither is it the solution.


> I've been programming so much for so long now that before I even start writing code my mind launches into an esoteric process of reasoning that I'm not confident would be considered "thinking in math" since I'm not formally skilled in mathematics. It's all just flashes of algorithms, data structures, potential modifications, moving pieces, how they all affect each other and what happens to the entire entangled web when you alter something.

I think I'd call this "thinking in programming", and it seems like a great way to do it.

> Is this really a common thing? How can you try to implement something without first having had thought of the solution?

A distressingly large amount of work I've done has not been greenfield development but things that might be called "maintenance" or "integration". You're not trying to draw a picture on a blank sheet of paper - you've been handed an almost-completely-assembled jigsaw, the photo on the box, and limitless box of random pieces. Your job is then to work out which of the already-assembled pieces is wrong and which of the spare pieces can be used to fill the hole.

In this context, disposable programs are very useful for finding information about what's going on, sketching possible solutions, and finding out which plausible ideas won't work for reasons outside your control.

(e.g. this week I wrote a disposable program to use libusb to extract HID descriptors; this duplicated a library we already had but didn't trust, and enabled me to pass a problem over to the team programming the other end of the USB link.)


> Is this really a common thing? How can you try to implement something without first having had thought of the solution?

Some of us actually think by programming. In that sense, a REPL or notebook is probably a better medium, but the thinking is going on concurrently with prototyping.

It isn’t so much like “we are solving the problem at the same time we are writing the code for the solution” but more like “we are writing (disposable) code to help us solve the problem.”


> Mathematics and programming are not really all that related to each other

With respect, that tells us much more about you than about math or programming.

No Haskell expert, or formal methods expert, or complexity theory expert, would ever make a statement like that.

You may be right that math is quite a distance from day to day development, of course. (I don't think I'm being pedantic here, but perhaps.)

> it's only one type of logical thinking among many types which can be applied to programming.

What do you have in mind? Design patterns and software development practices, or something else?


Mathematics and programming are not really all that related to each other and I think there's an overemphasis on the importance of math in programming for 99% of applications.

I think if you regard logic (in philosophy) and maths (as a huge, broad field) and computing (specifically a sub-field of maths, to some people), it's pretty clear that logic and computing have a huge relationship.

I can think of lots of other subfields in maths which have huge inter-relationships. Applied maths: what's that got to do with probability? Well, it turns out that modelling complex systems uses Monte Carlo methods (a fictional example, I suspect; I know the Manhattan Project people dreamed MC up, but its modern applicability is unknown to me).

You don't think maths informs programming, or it's over-stated? I guess that's true, in as much as poetry doesn't inform legal writing. But I observe that people who do enough poetry or writing to understand the difference between a simile, a metaphor, and an allegory are really on-point communicators, and the law needs that concision and precision.

I think people with good groundings in maths (and logic) make awesome programmers, but it's not strictly necessary to be a mathematician to know how to "speak" a programming language. What pitfalls you avoid from your knowledge, I cannot say. But I do know that huge pitfalls lie in naive programming: large loops iterating over uninitialized data structures, not understanding if-then-else logic or the side effects of expressions, tail recursion...

I think computing is a sub-field in maths. How much it matters depends on how much your code matters.


> Sure, mathematical thinking can be useful, but it's only one type of logical thinking among many types which can be applied to programming.

Completely agree with this. I did a Maths and Philosophy degree, and I reckon the Philosophy was more useful to my career in programming than the Maths was. Although this probably depends on what kind of programming you do.


As somebody interested in both, but having mainly been a programmer all my life: could you describe in what ways philosophy can help somebody?

My (heavily uninformed) guess would be the ever-questioning of whether our assumptions are actually true or not.


I've always felt that formal logic has been more useful to programming than math has for me.


I agree with you. There's a popular book they recommend around here for learning linear algebra, which they say is very related to coding.

I found it not to be the case. Upon reading the first chapters I started wondering how this could be useful for coding. So I jumped to one of the last chapters, where they show you practical applications. Upon reading those I thought: "I can do all this in code just fine without using linear algebra."

I never touched that book again.

About two years ago or so I started to make little games for the pico-8 fantasy console. There's some math involved there, but you almost never use the math formulas as you would in a textbook. For example, for something simple like drawing a straight line or a circle, finding paths, collisions... there are very specific algorithms for that, and they don't look anything like a math formula, even if they are derived from one.
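
To illustrate: Bresenham's line algorithm (sketched here in Python rather than pico-8's Lua) is derived from the line equation, but it looks nothing like y = mx + b:

    # Walk from (x0, y0) to (x1, y1) pixel by pixel, tracking an
    # integer error term instead of evaluating the line equation.
    def line(x0, y0, x1, y1):
        points = []
        dx, dy = abs(x1 - x0), -abs(y1 - y0)
        sx = 1 if x0 < x1 else -1
        sy = 1 if y0 < y1 else -1
        err = dx + dy
        while True:
            points.append((x0, y0))
            if x0 == x1 and y0 == y1:
                break
            e2 = 2 * err
            if e2 >= dy:
                err += dy
                x0 += sx
            if e2 <= dx:
                err += dx
                y0 += sy
        return points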

Just my point of view.


> there are very specific algorithms for that, they don't look anything like a math formula

I'm not sure what you mean by that. I'd describe making a 3D game (engine) with rendering, collision detection, etc as probably one of the math-heaviest areas of programming outside of scientific computing or algorithm R&D.


Numerical solutions in computer programs look very different from the closed form symbolic theorems in math books.


I think it's pretty safe to assume that you haven't heard of the ML language family and the Hindley-Milner type system. At the lowest possible level, yes, numeric solutions are computed by sets of instructions that are pretty distant from the mathematical level of abstraction. But in higher-level languages, everything remotely readable and reliable usually looks like formal math from the textbooks. I'd say the representation heavily depends on what language you're using to write your programs, though not everything looks like Algol 68 nowadays.


Don't numeric solutions in math books also look very different from closed-form solutions?


Which book is this? Sounds like something I might want to check out. Thanks!


Another point: math is basically taking a set of primitives/axioms and proving and constructing statements from that set of primitive axioms.

This is exactly what programming is.


Logic is part of math, not the other way around. Basically the same goes for your "how it affects the web", since that becomes about directed graphs. On algorithms and data structures: don't you actually evaluate their complexity, and isn't that math? Math is everywhere.


I think the ghost of Gottlob Frege just gave you a stern look.


I'm always mindful of complexity when designing algorithms but I would barely consider it math. It's not an exact science and you're hardly quantifying anything. Complexities are essentially just eyeballed approximations.


You're right that the complexities are approximate in some sense. But now I actually have more questions:

1. Are you aware that complexity analysis isn't about being precise, but about being able to predict the running time for any given input from some sample? From my own experience, it's more of an analytical exercise, about calculating worst-case scenarios and the computability of the process overall. Still, it has everything to do with actually predicting values, with the grain of salt that the method is relative.

2. Are you aware that math isn't about being "precise" in the sense of numbers, but about relationships between abstract entities? Ever heard of category theory, or pretty much anything related to abstract algebra?

3. Is there anything other than math that helps abstraction, in your opinion? From what I know, even a mediocre understanding of abstract algebra helps a lot. Please note that this question is totally non-ironic; I'd really like to know.


I would like to welcome you to the formal study of complexity: https://complexityzoo.uwaterloo.ca/Complexity_Zoo


Is this really a common thing? How can you try to implement something without first having had thought of the solution?

I suspect one of the reasons is that to a casual observer, there is no difference between someone who is thinking deeply about something, and someone who is just daydreaming. They both aren't interacting with the computer and may have their eyes closed. On the other hand, "coding" by constantly banging at the keyboard and mousing around looks productive.

I am someone who thinks deeply first, and have been told off about it because they thought I was sleeping or otherwise not working.


> overemphasis on the importance of math in programming

Well, any fool can write a loop. But to do the same thing in constant time instead one might need to use some math.
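
The classic toy example:

    def sum_loop(n):          # any fool's loop: O(n)
        total = 0
        for i in range(1, n + 1):
            total += i
        return total

    def sum_formula(n):       # Gauss: n*(n+1)/2, O(1)
        return n * (n + 1) // 2

    assert sum_loop(1000) == sum_formula(1000)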


Any relational database is based on set theory. If I am writing a more complex query, I quite often draw it out as a set diagram so I know what I am doing.
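
A toy sketch of the set view (hypothetical tables reduced to their key sets):

    customers = {1, 2, 3, 4}        # customer ids
    ordered   = {2, 4, 5}           # ids appearing in orders

    both      = customers & ordered    # INTERSECT / inner join on id
    no_orders = customers - ordered    # EXCEPT / anti-join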


What do you mean by math, though? Are you focusing on numbers? Usually when I think about math, I'm thinking about boundaries and how inputs affect outputs. Those things do matter to programmers.


This. If you are solving a mathematical problem, then think in math; if you are solving an accounting problem, then think like an accountant. Programming is a general-purpose tool to solve problems in various domains. Just because computer science has roots in math doesn't mean computer programs must also behave the same way. General-purpose computing and abstractions weren't invented for nothing. [Disclaimer: didn't read the article]


I agree a lot. Mathematics intersects with programming when you use programming to solve mathematical problems. I find that that happens rarely for me (although it has happened). I feel like the biggest boons for programmers are having a good grasp on logic, pattern recognition, category theory, and the process of abstraction.


"plan to throw one away, you will anyway" Fred Brooks, the mythical man month.


You could be provocative and say even computer science and programming are not very related. Traditional computer science education involves much more chalk and blackboard and less keyboard typing.


I'd say every type of logical thinking is math. Otherwise it wouldn't be logical.

