Sure, mathematical thinking can be useful, but it's only one type of logical thinking among many types which can be applied to programming.
I've been programming so much for so long now that before I even start writing code my mind launches into an esoteric process of reasoning that I'm not confident would be considered "thinking in math" since I'm not formally skilled in mathematics. It's all just flashes of algorithms, data structures, potential modifications, moving pieces, how they all affect each other and what happens to the entire entangled web when you alter something. Fortunately, my colleagues are often pleased and sometimes even impressed with my code, and yet I'm not so sure I would consider my process "thinking in math."
So, this isn't necessarily a direct refutation of the article. In fact, maybe what I'm talking about is the same thing the article is talking about. But, anyway, my point is that I feel there are more ways to think about problems and solutions than pushing the agenda of applying formal mathematics.
As an aside, I noticed this part of the article:
"Notice that steps 1 and 2 are the ones that take most of our time, ability, and effort. At the same time, these steps don’t lend themselves to programming languages. That doesn’t stop programmers from attempting to solve them in their editor"
Is this really a common thing? How can you try to implement something without first having thought of the solution?
If the only aspect of mathematics that you bring into programming is logical deduction by mechanical rules, then I doubt it will help, except for rare cases where you prove or disprove the correctness of code. If, on the other hand, you bring over the aesthetic concern, the drive to make painfully difficult ideas more beautiful (ergonomic) for human brains, then it will help you make your code simpler, clearer, and easier for others to work with.
It's common, and as you can imagine, it doesn't lead to good outcomes. When people start by coding first, it's so much work they tend to stop at their first solution, no matter how ugly it is. When people start by solving the abstract problem first (at a whiteboard, say) they look at their first solution and think, "I bet I can make this simpler so it's easier to code." The difficulty of coding motivates a bad solution if you start with code and a good solution if you write the code last.
A lot of people with particular interest in one area -- say, mathematics -- don't realize that much of what is important is much more generally applicable.
It's not that these things are distinctly important for math. It's that they are important for thinking.
For example, in law or philosophy, repeating the same argument multiple times, adapted for different circumstances, can give it weight. In math and programming, the weight of repetition is dead weight that people strive to eliminate. In law and philosophy, arguments are built out of words and shared assumptions that change over time; in math, new definitions can be added, and terms can be confusingly overloaded, but old definitions remain accessible in a way that old cultural assumptions are not accessible to someone writing a legal argument.
In physics, the real world is a given, and we approximate it as best we can. In math and software, reality is chosen from the systems we are able to construct. Think of all the things in our society that would be different if they were not constrained by our ability to construct software. Traffic, for one — there would be no human drivers and almost zero traffic deaths.
Where programming differs from math is that math is limited only by human constraints. Running programs on real hardware imposes additional constraints that interact with the human ones.
There are kind of two ideas going on here (in this thread in general), I think.
One seems to be a mindset I’d describe as: "thinking in math" means glomming onto knowledge of linear algebra.
The other seems to be thinking in interconnections, minimalist definitions, and those abstract concepts that exist in math (and all kinds of things) for connecting discrete ideas into composite ideas.
One thing that bugs me is code with overly specific semantics, where it reads like that’s the only problem the code could solve.
Whereas if it’s broken into concepts and abstractions in the PLANNING stage, the code ends up less verbose, less narrowly descriptive of the one human problem, and more useful for a variety of problems.
So instead of code to balance a checkbook, I’d write code to add/subtract numbers and input numbers from my checking account.
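Something like this rough sketch, where read_checking_account is a made-up stand-in for whatever the thin, account-specific input layer would be:

def running_balance(amounts):
    # generic core: knows nothing about checkbooks, only about running sums
    total = 0
    for amount in amounts:
        total += amount
        yield total

def read_checking_account(path):
    # stand-in for the account-specific input layer
    return [1200.00, -45.10, -300.00]

print(list(running_balance(read_checking_account("statement.csv"))))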
I see a whole lot of code with too much specific semantic meaning. And in practice it ends up that we treat code in one system as highly specific to that system, which minimizes the effort put into reuse.
At least that’s been my experience at work. Ymmv
I agree with your gist: there are lots of things where studying that thing is virtuous beyond its direct application. But also, I’d contend that thought is the subject of mathematics and not just a virtuous side-effect.
And as programmers we work with mathematical objects called state spaces that have vastly more than 100 dimensions.
That said, one can easily be a competent programmer without having much formal mathematical knowledge, much like one can easily be a competent ball player without knowing the differential calculus. However, just as modern ball players improve their games with computer-aided mathematical analysis of their swings and so on, a programmer can improve the quality of his output by mathematical analysis, in particular via the use of the predicate calculus and its (in my opinion) most useful application, loop analysis.
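A minimal sketch of what loop analysis buys you: here the invariant is spelled out as assertions purely for illustration (a real analysis would discharge them with the predicate calculus rather than run them on every iteration).

def binary_search(xs, target):
    # xs must be sorted
    lo, hi = 0, len(xs)
    while lo < hi:
        # invariant: if target is in xs at all, its index lies in [lo, hi)
        assert all(x < target for x in xs[:lo])
        assert all(x > target for x in xs[hi:])
        mid = (lo + hi) // 2
        if xs[mid] < target:
            lo = mid + 1
        elif xs[mid] > target:
            hi = mid
        else:
            return mid
    return -1

print(binary_search([1, 3, 5, 7, 11], 7))   # 3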
How did he solve it? Using probability theory and sets.
It's not just games, cryptography, finance, signal processing, compression, optimization, and AI that require mathematics; tons of programming does, most people just don't realize it and brute-force their way to a solution.
Lots of real-world problems can be solved with algebra, calculus, Boolean algebra, linear algebra, geometry, sets, graph theory, combinatorics, probability, and stats. What typically happens is that most programmers are given a problem, and what do they do? They start thinking in code. How did we solve problems before computers?
Apply that kind of thinking, then solve the problem with mathematics. Your code will often be much smaller and denser. Sure, dealing with output and input doesn't require you to write mathematical code, but the core of your problem can often be solved with some mathematics.
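A toy illustration of the difference, say "which customers bought both products?" (the names are made up):

buyers_a = {"ann", "bob", "cat"}
buyers_b = {"bob", "dan"}

# thinking in code: nested loops, O(n*m)
both = [a for a in buyers_a if any(a == b for b in buyers_b)]

# thinking in sets: one intersection, and the intent is obvious
both = buyers_a & buyers_b
print(both)   # {'bob'}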
Unfortunately, it's incredibly common.
The result is almost always a mess. Functions that are never called, parameters that are never used, because they discovered their mistake as they were coding but never went back and cleaned up the stuff they don't use anymore. Broken logic, poor performance. Functions with a mess of loops and if statements, nested like 10 indents deep.
You can tell by looking at code if they were making it up as they were going versus implementing a solution they had thought through before starting coding. It's painfully obvious.
When you try to solve your problem by coding, I think you are forced to take a myopic view of only subsets of your solutions and it's near impossible to step back at this point and come up with a nicer, more abstract and probably more concise solution. The solution comes out spikey.
Of course when doing it like this you write a lot of code which later is unused or bad. But I think that will always happen and it's just a matter of having the discipline to continuously clean up after yourself.
Nobody criticizes the sculptor for the clay that ends up on the floor, and clay is heavy. The bits we carve away have no mass and don't need to be swept up; all we have to do is cut them away, revealing the final program.
Programming may not be (all) math, but it's not art, either.
Maybe the kind of common 8-5 office programming around business logic is not, but designing any bigger project is definitely art.
How do you make a statue of an elephant?
If you don't have a reasonably detailed idea of what you want and how to achieve it, you are unlikely to get it.
That is also a useless analogy. Do bridge builders get to test and re-test their bridges in the real, non-simulated world? Can they instantly make a copy of their bridge with a few critical differences and see how the two behave? Can they re-build their bridge in minutes?
Metaphors aside, I think history is ample evidence that "coding your way around a problem" rather than conceptualizing a solution first is a perfectly valid way to approach professional programming. It's not the only way, and it has drawbacks which others have pointed out here. So does the conceptualize-first approach: you might solve the wrong problem, make something inelastic in the face of changing requirements, or fall into the psychological trap of being attached to your mental model even when it turns out that you really didn't think of everything and have to make changes on the fly.
I'm really tired of people being dogmatic about either approach ("move fast and break things/pivot; anyone else isn't really interested in getting stuff done!", "you're just a messy code monkey unless you can hold the solution in your head before you start!"). It's almost always veiled arrogance rather than honest improvement-seeking, in my experience.
> I'm really tired of people being dogmatic about either approach
Exactly - and the implication that I am being dogmatic is a straw man. I am simply opposed to arguments that depend on poor analogies.
Furthermore, all of the bad things that you say can happen if you try to think ahead are at least as likely to happen if you don't, and especially if you have gone in the wrong direction for some time (I know the latter is a manifestation of the sunk-cost fallacy, but it happens a lot on real projects).
Oh wait, that is actually how architects work. In fact, at my work we have multiple CAD designers (not architects, though), and it's not uncommon for them to completely throw away a design and start over. I think code should be mostly the same.
Of course, but the Apollo 11 lunar lander was created without the aid of ubiquitous desktop computers. I imagine the SpaceX guidance/control software was written in a way that less resembles bridge-building/Apollo 11 lunar landers and more like the organic processes we see elsewhere in the software industry.
If Neo were to build a bridge in the Matrix, chances are his processes would bear little resemblance to those of the Army Corps of Engineers.
For the guidance/control systems, I bet you're wrong.
Software is a design practice/process. Not a building process. Any analogy should be to the design phase of other engineering disciplines.
The CAD designers absolutely test whether things work. Why do you think almost every engineering bureau has 3D printers?
Sure, but it is not the only one. You are allowed to think at other levels, and it can be quite useful, especially on larger systems.
The problem of this approach is that it does not scale to large systems. If you don't spend much time on thinking in the abstract about how it will work and what might go wrong, then, by the time you have written enough code to find that out, you may have gone a long way down the wrong path, and not all architectural-level mistakes and oversights can be patched over.
No-one does this perfectly -- even people using formal methods will overlook things -- but, on a big project, if you don't put much effort into thinking ahead about how it should work, and try to identify the problems before you have coded them, you are likely to end up where, in fact, many projects do find themselves: with something that is nominally close to completion but very far from working. Those that are not canceled end up looking like legacy code even when brand new.
Big projects should be cut into smaller pieces where each piece can be relatively easily rewritten.
To come up with the right smaller pieces, you have to think about how they will work together to achieve the big picture. That means interfaces and their contracts, and if you get them wrong, you end up with pieces that don't fit together, and do not, collectively, get the job done.
Big problems cannot be effectively solved in a bottom-up manner, and perhaps the most pervasive fallacy in software development today is the notion that the principle of modularity means you only have to think about code in small pieces.
What do you think other engineering disciplines do? They create a proof of concept, verify it works, and then create the real thing. That is why "real" engineering companies have hundreds of tools to test stuff.
I really don't understand why people want software to be different. If you're writing some shitty throwaway web app, then sure, go ahead and don't prototype anything; just hire a "software architect" that designs something and use that.
But if you want something that actually works, then that is completely useless. Prototype, verify, start over if necessary. That is the way to write quality software.
That's beside the point. The point is that coding is not the only way to verification, especially at the architectural level.
> I really don't understand why people want software to be different.
It seems to be you who wants to be different. Making prototypes is expensive and time-consuming, so engineers try to look ahead to anticipate problems. Prototyping in software is cheaper, but not so cheap (especially at the architectural level) that thinking ahead isn't beneficial.
If it's the former then this is part of building it. An implementation without proper testing is incomplete. If it's the latter I actually agree. Only the most sensitive of applications require that level of sophistication though.
The prototype is generally a mess, but I throw that out anyway.
Code, after all, is cheap (and often totally worthless). More developers should adopt this view. I’ve seen engineers more times than I would care to admit get attached to some piece of code, as if it was some piece of themselves. Code is more akin to dogshit than the limb of a dog.
In my experience, the problem levels go differently than one could naively expect. Data structures, abstractions, module interfaces - all problems dealing directly with code - are best solved first on a whiteboard, where evaluating and iterating through them is cheap and effective. User interfaces, user experience, usefulness of a part of a program - things dealing with business and user needs - are best solved through prototypes, because you can't reasonably think through them on paper, you have to have a working thing to play with.
That's what doing math is like too - just substitute axioms, mathematical objects (whether numbers, sets, rings, or whatever is under discussion), potential lemmas and approaches, what bag of mathematical tools (theorems) you can use, and how much closer to a solution you get when you shift terms in your formulae around.
Then you write it all down (if you haven't already), simplify it, and clean it up before showing it to others, just like you would code.
Also, you can map programs to proofs and vice versa: https://en.wikipedia.org/wiki/Curry–Howard_correspondence
All code boils down to operations that can be described mathematically. Software is applied mathematics (with a sprinkle of art, perhaps). I think the reason why some people feel that programming is not closely related to mathematics, is that programmers are thinking and working on top of so many layers of abstraction, it's almost like working with the "stuff of the mind" itself, with models, processes, flows, transformations, events, composing behaviors.
That said, I relate to what the grandparent commenter is saying. Software allows me to think with visible, malleable and "living" mathematics while building up a system, to ask questions and have a dialogue with it.
>> there's more ways to think about problems and solutions than..applying formal mathematics
I agree with this. Often a "looser" approach is needed to explore a problem space, and formal mathematics may not be the best medium for creative problem-solving. On the other hand, the qualities that are valued in software - types, functional programming, test-driven development, etc. - are all about proofs. Not necessarily mathematically rigorous, but the closer you get, the more reliable the logic.
Programming's friendlier to algorithmic thinking (versus equation/identity and proof). The former's really easy for me, and while on paper (aptitude test scores) one might think the latter would be too, it's very, very not. I've only relatively late in life realized I need to reframe any non-trivial math I encounter in terms of algorithms to have any hope of understanding it. It's probably why I bounce off—understand well enough, just strongly dislike—programming languages that try to make code look more like a math paper (more focus on equality/identity and proof-like structures).
And yeah algorithms are math, but lots of math's not really algorithms and when someone writes "think in math" that mostly means "think in proofs" to me. If they mean "think in algorithms" then that's close enough to programming—as I see it—already that it's a pretty fine distinction.
Whereas actually ”mathematical thinking”, like coming up with a proof, is an incredibly intuition-guided process, a parallel heuristic search in the solution space, a fundamentally creative endeavour. And as your intuition comes up with promising paths through the search space, you write them down, formalize them, probably discover some corner cases you have to handle, and either continue down that path or realize that it is a dead end and you have to backtrack.
At least to me, this process is incredibly similar to programming effort. You come up with subsolutions, formalize them, fix issues revealed by the formalization, carry on with the next subsolution or realize that approach can’t work after all, and come up with something else.
There appear to be two distinct kinds of programmers that are about equally effective: ones that think through the problem first and then write down the solution on the one hand, and ones that start with something close and then iteratively refine it into the desired result on the other hand.
When you’re doing things like writing documentation, this is important to remember as the two kinds of programmer will approach the documentation differently — important information needs to be put where both approaches will find it: http://sigdoc.acm.org/wp-content/uploads/2019/01/CDQ18002_Me...
They group these styles as opportunistic versus systematic approaches to programming. Paraphrasing below:
Opportunistic programmers develop solutions in an exploratory fashion, work in a more intuitive manner and seem to deliberately risk errors. They often try solutions without double-checking in the documentation whether the solutions were correct. They work in a highly task-driven manner; often do not take time to get a general overview of the API before starting; they start with example code from the documentation which they then modify and extend.
Systematic developers write code defensively and try to get a deeper understanding of a technology before using it. These developers took time to explore the API and to prepare the development environment before starting. Interestingly, they seemed to use a similar process to solve each task. Before starting a task, they would form hypotheses about the possible approach and (if necessary) clarify terms they did not fully understand.
Perhaps there is little correlation between those who excel at coding at a young age and those who go on to be good programmers when they get older. I just find it interesting that at this young age I see a correlation between coding skills and language skills more than math (really just arithmetic) skills.
Another observation was that we did the Hour of Code activity in December last year with Year 2 to Year 6 students (equivalent to Grade 1 to Grade 5 in the US). And in each group there were one or two students who really stood out. And every one of them was a girl. Small sample size of only about 100 students, so maybe I shouldn't be wondering what is going on here.
As the other comment above mentioned, I think this has to do with education of the teachers. Very few teachers know what math is either.
High-level math values logical and linguistic skills.
This is often a hard stopping point for many students who were good at high school computation like calculus.
This is the standard thinking of someone who's not deep into math but deep into programming.
The two are deeply interrelated and in actuality are one and the same. Knowing math provides deeper understanding of programming. If you want to get better at programming in general, learning every new framework or specific technology is not the path to getting better. Learning math is the path.
I cannot show you the path for you to understand it, you'll have to walk it yourself to know.
Suffice it to say that there is an area of math that improves programming in a way you can see directly: type checking. Type checking proves that your program is type correct, and it comes from math. You know it, and probably use it all the time.
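A trivial sketch of that, using ordinary Python type hints and an external checker such as mypy:

def mean(xs: list[float]) -> float:
    return sum(xs) / len(xs)

print(mean([1.0, 2.0, 3.0]))      # fine
# mean(["a", "b"])                # mypy rejects this call before the program is ever run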
To extend this, there's this concept of dependent types which also come from math. Dependent types can prove your entire program correct.
That's right: with math you can write a single proof which is equivalent to billions of unit tests that touch the entire domain of test cases, to prove your program 100% correct. It's a powerful feature that comes from math. It's in the upper echelons of programming theory / mathematical theory and thus not trivial to learn. If you're interested you can check out the languages Coq, Agda, or Idris.
Could you elaborate on this? Mathematics is abstract/meta enough that I would consider any type of logical thinking as part of math.
> It's all just flashes of algorithms, data structures, potential modifications, moving pieces, how they all affect each other and what happens to the entire entangled web when you alter something.
That for example sounds very much like "think in math" to me.
I may be wrong but I believe the Curry-Howard correspondence disproves your claim. One can translate between the two and find that they are equivalent.
The key to solving hard problems is being able to think concretely in abstractions. The best language we have for abstraction is pure mathematics.
One of the bad patterns in the code was very complex nested boolean logic in places. Often with the same condition in several branches.
So I started using K-maps to untangle these. A few of them were much easier to read, but some of them... some of them it was unclear that all the cases were addressed. So I started putting big block comments above those, but we all know what happens to block comments over time.
Much later, big conditionals like that I would just move to a separate function, and then split em up to look like normal imperative code, instead of like math.
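Roughly the kind of rewrite I mean; user and doc here are invented stand-ins:

from types import SimpleNamespace

# before: one dense boolean expression, the "math" version
# ok = user.active and (user.admin or (user.editor and doc.draft))

# after: its own function, split into ordinary imperative steps
def can_edit(user, doc):
    if not user.active:
        return False
    if user.admin:
        return True
    return user.editor and doc.draft

user = SimpleNamespace(active=True, admin=False, editor=True)
doc = SimpleNamespace(draft=True)
print(can_edit(user, doc))   # True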
The first rule of teamwork is stop trying to be so goddamned clever all the time. It's like being a ball hog in basketball, football, soccer. Use that big brain to be wise instead. Find ways to make the code say what it means and mean what it says. Watch for human errors and think up ways to avoid them.
Math has very, very little to do with any of that. Psychology is probably a better place to spend your time.
I think I'd call this "thinking in programming", and it seems like a great way to do it.
> Is this really a common thing? How can you try to implement something without first having had thought of the solution?
A distressingly large amount of work I've done has not been greenfield development but things that might be called "maintenance" or "integration". You're not trying to draw a picture on a blank sheet of paper - you've been handed an almost-completely-assembled jigsaw, the photo on the box, and limitless box of random pieces. Your job is then to work out which of the already-assembled pieces is wrong and which of the spare pieces can be used to fill the hole.
In this context, disposable programs are very useful for finding information about what's going on, sketching possible solutions, and finding out which plausible ideas won't work for reasons outside your control.
(e.g. this week I wrote a disposable program to use libusb to extract HID descriptors; this duplicated a library we already had but didn't trust, and enabled me to pass a problem over to the team programming the other end of the USB link.)
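A tool like that can be very small; here's a rough pyusb (libusb binding) sketch of the shape of it. The vendor/product IDs are made up and error handling is omitted.

import usb.core

dev = usb.core.find(idVendor=0x1234, idProduct=0x5678)
if dev is None:
    raise SystemExit("device not found")

# GET_DESCRIPTOR (0x06) for the HID report descriptor (type 0x22) on interface 0
report = dev.ctrl_transfer(0x81, 0x06, 0x22 << 8, 0, 4096)
print(" ".join(f"{b:02x}" for b in report))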
Some of us actually think by programming. In that sense, a REPL or notebook is probably a better medium, but the thinking is going on concurrently with prototyping.
It isn’t so much like “we are solving the problem at the same time we are writing the code for the solution” but more like “we are writing (disposable) code to help us solve the problem.”
With respect, that tells us much more about you than about math or programming.
No Haskell expert, or formal methods expert, or complexity theory expert, would ever make a statement like that.
You may be right that math is quite a distance from day to day development, of course. (I don't think I'm being pedantic here, but perhaps.)
> it's only one type of logical thinking among many types which can be applied to programming.
What do you have in mind? Design patterns and software development practices, or something else?
I think if you regard logic (in philosophy) and maths (as a huge broad field) and computing (specifically a sub-field of maths to some people), it's pretty clear that logic and computing have a huge relationship.
I can think of lots of other subfields in maths which have huge inter-relationships. Applied maths, what's that got to do with probability? Well... it turns out that modelling complex systems uses Monte Carlo methods (a fictional example, I suspect; I know the Manhattan Project people dreamed MC up, but its modern applicability is unknown to me).
You don't think maths informs programming, or it's overstated? I guess that's true, in as much as poetry doesn't inform legal writing. But I observe that people who do enough poetry or writing to understand the difference between a simile and a metaphor and an allegory are really on-point communicators, and the law needs that concision and precision.
I think people with good groundings in maths (and logic) make awesome programmers, but it's not strictly necessary to be a mathematician to know how to "speak" in a programming language. What pitfalls you avoid from your knowledge, I cannot say. But I do know that huge pitfalls lie in naive programming: large loops iterating over un-initialized data structures, not understanding if-then-else logic or the side effects of expressions, tail recursion...
I think computing is a sub-field in maths. How much it matters depends on how much your code matters.
Completely agree with this. I did a Maths and Philosophy degree, and I reckon the Philosophy was more useful to my career in programming than the Maths was. Although this probably depends on what kind of programming you do.
My (heavily uninformed) guess would be the ever questioning if our assumptions are actually true or not.
I found it to be not the case. Upon reading the first chapters I started wondering how could this be useful for coding. So I jumped to one of the last chapters where they show you practical applications. Upon reading those I thought: "I can do all this in code just fine without using linear algebra".
I never touched that book again.
About two years ago or so I started to make little games for the pico-8 fantasy console. There's some math involved there but almost always you don't use the math formulas as you would in a text book, for example, for something simple like drawing a straight line or a circle, finding paths, collisions... there are very specific algorithms for that, they don't look anything like a math formula, even if they are derived from those.
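For example, the midpoint circle algorithm: it's derived from x^2 + y^2 = r^2, but the code doesn't look like the formula at all. A rough Python sketch (pico-8 would be Lua, but the idea is the same):

def circle_points(cx, cy, r):
    x, y, d = r, 0, 1 - r
    pts = []
    while y <= x:
        # mirror one octant into all eight
        for px, py in ((x, y), (y, x), (-y, x), (-x, y),
                       (-x, -y), (-y, -x), (y, -x), (x, -y)):
            pts.append((cx + px, cy + py))
        y += 1
        if d < 0:
            d += 2 * y + 1
        else:
            x -= 1
            d += 2 * (y - x) + 1
    return pts

print(circle_points(0, 0, 3)[:4])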
Just my point of view.
I'm not sure what you mean by that. I'd describe making a 3D game (engine) with rendering, collision detection, etc as probably one of the math-heaviest areas of programming outside of scientific computing or algorithm R&D.
This is exactly what programming is.
1. Are you aware that complexity analysis isn't about being precise but about being able to predict the running time for any given input from some sample? In my experience it's more of an analytical exercise, about working out worst-case scenarios and the computability of the process overall. Still, it has everything to do with actually predicting values, with the grain of salt that the method is inherently relative (up to constant factors).
2. Are you actually aware that math isn't about being "precise" in the sense of numbers but about relationships between abstract entities? Ever heard of category theory, or pretty much anything related to abstract algebra?
3. Is there anything other than math that helps with abstraction, in your opinion? From what I know, even a mediocre understanding of abstract algebra helps a lot. Please note that this question is totally non-ironic, I'd really want to know.
I suspect one of the reasons is that to a casual observer, there is no difference between someone who is thinking deeply about something, and someone who is just daydreaming. They both aren't interacting with the computer and may have their eyes closed. On the other hand, "coding" by constantly banging at the keyboard and mousing around looks productive.
I am someone who thinks deeply first, and have been told off about it because they thought I was sleeping or otherwise not working.
Well, any fool can write a loop. But to do the same thing in constant time instead one might need to use some math.
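The classic toy example of that:

n = 1_000_000

# O(n): any fool's loop
total = 0
for i in range(1, n + 1):
    total += i

# O(1): Gauss's closed form for the same sum
total = n * (n + 1) // 2
print(total)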
> Programming languages are implementation tools, not thinking tools. They are strict formal languages invented to instruct machines in a human-friendly way. In contrast, thoughts are best expressed through a medium which is free and flexible.
I don't find math to be "free and flexible," at least, not compared with prose. It's more of an uncomfortable middle ground. When I write code, the computer forces me to be 100% precise and will spit errors at me as soon as I do something wrong. When I write prose, I can sort of proceed however I like within the very broad allowances of English grammar. But when I write math, I feel like I don't know what's allowed and what isn't, what I have to prove and what I can take for granted, what I have to define and what I don't, etc.
> Just as programming languages are limited in their ability to abstract, they also are limited in how they represent data. The very act of implementing an algorithm or data structure is picking just one of the many possible ways to represent it. Typically, this is not a decision you want to make until you understand what is needed.
I disagree pretty strongly on this point. I find that implementing the structures involved in a problem almost always gives me a better understanding of it and helps me find the solution.
This is similar to math though, with the exception that you don't have something or someone telling you that you made a mistake, but you have to seriously question every step in your proof by applying the rules of logic and your knowledge. At a certain point you develop sufficient intuition to spot steps that might be wrong. Most mathematicians ignore steps, which they are not completely confident of being true for the time being, assume the step is true and continue to see whether their derivation leads to what was to be proved, only later checking the steps which they had doubts about. It's more similar to programming than you think. If you like the logical part and algorithmic thinking involved in programming, I'm sure you would also enjoy math if you'd give it a real chance.
These are my favorite videos of his
Not trying to romanticize programming. It is a grueling and frustrating experience in my opinion and experience. But those that are good at it can be exceptional at it.
Development isn't my day job but I read a lot of code and there are those that can write code that is simply beautiful to view and that is highly functional. It truly is an art.
Math is a secondary concern. Sure, if you are working on hardcore algo stuff, it's heavy on math. But the great, great majority of programmers are not doing that. They are writing logic to achieve a business goal using existing primitives.
When I was starting out in my career, I worked at a hedge fund. The fund had a bunch of physicists and mathematicians working on models, and they actually wrote the code for those models. They wrote some of the worst code I've ever seen. For example, rather than structuring their code properly, they would use exceptions to pass messages around. If function A needed some information from many levels deep in the stack, they would just throw an exception with the message inside. Function A would catch the exception. These weren't actual exceptional conditions, but they didn't want to refactor their code. As you can imagine, their code had horrific performance.
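For the curious, the antipattern looked roughly like this (all names invented):

class Smuggle(Exception):
    def __init__(self, payload):
        self.payload = payload

def deep_inside_the_model():
    raise Smuggle({"price": 42})      # not an error, just a shortcut up the stack

def run_model():
    deep_inside_the_model()           # imagine many layers of calls in between

def function_a():
    try:
        run_model()
    except Smuggle as e:
        return e.payload

print(function_a())                    # {'price': 42}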
If you abstract away all the details a lot of concepts and constructs in CS look very similar but some of those details that were abstracted away are going to matter a lot when the code is actually run.
I posted about this elsewhere in the thread, but a deeper insight for you may be (was for me) that you have an easier time thinking in terms of steps in a process—algorithms—than identities and proofs.
Couldn't disagree with that more. Math is a language of its own, with a lot of extensions and variations introduced by the people involved. Basically any sound formal system (usually the talk is of rings) is viable; therefore math is indeed free and flexible. All you have to do is escape the box of spoken language, the same thing you'd do to learn a programming language that isn't "verbose" enough to let you keep thinking in phrases instead of in the language in question.
Then you haven't done math to a sufficient level. Which most people don't if they aren't math majors.
I'm not talking about calculus or differential equations, etc. Even engineering and CS focus almost entirely on calculation (though CS has its own kind of proofs, which are more what I'd call math). Besides mathematicians, only physicists occasionally look at math this way.
At a certain point, math is about proofs which are a kind of rigorous prose. My math tests in upper level courses were done in essay blue books up to 10 pages of single space text, on one particularly long test.
There are multiple ways to prove a theorem. There are multiple ways to write a program. Some are shorter, some are longer. Some are more cryptic and hard to follow. Some rely on the work of others to outsource your own efforts. They are really quite similar, except that math doesn't have a compiler (Coq and its ilk excluded).
Lamport's TLA+ makes this formal. It is a language based on simple mathematics + some temporal logic for reasoning about discrete systems (software, hardware) as well as hybrid discrete-continuous systems, and is increasingly used in industry to lower the cost of software development (Amazon, Microsoft, Oracle and others). The idea of directly representing the relationships between abstractions and their implementation is the organizing principle of TLA+. For example, no program in any programming language (at least not in its runnable portion) can directly express Quicksort, as even though its specification (https://en.wikipedia.org/wiki/Quicksort) is complete, none of its steps is deterministic enough to be conveyed to a computer; the best a programming language can do is describe one particular implementation (/realization/refinement) of Quicksort. In TLA+, you can specify Quicksort itself precisely, and then show that a particular sorting program is indeed an implementation of Quicksort.
I wasn't aware that TLA+ made this part possible. How do you map from TLA+ to C++ (for example) with certainty?
... But you probably don't really want to do that. Code-level verification using any "deep specification" tool (TLA+, Coq, Isabelle, Lean, F* etc.) is extremely limited in scale compared to specifying in TLA+ at a higher level. Because there is no known way to directly verify programs larger than several thousand lines affordably, and because that's precisely the kinds of programs that most engineers need to verify most of the time, it's far more common to use TLA+ at a higher-than-code level.
However the goal isn't often to verify every single line of code. That would be prohibitively time consuming and expensive. The ideal use for this stuff is to verify the hard parts that are critical to get right. Verifying that a critical section of code will not lead to a deadlock or resource contention might be really important and so you could start with verifying that particular system.
I'm starting to see people say "I shouldn't learn PlusCal because it's not really TLA+", get disheartened about how difficult TLA+ is to learn, and believe they aren't able to use formal methods. I'd rather 10 people use a slightly-more-limited tool than 1 person use "the real thing".
Everything has to be "easy", so anyone can understand from a basic level.
It's one of the reasons we don't like verifying software using TLA+, Coq (proofs for programs), refinement types, or functional programming techniques. "It's too difficult for the average programmer."
The first excuse is that we don't have enough time to do things "right". When I bring up the metrics on how much time we waste fixing bugs after the fact, it moves on to the "too difficult" argument.
The result is we have an entire ecosystem full of buggy unreliable software.
Rarely do companies and startups have the luxury to just lean back and take their time. It's a race against competitors, and everyone wants the advantage of being first.
I don't blame the devs as much as I blame the market. People start taking shortcuts when they're judged by how fast they can crank out code, and whether they can finish their sprints on time. You develop a culture of constantly putting out fires.
Hell, for some consulting firms this is a profitable business model: deliver a partially working product, then spend 10-15 years patching it up, on your client's bill.
We have very deliberately developed technologies that enable rapid development. For "make it so a little message appears on the screen saying what day it is" features this works very well but it means even our formal analysis methods fall over on dynamic languages with heavy framework use.
This isn't just laziness. I'm a hardcore PL person and even I think that soundness is largely a mistake when making real analysis engines.
Formal reasoning feels very empowering.
You write down assumptions, apply transformation, arrive at conclusion.
Proof one Lemma at a time.
Work through some examples.
Eventually you arrive at a deeper understanding of the problem, and maybe even a solution.
When I started working in Software, I largely lost the ability to reason formally about the thing I am doing.
Also I need manuals and computers around to make progress.
This still frustrates me today.
And I have tried hard to reason formally about code, to gain back this experience:
- Doing Lambda Calculus by hand is possible.
- LISP is already tedious (scoping rules/mutable state!).
- Register machines are a nightmare to do by hand (See Knuth's books)
- The semantics of C are nearly impossible to write down by hand, and work with.
I am excited about this blog post, because it shows a new way of approaching the situation.
Model the problem domain with mathematical language, and leverage the Lemmas/Theorems in your implementation.
Some food for thought. Thanks, Justin!
I mentioned in another comment, I am really emphasizing modeling the problem mathematically, not really using formal programming methods like lambda calculus. Sounds like you got my idea!
Expressible conscious thought is limited to just a bit beyond what is readily typed or spoken. Not all thought is conscious. Just about every take-home test in grad school, I'd wrestle with the problem, go to bed thinking I was going to flunk, then find myself writing down the answers over breakfast.
There's the "tip of the tongue" experience, where you know you should be able to know something, but can't quite get it out. This not only happens with memory. It also happens with problem solving. This tells me that there's also unconscious thought and inexpressible thought we are conscious of.
It's abstract reasoning, as is math. And that's about as deep as the similarities go.
Math can't deal with imperfect input and/or side effects without turning into something else. And code without real world side effects is useless, as is real world perfection.
This is just not true. Differential equations certainly deal with change over time (except it's not called "side effects" there) and imperfections. The analog for discrete systems is temporal logic.
See stats, probability, information theory, and signal analysis
It very much can though.
There's no math equation for reading from a socket. Code is not math and math is not code. It's possible to sort of hide the fact by stacking enough abstractions on top, but in the end it's going to be the same old code that makes it happen.
Thinking in Haskell is the same feeling as thinking about math research. I know mathematicians who can only code in Haskell.
The trouble with discussing languages online is it's harder to assess if each party has actually used each language. The dogma in such discussions is completely "welcome to my world" familiar to me as a mathematician. We all have different opinions, and we're all sure we're right.
Making the jump to stating that everyone would be better off coding math-style, which is what some are desperately trying to pull off, doesn't make any sort of sense.
It's remarkable how far you can get within such a rigid and formal framework. But for most messy real world problems, there are better solutions. Lisp being the most powerful invented so far.
As the author acknowledged, real life rarely allows such a clean division.
One tool that I find very useful to interleave the three - or at least to allow shorter loops - is jupyter notebooks. The name is quite accurate, it can be used as a notebook to come up with solutions, and can easily be discarded once used. Unlike prototype code which has a tendency to evolve into the final codebase.
It's a common tool in data science but I'm not sure about other fields. Has someone used it for other purposes?
You're right that if I am going to add another template to a Django site, there isn't much math to think about. But anything bigger than that, there are always questions worth considering.
Is there a similar programming language that makes mathematicians feel at home? Something that makes them feel that they would rather write their implementation in that language itself instead of writing it with math notation on paper first?
It’s exactly like Paul Graham says, you might think that Python is just allowing you to write executable pseudo-code, but the interaction isn’t so simple.
I’ve programmed a lot of Python, and when I first started out I felt like it was very frictionless, like you said. An easy way to put down thoughts. But as I learned more about functional programming and type theory, I realized that Python is inadequate and operates at too low a level, i.e. it feels like there's so much friction there.
I have used a variety of languages professionally (Scala, Haskell, OCaml, Racket, C, and Python mostly) and they all fall short (some more than others) on what I feel like I should be able to express. But if I had to chose, I would probably say OCaml or Racket come the closest to my thoughts, depending on the problem.
Anyway, my point is that it’s not obvious how your tools affect the level and abstraction of your thoughts. It’s almost always a bi-directional relationship, and therefore, choosing (or making) the right tool and method of abstraction is very important. See Beating the Averages. PG talks about a hypothetical language called Blub. Blub isn’t the best, but it’s not the worst either. If there was a platonic form of Blub, it would most definitely be Python.
Also is it Racket specifically that makes it a good contender or is it the fact that it is a Lisp that makes it a good contender? Would any other Lisp like Scheme or Clojure or Common Lisp be equally good?
The following line of code produces all the prime numbers below the value R.
T = list(range(2, R))                  # T←1↓ιR
U = [t * s for t in T for s in T]      # T∘.×T (flattened outer product)
primes = [t for t in T if t not in U]  # (~T∈U)/T
I wrote some hacks in APL,
each on a single line.
They're mutually recursive,
and run in n-squared time!
import numpy
T = numpy.arange(2, R)
print(T[~numpy.equal.outer(T, numpy.multiply.outer(T, T)).any(axis=(1, 2))])
A much more compelling demonstration of APL is, in my mind, the interactive development process leading up to the one-liner Game of Life in this livecoding video: https://www.youtube.com/watch?v=a9xAKttWgP4
† This is not the Sieve of Eratosthenes, despite the article title; the Sieve is an immensely more efficient algorithm than trial division, producing the same results in near-linear time.
The problem is not writing stuff down. The problem is reasoning about what you have written down.
With popular languages it's really hard to say what a line of code does.
It depends on so many things (global state, scoping, local state), that you have to spell out.
Google "Semantics of Programming Languages" to get an idea, what's involved with formally reasoning about code.
To have a chance to do manipulations by hand, you have to give up, at least:
- Mutable State
- Side-effects (I/O)
(pure) Scheme and Haskell come into mind as contenders.
It feels a bit like having a programming language that uses unique emojis as function names.
Personally, I find it much easier to read code (from a high-level programming language), than to read math formulas.
Instead, think of a formula like a very dense sentence in a novel.
The protagonists are usually introduced in the paragraph before. You are assumed to know their names to make sense of the formula.
I can program just fine, write functions left and right, receive data, do something with it, return a result...
But in maths? Oh dear god no I couldn't write a math function on a piece of paper to save my life.
Someone recently pointed out the parallels and while I couldn't deny it, as it was plain as day... I never once considered it during all these years.
This is a shame, because we spend a lot of effort in deliberately avoiding common abstractions in case it scares programmers away, but really we're just making it harder for everyone to learn all these things. We either eventually recognise the underlying pattern through our brain's capacity to extrapolate from examples (hard, slow), or learn the pattern explicitly and recognise it, or often never learn the pattern and have to learn each new language's way of doing things one by one.
And of course when we either don't know or deliberately avoid effective abstractions we make or propagate mistakes. See Java's first crack at Future, for example, which was practically useless since the only thing you could do with a Future was wait for it to be the present.
I don't know what you mean by "fancy algorithms", but all algorithms that run on your computer have a basis in math.
As for making things go as fast as possible, well that's a special case of the field of optimization, another mathematical discipline.
These days I'm trying to be mostly an embedded guy, and 100% understand what you're talking about re: problems that don't lend themselves well to mathematical modelling. Figuring out that your SPI bus is going slow because you've got the wrong multiplier in a clock domain isn't a math problem :)
What I'd like to add to your y = f(x) examples though is that many Business Problems can (and probably should!) be modelled as y=f(x) type problems. I've seen a ton of business logic over the years that modifies objects in a pretty ad-hoc manner and is incredibly hard to reason about, especially in the big picture. The vast majority of the time, those problems can be modelled roughly as:
new_state = f(old_state, event)
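i.e. something in the spirit of this sketch (the event names are invented):

from functools import reduce

def apply_event(state, event):
    # pure transition function: never mutates the old state
    if event["type"] == "deposit":
        return {**state, "balance": state["balance"] + event["amount"]}
    if event["type"] == "withdraw":
        return {**state, "balance": state["balance"] - event["amount"]}
    return state                                  # unknown events leave the state unchanged

events = [{"type": "deposit", "amount": 100}, {"type": "withdraw", "amount": 30}]
print(reduce(apply_event, events, {"balance": 0}))   # {'balance': 70}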
I've done a few embedded implementations that had pretty complicated state machines under the hood (off the top of my head, a LoRaWAN implementation). I modelled the states in TLA+, and it was a wonderful platform for discovering the flaws in the model I'd put together. It took a couple iterations before the model checker was happy, and from there the implementation was mostly mechanically translating my TLA+ model into code. There was some housekeeping stuff to keep track of (the TLA+ model was an abstraction), but it pretty much worked first try.
I'm not sure how you can claim that the entire field of optimization is a "mathematical discipline". Algorithm analysis is, I suppose, but most other practical optimization work has little if anything to do with math.
When I've spent time doing optimization work, it has often involved things like:
* Discovering some API I'm using is particularly slow and finding an alternate one that's faster.
* Adding caching.
* Reorganizing code to take advantage of vector instructions. (Well, I haven't personally done this, but I know it's a thing many others do.)
* Reorganizing data to improve CPU cache usage.
* Evaluating some things lazily when they aren't always needed.
* Making objects smaller to put less pressure on the GC.
* Inlining functions or switching some functions to macros to avoid call overhead.
* Tweaking code to get some values into registers.
The first two cases are somewhat special:
- It may be immediately obvious that an API is terrible, and that the replacement is not. If API 1 takes 1 sec to call, and API 2 takes 100ms to call, easy choice without stats.
- Caching can be dangerous. While not really a stats problem, you do need to have a really solid model of what is getting cached, and how to know when to invalidate those cache entries.
For the rest of the examples you provided, you're making changes that may make the problem better, may have no effect, or may make the problem worse. You absolutely need to use statistics to determine whether or not changes like those are actually having an effect. Performance analysis is part math and part art, and without the math background, you're likely going to be spinning your wheels a bunch. Beyond stats, fields like queuing theory are going to make a huge impact when you're doing performance optimization in distributed systems.
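Even the bare-minimum version of that statistics step helps; in the sketch below, old_version and new_version are just placeholders for whatever two implementations you're comparing:

import timeit, statistics

def old_version():
    return sum(i * i for i in range(1000))     # stand-in for the code being replaced

def new_version():
    return sum(i * i for i in range(1000))     # stand-in for the "optimized" code

def bench(fn, repeat=30, number=200):
    times = timeit.repeat(fn, repeat=repeat, number=number)
    return statistics.mean(times), statistics.stdev(times)

mean_old, sd_old = bench(old_version)
mean_new, sd_new = bench(new_version)
# if the gap between the means is small relative to the spread, the "win" is probably noise
print(mean_old, sd_old, mean_new, sd_new)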
> Discovering some API I'm using is particularly slow and finding an alternate one that's faster.
On its own it has nothing to do with math, but writing code as components/services/abstract layers with well-defined boundaries/interfaces/types means it's easier to reason about the code and avoid bugs. Implicitly here I'm saying we should use a language that has strong type support.
> Adding caching.
This is memoization, and without the expectation that ordinary code functions behave like their math counterparts, it is hard to reason about.
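The standard-library version of that idea; it's only safe because the function is pure, i.e. it behaves like its math counterpart (same input, same output, no side effects):

from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(80))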
> Reorganizing code to take advantage of vector instructions
These are mapping operations that are well defined in functional languages. The vectorized interface numpy provides is an abstraction of maps.
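e.g. the explicit loop disappears into a single array expression:

import numpy as np

xs = np.arange(1_000_000)
ys = xs * 2 + 1        # one vectorized "map" over the whole array, no Python-level loop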
> Making objects smaller to put less pressure on the GC
This is orthogonal to the actual math basis for the code. For example using enums over strings is a localized change.
Why "as fast as possible"? Usually when programmers say this it's because they think "the faster, the better!". But obviously there is some speed beyond which there's no discernible improvement. At that point, pursuing the as-fast-as-possible mandate at the expense of other concerns is the wrong thing to do.
Therefore, there's an assumption built into this statement that the system will never be that fast, and therefore "as fast as possible" is a good target. Needless to say this isn't a safe assumption.
The thing is that we're built to be really bad at knowing when we're making assumptions. Thinking mathematically is to some extent a way of trying to overcome that limitation.
Why would mathematical thinking involve rejecting physical realities? If there is a performance constraint you are trying to optimize, account for it.
> Write a program that captures network packets and stores them on disk as fast as possible
How are you going to do that without formulas involving:
- network bandwidth
- disk bandwidth
- SATA bandwidth
- packet ordering
- compression time and ratios
- Amdahl's law
This sounds ripe for mathematical reasoning! Absolutely the way you model the problem is informed your knowledge of the hardware.
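Even a back-of-envelope sketch forces the assumptions into the open; every number below is made up:

nic_gbit_s     = 10
disk_mbyte_s   = 550                 # one SATA SSD, roughly
compress_ratio = 2.0                 # hoped-for ratio on this traffic

ingest_mbyte_s = nic_gbit_s * 1000 / 8
needed_mbyte_s = ingest_mbyte_s / compress_ratio
print(needed_mbyte_s <= disk_mbyte_s)   # False here -> need more disks, or better compression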
Do you think they also do a lot of graph theory about it?
A small difference along one of these directions amplifies itself over time, as problems are assigned based on aptitudes and interests, resulting in a growing division. And I don't know if there's a consistent ratio across industries or businesses, but at the shop where I work, it's roughly 90% qualitative, and 10% quantitative. Any problem involving math is brought to the small handful of "math people." They end up being busy enough with just that kind of work and nothing else. If a project runs into a math problem, it grinds to a halt until one of the math people can spare some attention.
Likewise, the qualitative folks are perpetually busy too. Nobody is short of work because of gaps in their abilities. So I think it's quite fair to say that a programmer can do without math, if they find the right niche, but a programming project might need one or two math people from time to time.
At least, for functions use self-documenting names instead of "f", "g", as in math.
Then you're thinking in math and also writing in math, so you're getting the second bit wrong. You need to write in code, and code should be optimized for readability.
Actually, I think a good, justly polymorphic function should probably have a name of reasonable length and parameters called `a`, `x`, `f`, since the parameters convey almost no information. If they have any greater length, it's just a restatement of the known information about the type.
The more unique information a name conveys, the less information the types convey and the bigger a chance you have of bugs or coding yourself into a hole.
But also, if you've called it `x` hopefully there's only one thing it can be and its scope is just one or two lines so you've written a single unit. If there's anything else `x` could be, then you've got a problem - your unit is too much.
Not every piece of code should be written this way, but your vocabulary should be built up of pieces like this.
I don't. One of the things I like most about eslint is the id-length rule - https://eslint.org/docs/rules/id-length
I think there are a lot of advantages to abiding by styling conventions and using linters to set a baseline, but there are always exceptions to the rule.
Note that the id-length rule also allows you to limit the maximum length of a variable name. This is why.
/* product of luminance and dot product of surface normal and light direction divided by scattering constant */
const prod = ...
In math and in programming this isn't always the case. Often you have a variable "epsilon" that means "the tiny amount of space between this circle and the other circle that is shrinking".
In these cases there is no good name to give the variable. The meaning comes from the context.
In programming I follow the practice of using longer identifiers for more global scope, and shorter ones for smaller scope.
Which do you prefer?
(define (sqr x) (* x x))
(define (sqr number_to_square) (* number_to_square number_to_square))
(define (s x) (* x x))
Which is effectively what the OP does when they write their example. Their simple example quickly becomes unreadable after a few definitions.
Coming to math from computing, the only explanation I can come up with for the disaster that is mathematical notation is that mathematicians are universally sadomasochists.
And in the design space when we're thinking about problems of concurrency or liveness there are great tools like TLA+ that take a pure mathematical model and automate the checking that it satisfies our expectations. 
It's not all figures and drawings these days! I see maths and engineering integrating more closely in the future.
Not to me. The statements suffer from a common math problem: using single letters. If the names were better, maybe they would be obvious. But instead I have to keep referring back to earlier definitions to remember what they were.
I have to do something similar with p(1) and p(2) - I need to make sure my memory is correct on which data is in which place. If you could reference them in a more obvious way that would help.
I also have to make an assumption from the very start - what "t" refers to only becomes obvious in definition 3, even though it was already used in definitions 1 & 2.
It's ironic that the article recommends thinking in math and writing in code when they have thought in math and written in math.
Often when I have a problem that isn't easily expressed in mathematical notation (at least with my limited knowledge of math), I usually have a good idea of how I could express it in code.
When I write pseudo code it often feels like I already have the code in my mind before I describe it in plain English. That feels like a waste of time. So pseudo code doesn't feel like a great tool for expressing models of my programs.
So if you want your programming to reap the benefits of (others', mostly) mathematical reasoning--use a functional language that is all about expressing the ways in which things compose!
IO is modelled pretty well through monads. As are many other things, like nondeterministic processes, exceptions, state, etc.
I do prefer functional languages over imperative ones, but I don't understand how that is connected to modelling software on paper?
> IO is modelled pretty well through monads. As are many other things, like nondeterministic processes, exceptions, state, etc.
Isn't there any easier way to describe I/O than with category theory? I understand that it's _possible_ to model IO with monads, but how would I communicate it to colleagues who don't have a background in category theory (or to myself, for that matter)?
Part of the benefit of modelling a program on paper should be to make communication easier. And requiring people you communicate with to have knowledge of category theory to understand your design feels silly.
I probably misinterpret you, so could you please give a more detailed explanation or example?
From there, input handling is just a state machine. Easy to draw on paper as a graph.
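For instance, a hedged Haskell sketch (the states and transitions are made up for illustration): input handling as an explicit state machine, which is also trivial to draw as a graph on paper.

    import Data.Char (isDigit)

    -- Hypothetical states for reading a signed integer, one keystroke at a time.
    data S = Start | Sign | Digits | Reject deriving (Show, Eq)

    step :: S -> Char -> S
    step Start  c | c == '+' || c == '-' = Sign
                  | isDigit c            = Digits
    step Sign   c | isDigit c            = Digits
    step Digits c | isDigit c            = Digits
    step _      _                        = Reject

    -- Fold the input through the machine; accept only if we end in Digits.
    accepts :: String -> Bool
    accepts s = foldl step Start s == Digits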
> Part of the benefit of modelling a program on paper should be to make communication easier. And requiring people you communicate with to have knowledge of category theory to understand your design feels silly.
This is an odd complaint, because you can also say:
requiring people you communicate with to have knowledge of (state machines | graphs | `if` statements | ...) to understand your design feels silly.
I didn't convey my question clearly. What I'm wondering is how I should express programs, or parts of them, using mathematical notation when I don't see them as being mathematical in nature to begin with?
if name == "Bob":
    print("Hello, " + name)
It just feels weird that to convey this simple program on paper both I and the person I try to communicate with needs to have a grounding in category theory.
Hope you're able to understand my question. :)
This is both definitely useful and definitely math.
Thanks! You don't happen to have some learning resources for modelling programs as state machines? I can't find anything when I search.
You might also know these as flowcharts when applied to program flow control.
I don't think I answered the original question well. I just wanted to point out that math is more than just equations. Others have done a much better job in this thread.
I don't think you can, actually :D
I mean, for each category of programmer there is a pretty clear line separating common knowledge from things you can't expect people to know. And for pretty much every category of programmer, if statements and category theory are at opposite ends of that line.
I mean, I feel like you agree with this based on your first paragraph. His complaint isn't odd because it says you can't expect people to know category theory; it's that he thinks category theory is being presented as necessary here.
I was recently making a script engine for a game. It was really neat to realize that the "runScript" method was literally just a mapping between two monads. No special state in between, no complex logic, no file lookup or anything like that. These kinds of insight accumulate, and there's really a tonne of stuff to learn (this potential for learning from the language itself feels much greater in functional programming for me).
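For what it's worth, here is a hedged Haskell sketch of that shape (the Script type and its single effect are invented for illustration, not the actual game code): the interpreter really is just a structure-preserving map from one monad into another.

    -- A toy script monad with one effect, plus an interpreter into IO.
    data Script a
      = Done a
      | Say String (Script a)

    instance Functor Script where
      fmap f (Done a)  = Done (f a)
      fmap f (Say m k) = Say m (fmap f k)

    instance Applicative Script where
      pure = Done
      Done f  <*> s = fmap f s
      Say m k <*> s = Say m (k <*> s)

    instance Monad Script where
      Done a  >>= f = f a
      Say m k >>= f = Say m (k >>= f)

    -- runScript maps Script into IO: Done becomes pure, Say becomes putStrLn.
    runScript :: Script a -> IO a
    runScript (Done a)  = pure a
    runScript (Say m k) = putStrLn m >> runScript k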
> Isn't there any easier way to describe I/O than with category theory?
This isn't category theory! Do you really think every working Haskell programmer is some mathematician? No. Look at this random image I googled, you think Haskell programmers understand this? https://i.stack.imgur.com/4IzGk.png Most mathematicians don't!
The notion of a monad in functional programming might be inspired by category theory, but you're really better off not taking that connection too seriously. Functors, applicatives, and monads are all very simple notions that should be understood as programming constructs, not arcane math. If you want an area of math to research to most benefit your functional programming, that is undoubtedly mathematical logic and/or intro-level type theory, and not category theory. (This should take you in the direction of dependent types.)
Really, types are the key. The notion of a monad is best understood not through vague real-world analogies with sandwiches, but through the type and implementation of its >>= method. The reason for that is that the point of monads is in composition. And basic linear algebra is enough to understand the importance of composition, not category theory. Just look at the Maybe monad to immediately understand it: Nothing >>= f = Nothing, Just x >>= f = f x, where f : a -> Maybe b. Isn't this a really clear, intuitive way of composing operations which might fail?
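Spelled out as a runnable sketch (parseAge and checkAdult are hypothetical helpers, not standard functions):

    import Text.Read (readMaybe)

    parseAge :: String -> Maybe Int
    parseAge = readMaybe

    checkAdult :: Int -> Maybe Int
    checkAdult n = if n >= 18 then Just n else Nothing

    -- If either step produces Nothing, the whole chain is Nothing;
    -- otherwise the value flows straight through >>=.
    adultAge :: String -> Maybe Int
    adultAge s = parseAge s >>= checkAdult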
Same goes for IO. The only thing you're doing is composing some values. When you compose an IO Int with some function of the type Int -> IO (), you get back a value of type IO () (which your runtime executes if you bind it to main). All of this is right in the type, and it's just as intuitive a way of composing IO values as Maybe ones, IMO.
You get the added benefit of execution becoming not a side-effect, but a first-class member. Evaluation of IO programs is not their execution, you could evaluate putStrLn "asdf" a million times without it being executed. You can literally store those programs (values) somewhere and execute them later.
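A minimal sketch of both points (the action names are made up): composing IO values with >>=, and holding IO values as ordinary data that only runs if it reaches main.

    askNumber :: IO Int
    askNumber = fmap read getLine      -- read an Int from stdin (assumes valid input)

    report :: Int -> IO ()
    report n = putStrLn ("You entered " ++ show n)

    -- An IO Int composed with an Int -> IO () gives an IO ();
    -- binding it to main is what causes it to be executed.
    main :: IO ()
    main = askNumber >>= report

    -- IO values are first-class: evaluating this list executes nothing.
    greetings :: [IO ()]
    greetings = [putStrLn "hello", putStrLn "world"]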
I would say that modeling the computational process in math is not typically helpful (you already have a formal programming language); instead, you should model the real-world (or at least computer-world) problem you are trying to solve.
Carefully define operations and constraints, introduce abstractions for solving them, etc.
Do you have an example of an I/O problem you have thought about that you would like me to talk more about?
The question I'm trying to ask is how I should express (part of) programs on paper using math when I don't see how they are related to math.
Example: How could I easily express a function that takes a string as input and outputs the string capitalized using math notation?
I understand that the problem you solve in the post should probably be put on paper before you begin writing any code.
My problem is to express programs in math when "calculate" isn't part of the problem description, like it is in the example in the blog post.
Edit: Changed example question.
If I needed uppercase I would just say:
`up(s)` is a function that maps a string s to its uppercase string
This is of course assuming `uppercase` is a minor part of another algorithm. If it was the subject of discussion I might describe it like this:
Let `u(c)` be a function that maps a character to its uppercase character. In ASCII, `u(c) = c + K`, where K is some offset.
To capitalize an entire string we need to apply that function to each character.
Let `up(s)` = (u(c_1), u(c_2), ... u(c_n))
where the string s = (c_1, c_2, ... c_n)
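And the code that falls out of that sketch is almost a transcription (using the standard toUpper rather than the ASCII offset, which only covers unaccented Latin letters):

    import Data.Char (toUpper)

    -- `up` applies the per-character function to every character of the string,
    -- exactly as in the definition above.
    up :: String -> String
    up = map toUpper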
I highly recommend that book I linked in the article: "Introduction to Graph Theory" By: Trudeau
`func1(r, s)` is a function that sends a request `r` to a given server `s`. It returns a status code from the server.
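If I then wanted to move from that paper spec toward code, the first step might just be writing down the types. Everything below is a hypothetical sketch, not a real HTTP client:

    data Request = Request { path :: String, payload :: String }
    data Server  = Server  { host :: String, port :: Int }
    type StatusCode = Int

    -- Placeholder body: a real version would perform the network call.
    func1 :: Request -> Server -> IO StatusCode
    func1 _ _ = pure 200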
> I highly recommend that book I linked in the article: "Introduction to Graph Theory" By: Trudeau
I'll check it out! Does the book cover all the relevant parts you mentioned before?
Btw, it seems you don't link the book in the article. At least I couldn't find it when I looked just now.
Here is another example
> Does the book cover all the relevant parts
No, it isn't quite so comprehensive, but it will absolutely help you get started and help you decide if you want to learn more.
First thing that comes to mind is category theory, especially monads. Haskell's I/O monad for example: https://en.wikipedia.org/wiki/Monad_(functional_programming)...
I have most of my feet in the C++/Common Lisp camps these days, where you accept that the world is a messy place with complex problems that don't fit neatly into labeled boxes, and use the most powerful tools you can think of to deal with it.
> Recently, I worked on an API at work for pricing cryptocurrency for merchants. It takes into account recent price changes and recommends that merchants charge a higher price during volatile times.
You shouldn’t look at the past price for a cryptocurrency to determine the current one. If few trades happen in a period you’re using outdated data.
The price of interest for merchants who want to sell cryptocurrency is the best bid (highest-priced buy order) in a market where fiat bids on crypto (e.g. BTC/USD, ETH/USD). You should be downloading order book data from exchanges (e.g. ), and quoting the best bid to merchants.
Also, what you call high volatility (of past trade prices) might simply be a proxy for a large difference between the highest-priced buy order and the lowest-priced sell order (a large "spread"). Instead of looking at the volatility of past trades, I recommend monitoring the spread of the order book of interest. Although this might not be all that relevant, since merchants (who sell crypto for fiat) are only interested in the price they can sell for, not the price they can buy for.
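As a rough sketch of what that monitoring could look like (a toy order book, not any exchange's actual API):

    -- Simplified order book: bid and ask prices only.
    data OrderBook = OrderBook
      { bids :: [Double]   -- prices buyers will pay
      , asks :: [Double]   -- prices sellers are asking
      }

    safeMax, safeMin :: [Double] -> Maybe Double
    safeMax xs = if null xs then Nothing else Just (maximum xs)
    safeMin xs = if null xs then Nothing else Just (minimum xs)

    -- The price a merchant can actually sell at right now.
    bestBid :: OrderBook -> Maybe Double
    bestBid = safeMax . bids

    -- Spread = lowest ask minus highest bid; per the comment above, a wide
    -- spread is a more direct signal than past-trade volatility.
    spread :: OrderBook -> Maybe Double
    spread ob = (-) <$> safeMin (asks ob) <*> bestBid ob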
Additionally, I would definitely recommend learning math in English, since there are a lot of synergies with the jargon of computer science that can be really, really helpful. Even if it is just the little nudge that allows you to connect problems.
Furthermore, you can unlearn a tool if you don't use it. I learned frequency decomposition in school and forgot about it two weeks later. Only when I started to implement my own shitty JPEG compression did I really start to use it as a tool. I had to relearn it, of course. Turns out there are many applications in general image recognition. Great. Now the math got useful and was indeed needed.
There are some "purely" mathematical tools, like adding zero, multiplying by one, or logic, that allow for further transformations. But I would argue that they don't help to solve a problem so much as they help to verify the solution.
But some thousand years ago someone must have decided to make mathematical notation specifically unreadable when expressed with ASCII symbols. Really don't like that guy.
But then again, without him we probably would miss a lot of what makes programming elegant, like functions as f(x).
In an alternate timeline, we might all be using Arabic mathematical notation.
And they are called papers.
However, in this article I am not advocating that mathematical thinking means using one-letter notation. I simply follow that convention for standard things like functions.
Shake up your thinking about math: "Iconic Math" http://iconicmath.com/
Math-first programming: CQL - Categorical Query Language
> The open-source Categorical Query Language (CQL) and integrated development environment (IDE) performs data-related tasks — such as querying, combining, migrating, and evolving databases — using category theory, a branch of mathematics that has revolutionized several areas of computer science.
Thinking about physics math in computer language: "Structure and Interpretation of Classical Mechanics" by Gerald Jay Sussman and Jack Wisdom with Meinhard E. Mayer.
I'm not sure if this is true. Harold Abelson draws a distinction between mathematics as the study of truth and computing as the study of process. It seems to me that these really are different things, and that mathematics is the "natural language" not of computation but of truth and patterns. But of course process (computing) can only happen within the boundaries of mathematical truths and patterns.
https://www.youtube.com/watch?v=2Op3QLzMgSY (the first few minutes)
They still approach computation using mathematical reasoning methods. Note how they define car and cdr and how they approach problems in those videos.
I believe Abelson and Sussman use the kind of mathematical reasoning I am talking about in all their work. SICP being a prime example.
Math is the world without abstraction leaks; programming is the act of plugging those leaks.
I do think starting with basic architecture and design is a good idea, but it's important to jump in and test your assumptions before you become too attached to them. In my mind the ideal flow goes something like this:
1. Short design/modeling/architecture/etc. session.
2. Test assumptions by hacking together a quick prototype.
3. Revamp design.
4. Implement more robust prototype.
5. Iterate as necessary.
Before I was a programmer I studied mechanical engineering. There is a lot of maths in that and the closest thing you get to programming is Control Engineering.
What is the meaning of the terms below (possibly in various contexts)?
- information hiding
How do these relate to and differ from each other?
I feel these get conflated a lot. Would be nice to tell them apart.
Great coders craft code that looks like the problem domain.