
Think in Math, Write in Code - selff
https://justinmeiners.github.io/think-in-math/
======
warent
There's an unpopular, perhaps seemingly contradictory, opinion that I have
regarding this, because this isn't the first time I've seen this topic brought
up: mathematics and programming are not really all that related to each
other, and I think there's an overemphasis on the importance of math in
programming for 99% of applications.

Sure, mathematical thinking can be useful, but it's only one type of logical
thinking among many types which can be applied to programming.

I've been programming so much for so long now that before I even start writing
code my mind launches into an esoteric process of reasoning that I'm not
confident would be considered "thinking in math" since I'm not formally
skilled in mathematics. It's all just flashes of algorithms, data structures,
potential modifications, moving pieces, how they all affect each other and
what happens to the entire entangled web when you alter something.
Fortunately, my colleagues are often pleased and sometimes even impressed with
my code, and yet I'm not so sure I would consider my process "thinking in
math."

So, this isn't necessarily a direct refutation to the article. In fact, maybe
what I'm talking about is the same thing as what this article is talking
about. But, anyway, my point is that I feel there are more ways to think
about problems and solutions than pushing the agenda of applying formal
mathematics.

As an aside, I noticed this part of the article:

 _"Notice that steps 1 and 2 are the ones that take most of our time,
ability, and effort. At the same time, these steps don't lend themselves to
programming languages. That doesn't stop programmers from attempting to solve
them in their editor"_

Is this really a common thing? How can you try to implement something without
first having thought of the solution?

~~~
geomark
I've been thinking about this quite a bit, but coming from a different angle.
I've been helping at my kid's school with coding clubs for primary students.
When teachers are recruiting for the coding club they always mention the
students who are good at math as good candidates. But what I have noticed is
that the students who do the best in coding are more often musically inclined
or linguistically talented. It seems to me to make some sense. The ones who
can parse and understand languages at an early age might also have the
aptitude for programming. It's a small sample size, a few dozen students.
Still seems kind of interesting.

~~~
Junk_Collector
It's also worth considering that you don't really learn math in primary
school so much as you learn numbers and computation, so when a teacher says a
child is good at math they usually mean good at numbers. Very few teachers
understand math well enough to identify who would be good at it and a lot of
unfortunate students find this out when they get to college.

~~~
geomark
Sure, I can see that. But isn't it similar with language and other subjects?
You just learn the basics, nothing deep, no turns of phrase, little
expressiveness.

Perhaps there is little correlation between those who excel at coding at a
young age and those who go on to be good programmers when they get older. I
just find it interesting that at this young age I see a correlation between
coding skills and language skills more than math (really just arithmetic)
skills.

Another observation: we did the Hour of Code activity last December with Year
2 to Year 6 students (equivalent to Grade 1 to Grade 5 in the US). In each
group there were one or two students who really stood out, and every one of
them was a girl. It's a small sample size of only about 100 students, so
maybe I shouldn't read too much into it, but I do wonder what is going on here.

~~~
justinmeiners
Math is somewhat unique in that the high-school and early college version is
not at all representative of the real thing. It's not "just a taste"; it's
qualitatively different.

As the other comment above mentioned, I think this has to do with education of
the teachers. Very few teachers know what math is either.

------
pgcj_poster
I was surprised to learn that I really enjoy coding, whereas I rather dislike
math. I feel like this article might resonate with some people, but not with
me.

> Programming languages are implementation tools, not thinking tools. They are
> strict formal languages invented to instruct machines in a human-friendly
> way. In contrast, thoughts are best expressed through a medium which is free
> and flexible.

I don't find math to be "free and flexible," at least, not compared with
prose. It's more of an uncomfortable middle ground. When I write code, the
computer forces me to be 100% precise and will spit errors at me as soon as I
do something wrong. When I write prose, I can sort of proceed however I like
within the very broad allowances of English grammar. But when I write math, I
feel like I don't know what's allowed and what isn't, what I have to prove and
what I can take for granted, what I have to define and what I don't, etc.

> Just as programming languages are limited in their ability to abstract, they
> also are limited in how they represent data. The very act of implementing an
> algorithm or data structure is picking just one of the many possible ways to
> represent it. Typically, this is not a decision you want to make until you
> understand what is needed.

I disagree pretty strongly on this point. I find that implementing the
structures involved in a problem almost always gives me a better understanding
of it and helps me find the solution.

~~~
FiberBundle
> When I write code, the computer forces me to be 100% precise and will spit
> errors at me as soon as I do something wrong.

This is similar to math though, with the exception that you don't have
something or someone telling you that you made a mistake, but you have to
seriously question every step in your proof by applying the rules of logic and
your knowledge. At a certain point you develop sufficient intuition to spot
steps that might be wrong. Most mathematicians skip steps they are not
completely confident are true, assume them for the time being, and continue
to see whether their derivation leads to what was to be proved, only later
checking the steps they had doubts about. It's more
similar to programming than you think. If you like the logical part and
algorithmic thinking involved in programming, I'm sure you would also enjoy
math if you'd give it a real chance.

~~~
naturlich
Indeed, I'd recommend the "interesting proof" vignettes by 3Blue1Brown:
[https://www.youtube.com/watch?v=OkmNXy7er84&list=PLZHQObOWTQ...](https://www.youtube.com/watch?v=OkmNXy7er84&list=PLZHQObOWTQDPSKntUcMArGheySM4gL7wS)

These are my favorite videos of his.

------
pron
As the author notes, Leslie Lamport makes much of the same point, but more
rigorously. You can find it in many of his writings, e.g.
[http://lamport.azurewebsites.net/pubs/state-
machine.pdf](http://lamport.azurewebsites.net/pubs/state-machine.pdf)

Lamport's TLA+ makes this formal. It is a language based on _simple_
mathematics + some temporal logic for reasoning about discrete systems
(software, hardware) as well as hybrid discrete-continuous systems, and is
increasingly used in industry to lower the cost of software development
(Amazon, Microsoft, Oracle and others). The idea of directly representing the
relationships between abstractions and their implementation is the organizing
principle of TLA+. For example, no program in any programming language (at
least not in its runnable portion) can directly express Quicksort, as even
though its specification
([https://en.wikipedia.org/wiki/Quicksort](https://en.wikipedia.org/wiki/Quicksort))
is complete, _none_ of its steps is deterministic enough to be conveyed to a
computer; the best a programming language can do is describe one particular
implementation (/realization/refinement) of Quicksort. In TLA+, you can
specify Quicksort itself precisely, and then show that a particular sorting
program is indeed an implementation of Quicksort.

~~~
svieira
> and then show that a particular sorting program is indeed an implementation
> of Quicksort.

I wasn't aware that TLA+ made this part possible. How do you map from TLA+ to
C++ (for example) with certainty?

~~~
pron
Oh, you usually specify at multiple levels of detail in TLA+ and relate those
levels to one another; they're related to code only informally. But if you
want a formal relation to code, you could compile your program to TLA+ (e.g.
[http://tla2014.loria.fr/slides/methni.pdf](http://tla2014.loria.fr/slides/methni.pdf)).

... But you probably don't really want to do that. Code-level verification
using _any_ "deep specification" tool (TLA+, Coq, Isabelle, Lean, F* etc.) is
_extremely_ limited in scale compared to specifying in TLA+ at a higher level.
Because there is no known way to directly verify programs larger than several
thousand lines affordably, and because that's precisely the kinds of programs
that most engineers need to verify most of the time, it's far more common to
use TLA+ at a higher-than-code level.

------
UK-AL
I honestly believe the majority of the problems in the industry come from the
refusal to treat programming as mathematics.

Everything has to be "easy", so anyone can understand from a basic level.

It's one of the reasons we don't like verifying software using TLA+, Coq
(proofs for programs), refinement types, or functional programming techniques.
"It's too difficult for the average programmer."

The first excuse is that we don't have enough time to do things "right". When
I bring up metrics on how much time we waste fixing bugs after the fact, it
moves on to the "too difficult" argument.

The result is we have an entire ecosystem full of buggy unreliable software.

~~~
TrackerFF
Well, from a practical standpoint, it's more desirable to put out a partially
working system fast, which can be improved upon as it is live, than to spend
too much time planning and implementing a fully working system. Note: by
partially and fully, I simply mean systems with more or fewer bugs.

Rarely do companies and startups have the luxury to just lean back and take
their time. It's a race against competitors, and everyone wants the advantage
of being first.

I don't blame the devs, as much as I blame the market. People start taking
shortcuts when they're judged by how fast they can crank out code, and
whether they can finish their sprints on time. You develop a culture of
constantly putting out fires.

Hell, for some consulting firms this is a profitable business model: Deliver a
partially working product, then spend 10-15 years patching it up, on your
client's bill.

~~~
hwayne
This is why I usually pitch formal specification as "you build your program
faster and spend less time debugging it later": framing it as a cost-saving
measure rather than a "well ya GOTTA be correct" measure.

------
heinrichhartman
I spent more than 10 years in academia doing math. One thing I loved about
that time is that I was able to sit down anywhere and pound at my current
research problems, without any additional notes or books with me. Just pen
and paper.

Formal reasoning feels very empowering.

You write down assumptions, apply transformations, arrive at conclusions.
Prove one lemma at a time. Work through some examples. Eventually you arrive
at a deeper understanding of the problem, and maybe even a solution.

When I started working in software, I largely lost the ability to reason
formally about the things I am doing. I also need manuals and computers
around to make progress. This still frustrates me today, and I have tried
hard to reason formally about code to regain that experience:

\- Doing Lambda Calculus by hand is possible.

\- LISP is already tedious (scoping rules/mutable state!).

\- Register machines are a nightmare to do by hand (See Knuth's books)

\- The semantics of C are nearly impossible to write down by hand, and work
with.

I am excited about this blog post, because it shows a new way of approaching
the situation. Model the problem domain with mathematical language, and
leverage the Lemmas/Theorems in your implementation.
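
For a tiny illustration of leveraging a theorem in an implementation (a
minimal sketch of my own, not from the post): Euclid's lemma that gcd(a, b) =
gcd(b, a mod b) does all the reasoning, and the code merely transcribes it.

    ```python
    def gcd(a: int, b: int) -> int:
        """Euclid's lemma gcd(a, b) == gcd(b, a % b), applied until b == 0."""
        while b:
            a, b = b, a % b
        return a

    print(gcd(252, 105))  # 21
    ```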

Some food for thought. Thanks Justin!

~~~
justinmeiners
Glad you liked it.

I mentioned in another comment, I am really emphasizing modeling the problem
mathematically, not really using formal programming methods like lambda
calculus. Sounds like you got my idea!

------
stcredzero
_Some even defend a form of Linguistic Determinism that thinking is limited to
what is typable._

Expressible conscious thought is limited to just a bit beyond what is readily
typed or spoken. Not all thought is conscious. Just about every take-home test
in grad school, I'd wrestle with the problem, go to bed thinking I was going
to flunk, then find myself writing down the answers over breakfast.

There's the "tip of the tongue" experience, where you know you should be able
to know something, but can't quite get it out. This not only happens with
memory. It also happens with problem solving. This tells me that there's also
unconscious thought and inexpressible thought we are conscious of.

------
codr7
It's not math, though.

It's abstract reasoning, as is math. And that's about as deep as the
similarities go.

Math can't deal with imperfect input and/or side effects without turning into
something else. And code without real world side effects is useless, as is
real world perfection.

~~~
pron
> Math can't deal with imperfect input and/or side effects without turning
> into something else.

This is just not true. Differential equations certainly deal with change over
time (except it's not called "side effects" there) and imperfections. The
analog for discrete systems is temporal logic.

~~~
sixbrx
It's not called side effects there because there are none, in the modern
mathematical formulation at least. The goal is to identify solution functions,
and those are just sets of ordered pairs or something similarly inert.

~~~
pron
Except that even in programming, the name "side-effect" is not too clear. It
stems from the fact that subroutines sometimes behave a bit like functions,
and when they don't, we call the added behavior a "side effect". If you want
to be mathematically precise, subroutines are predicate transformers, and,
again, there are no "side effects" (This is a mistake functional programmers
make when they speak of referential transparency; most languages are
referentially transparent, and that was the whole point of the paper that
first introduced the term to programming. They are only not referentially
transparent with respect to an incorrect semantic model). The philosophical
difference between a subroutine with side effects and a derivative is small.

------
m3at
_"1. Identify a problem. 2. Design algorithms and data structures to solve
it. 3. Implement and test them. ... In practice, work is not so well
organized as there is interplay between steps. You may write code to inform
the design."_

As the author acknowledges, real life rarely allows such a clean division.

One tool that I find very useful to interleave the three - or at least to
allow shorter loops - is jupyter notebooks. The name is quite accurate, it can
be used as a notebook to come up with solutions, and can easily be discarded
once used. Unlike prototype code which has a tendency to evolve into the final
codebase.

It's a common tool in data science but I'm not sure about other fields. Has
someone used it for other purposes?

[1] [https://jupyter.org/](https://jupyter.org/)

Edit: formatting

~~~
wa1987
On a similar note, this looks interesting as well (not affiliated):
[https://observablehq.com/](https://observablehq.com/)

------
hannofcart
I think the total programming man-hours spent on pure math computation are
dwarfed by the much larger number spent on menial computation: fetching data,
transforming it, and sending it forward/rendering some output. I suppose this
article is targeted at the privileged former group.

~~~
0815test
Just because some tasks are 'menial', doesn't mean they don't involve math at
some level. You still want to get your data transforms right, and make sure
that what you're fetching/sending over is what these other services expect.

~~~
jhbadger
Indeed. Many people are turned off by reading Knuth because his "Art of
Programming" deals with tasks, such as sorting, that are viewed as menial and
(almost) never implemented by hand these days. But there was a lot of math
involved in figuring out how to make such tasks as efficient as possible, as
Knuth shows. Theoretical computer science _is_ math (but then so is much of
theoretical anything).

------
arendtio
IMHO, the biggest problem with math is the complex syntax. I mean, you can't
just read it like a normal language, but instead, you have to know exactly
what each symbol means and in which order they have to be evaluated.

It feels a bit like having a programming language that uses unique emojis as
function names.

Personally, I find it much easier to read code (from a high-level programming
language), than to read math formulas.

~~~
heinrichhartman
Oftentimes people make the mistake of jumping directly to the formulas when
reading mathematical texts. You will not have enough context to parse them!
Instead, think of a formula like a very dense sentence in a novel.

The protagonists are usually introduced in the paragraph before. You are
assumed to know their names to make sense of the formula.

------
KoenDG
I must have some deep-rooted, maybe repressed, anger towards maths.

I can program just fine, write functions left and right, receive data, do
something with it, return a result...

But in maths? Oh dear god no I couldn't write a math function on a piece of
paper to save my life.

Someone recently pointed out the parallels and while I couldn't deny it, as it
was plain as day... I never once considered it during all these years.

Kinda scary.

~~~
codebje
The most common intersection, IMO, is inductive reasoning. If you can write a
working recursive function, you are also writing an inductive proof. The only
explanation I have for why people think there's either no link to math, or
that they can program but can't do math, is a cultural sledgehammer smashed
into our collective skulls while we're young that tells us we can't.
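
The recursion/induction parallel can be made concrete (an illustrative sketch
with a hypothetical function, not from the comment):

    ```python
    def triangle(n: int) -> int:
        """Sum 0 + 1 + ... + n; the recursion mirrors a proof by induction
        that triangle(n) == n * (n + 1) // 2 for all n >= 0."""
        if n == 0:
            return 0                # base case: the claim holds for n = 0
        return n + triangle(n - 1)  # inductive step: assume it holds for n - 1

    print(triangle(10))  # 55, which equals 10 * 11 // 2
    ```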

This is a shame, because we spend a lot of effort in deliberately avoiding
common abstractions in case it scares programmers away, but really we're just
making it harder for everyone to learn all these things. We either eventually
recognise the underlying pattern through our brain's capacity to extrapolate
from examples (hard, slow), or learn the pattern explicitly and recognise it,
or often never learn the pattern and have to learn each new language's way of
doing things one by one.

And of course when we either don't know or deliberately avoid effective
abstractions we make or propagate mistakes. See Java's first crack at Future,
for example, which was practically useless since the only thing you could do
with a Future was wait for it to be the present.

------
deadgrey19
Thinking in math first has been the catch cry of functional programmers (and
their formal logic/verification friends) for decades. And there's nothing
wrong with it, unless the problem you are trying to solve actually requires
performance. Then, you have to think in "system" first. For example: Write a
program that captures network packets and stores them to disk _as fast as
possible_. There's no maths to think of here. The complexity is all in the
"fast" part, and to solve that, a deep understanding of the architecture of
the system is necessary. Fancy algorithms (maths) will not help you here. e.g.
Will compression help? Depends on the properties of the system. Will batching
help? Depends on the system. Will threading help? Depends on the system.

~~~
chongli
What you're describing is a very limited view of math, resembling the general
public view of math as being algebra, trigonometry, geometry, and calculus;
that is, all the math people are exposed to in secondary school. Look further
and you'll see disciplines such as mathematical logic, combinatorics, and
graph theory, without which you wouldn't have networking or binary or
computers at all, really.

I don't know what you mean by "fancy algorithms", but all algorithms that run
on your computer have a basis in math.

As for making things go as fast as possible, well that's a special case of the
field of optimization, another mathematical discipline.

~~~
munificent
_> that's a special case of the field of optimization, another mathematical
discipline._

I'm not sure how you can claim that the entire field of optimization is a
"mathematical discipline". Algorithm analysis is, I suppose, but most other
practical optimization work has little if anything to do with math.

When I've spent time doing optimization work, it has often involved things
like:

* Discovering some API I'm using is particularly slow and finding an alternate one that's faster.

* Adding caching.

* Reorganizing code to take advantage of vector instructions. (Well, I haven't personally done this, but I know it's a thing many others do.)

* Reorganizing data to improve CPU cache usage.

* Evaluating some things lazily when they aren't always needed.

* Making objects smaller to put less pressure on the GC.

* Inlining functions or switching some functions to macros to avoid call overhead.

* Tweaking code to get some values into registers.
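
As a small illustration of the caching item (a hedged sketch; the slow
function here is hypothetical), Python's standard library makes the simplest
form a one-liner:

    ```python
    from functools import lru_cache

    @lru_cache(maxsize=1024)
    def expensive_lookup(key: str) -> str:
        # stand-in for a slow computation or remote call (hypothetical)
        return key.upper()

    expensive_lookup("a")  # computed on the first call
    expensive_lookup("a")  # served from the cache on the second
    print(expensive_lookup.cache_info().hits)  # 1
    ```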

~~~
tonyarkles
I'm not sure why you're being downvoted, but I agree with all of your points.
Those aren't things that really lend themselves well to mathematical
modelling. But... there is a huge field of math that _does_ apply to this:
statistics.

The first two cases are somewhat special:

\- It may be immediately obvious that an API is terrible, and that the
replacement is not. If API 1 takes 1 sec to call, and API 2 takes 100ms to
call, easy choice without stats.

\- Caching can be dangerous. While not really a stats problem, you do need to
have a really solid model of what is getting cached, and how to know when to
invalidate those cache entries.

For the rest of the examples you provided, you're making changes that may make
the problem better, may have no effect, or may make the problem worse. You
absolutely need to use statistics to determine whether or not changes like
those are actually having an effect. Performance analysis is part math and
part art, and without the math background, you're likely going to be spinning
your wheels a bunch. Beyond stats, fields like queuing theory are going to
make a huge impact when you're doing performance optimization in distributed
systems.
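
To make the statistics point concrete (a rough sketch with made-up timings):
Welch's t statistic is one standard way to ask whether two sets of benchmark
runs actually differ, or whether the "speedup" is just noise.

    ```python
    import statistics

    def welch_t(a, b):
        """Welch's t statistic for two independent samples of timings."""
        ma, mb = statistics.mean(a), statistics.mean(b)
        va, vb = statistics.variance(a), statistics.variance(b)
        return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

    before = [10.2, 10.4, 9.9, 10.1, 10.3]  # made-up timings in ms
    after = [9.6, 9.8, 9.5, 9.7, 9.9]

    # |t| well above ~2 suggests a real effect rather than run-to-run noise
    print(welch_t(before, after) > 2.0)  # True
    ```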

------
_zachs
I disagree with the author’s points that you can’t represent a graph with a
single interface. Write an interface with the methods you need, i.e.,
`findNeighbors()`, `getAllFriends()`, `retrieveDogPhotos()`, and implement it
for the specific underlying data structure/implementation of Graph you choose
later. A graph is a graph, the interface doesn’t care if you’re using an
adjacency matrix or a node. I don’t see how writing a mathematical function
would be more robust and helpful in designing and implementing those features.
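
A minimal sketch of the interface idea (class and method names are
hypothetical, simpler than the ones in the comment): the caller sees only
`neighbors`, regardless of the underlying representation.

    ```python
    from abc import ABC, abstractmethod

    class Graph(ABC):
        """The interface the caller programs against."""
        @abstractmethod
        def neighbors(self, v: int) -> set[int]: ...

    class AdjListGraph(Graph):
        def __init__(self, edges):
            self.adj = {}
            for a, b in edges:
                self.adj.setdefault(a, set()).add(b)
                self.adj.setdefault(b, set()).add(a)

        def neighbors(self, v):
            return self.adj.get(v, set())

    class AdjMatrixGraph(Graph):
        def __init__(self, n, edges):
            self.m = [[False] * n for _ in range(n)]
            for a, b in edges:
                self.m[a][b] = self.m[b][a] = True

        def neighbors(self, v):
            return {u for u, linked in enumerate(self.m[v]) if linked}

    edges = [(0, 1), (1, 2)]
    # Both representations answer the same question the same way.
    print(AdjListGraph(edges).neighbors(1) == AdjMatrixGraph(3, edges).neighbors(1))  # True
    ```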

~~~
justinmeiners
Can you represent Facebook's graph of friends as an interface?

Do you think they also do a lot of graph theory about it?

~~~
_zachs
`getFriends(userId)`, how's that? :D

------
analog31
I'm not employed as a programmer, so I'm somewhat detached from the business.
I've noticed that the programmers and other kinds of engineers tend to split
off into two different camps. There are "qualitative" engineers who are good
at arranging and organizing things, and fitting things together. There are
"quantitative" engineers, who solve problems through the application of math
and science principles.

A small difference along one of these directions amplifies itself over time,
as problems are assigned based on aptitudes and interests, resulting in a
growing division. And I don't know if there's a consistent ratio across
industries or businesses, but at the shop where I work, it's roughly 90%
qualitative, and 10% quantitative. Any problem involving math is brought to
the small handful of "math people." They end up being busy enough with just
that kind of work and nothing else. If a project runs into a math problem, it
grinds to a halt until one of the math people can spare some attention.

Likewise, the qualitative folks are perpetually busy too. Nobody is short of
work because of gaps in their abilities. So I think it's quite fair to say
that a programmer can do without math, if they find the right niche, but a
programming _project_ might need one or two math people from time to time.

------
stared
In that line, I started "Thinking in tensors, writing in PyTorch"
[https://github.com/stared/thinking-in-tensors-writing-in-
pyt...](https://github.com/stared/thinking-in-tensors-writing-in-pytorch).

------
enriquto
I do that. Then people hate me because all my variable names are single
letters...

At least for functions, use self-documenting names instead of "f" and "g" as
in math.

~~~
onion2k
_I do that. Then people hate me because all my variable names are single
letters..._

Then you're thinking in math and also writing in math, so you're getting the
second bit wrong. You need to write in code, and code should be optimized for
readability.

~~~
skohan
I think single letter variable names are fine within the scope of a single
function. Especially if I'm doing something with a lot of math, like a
lighting calculation, the computations are often more readable with short
variable names. I'll also usually give a detailed description of what the
variable is in comments for clarity.
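
Something along these lines, say (a hedged sketch of a Lambertian diffuse
term, not skohan's actual code): the names are short, but each is documented
where it's introduced.

    ```python
    def diffuse(n, l, c, k):
        """Lambertian diffuse lighting.

        n: unit surface normal          l: unit vector toward the light
        c: light colour (r, g, b)       k: diffuse reflectance of the surface
        """
        # d = n . l, clamped to zero: how directly the light hits the surface
        d = max(0.0, sum(ni * li for ni, li in zip(n, l)))
        return [k * ci * d for ci in c]

    # Light directly above a horizontal surface: full contribution.
    print(diffuse((0, 0, 1), (0, 0, 1), (1.0, 1.0, 1.0), 0.5))  # [0.5, 0.5, 0.5]
    ```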

~~~
onion2k
_I think single letter variable names are fine within the scope of a single
function._

I don't. One of the things I like most about eslint is the id-length rule -
[https://eslint.org/docs/rules/id-length](https://eslint.org/docs/rules/id-
length)

~~~
skohan
I think it depends. In the context of a complex mathematical formula, where
you have many intermediate results with very abstract meanings, I think it's
more clear to use a single character name than to try to be descriptive with
something like:
`productOfLuminanceAndDotProductOfSurfaceNormalAndLightDirectionDividedByScatteringConstant`.

I think there are a lot of advantages to abiding by styling conventions and
using linters to set a baseline, but there are always exceptions to the rule.

~~~
onion2k
_productOfLuminanceAndDotProductOfSurfaceNormalAndLightDirectionDividedByScatteringConstant_

Note that the id-length rule also allows you to limit the _maximum_ length of
a variable name. This is why.

~~~
skohan
I understand that, but what in your mind would be a good replacement to name
such a complex, abstract value? A long acronym perhaps? I don't see how that's
any more clear or less arbitrary than a single character name.

------
codereflection
Related: Jon Skeet blogged about the concept of programming "in" a language vs
programming "into" a language back in 2008.

[https://codeblog.jonskeet.uk/2008/04/23/programming-quot-
in-...](https://codeblog.jonskeet.uk/2008/04/23/programming-quot-in-quot-a-
language-vs-programming-quot-into-quot-a-language/)

------
agentultra
I think there are languages and tools that mathematically minded programmers
will find accessible enough and useful to aid them in their thinking.
Dependently-typed programming languages such as Lean [0] and Agda [1] are both
expressive enough to search for proofs to theorems and practical enough to
execute programs.

And in the design space when we're thinking about problems of concurrency or
liveness there are great tools like TLA+ that take a pure mathematical model
and automate the checking that it satisfies our expectations. [2]

It's not all figures and drawings these days! I see maths and engineering
integrating more closely in the future.

[0] [https://leanprover.github.io/](https://leanprover.github.io/)

[1] [https://github.com/agda/agda](https://github.com/agda/agda)

[2]
[https://lamport.azurewebsites.net/tla/tla.html](https://lamport.azurewebsites.net/tla/tla.html)

------
bcrosby95
> As you read this example, I think your tendency may be to think that its
> statements are obvious.

Not to me. The statements suffer from a common math problem: using single
letters. If the names were better, maybe they would be obvious. But instead I
have to keep referring back to earlier definitions to remember what they were.

I have to do something similar with p(1) and p(2) - I need to make sure my
memory is correct on which data is in which place. If you could reference them
in a more obvious way that would help.

I also have to make an assumption from the very start: what "t" refers to
only becomes obvious in definition 3, even though it was used in definitions
1 & 2.

It's ironic that the article recommends thinking in math and writing in code
when they have thought in math and written in math.

------
dr0wsy
How do I express models of programs on paper when they don't easily translate
into math? I understand that the parts of programs where you have a formula
beforehand (e.g., converting between Celsius and Fahrenheit) are easily
expressed as a mathematical formula before implementing them. But how should
I express I/O?

Often when I have a problem that isn't easily expressed in mathematical
notation (albeit given my limited knowledge of math), I usually have a good
idea of how I could express it in code.

When I write pseudo code, it often feels like I already have the code in my
mind before I describe it in plain English. That feels like a waste of time,
so pseudo code doesn't feel like a great tool for expressing models of my
programs.

~~~
jcora
Math isn't formulas. The part of math that is most useful for programming is
algebraic structures. Many "patterns", conventions, frameworks, etc., are just
bastardizations of mathematical patterns. What really matters is the
composition of concepts, and what better way to think about that than
mathematically?

So if you want your programming to reap the benefits of (others', mostly)
mathematical reasoning--use a functional language that is all about expressing
the ways in which things compose!

IO is modelled pretty well through monads. As are many other things, like
nondeterministic processes, exceptions, state, etc.

~~~
dr0wsy
> So if you want your programming to reap the benefits of (others', mostly)
> mathematical reasoning--use a functional language that is all about
> expressing the ways in which things compose!

I do prefer using functional languages over imperative ones, but I don't
understand how that is connected to modelling software on paper.

> IO is modelled pretty well through monads. As are many other things, like
> nondeterministic processes, exceptions, state, etc.

Isn't there an easier way to describe I/O than with category theory? I
understand that it's _possible_ to model IO with monads, but how would I
communicate it to colleagues who don't have a background in category theory
(or to myself, for that matter)?

Part of the benefit of modelling a program on paper should be to make
communication easier. And requiring the people you communicate with to have
knowledge of category theory to understand your design feels silly.

I probably misinterpret you, so could you please give a more detailed
explanation or example?

~~~
naturlich
You don't need to know category theory. You just need to understand a few
things that came from it, like `Maybe`, which is the most obviously useful and
trivially easy monad. Just forget about the fact that it's a monad and any
general rules about monads and just learn how to use `Maybe`.
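
In Python terms (a loose sketch; `None` stands in for `Maybe`'s "nothing",
and `and_then` is a hypothetical helper): the one habit that matters is
stopping at the first missing value instead of crashing.

    ```python
    def and_then(value, f):
        """Maybe-style chaining: stop at the first None."""
        return None if value is None else f(value)

    users = {"bob": {"address": {"city": "Springfield"}}}

    city = and_then(
        and_then(users.get("bob"), lambda u: u.get("address")),
        lambda a: a.get("city"),
    )
    print(city)  # Springfield

    # A missing user short-circuits safely instead of raising AttributeError.
    print(and_then(users.get("alice"), lambda u: u.get("address")))  # None
    ```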

From there, input handling is just a state machine. Easy to draw on paper as a
graph.

> Part of the benefit of modelling a program on paper should be making
> communication easier, and requiring the people you communicate with to have
> knowledge of category theory to understand your design feels silly.

This is an odd complaint, because you could equally say: requiring the people
you communicate with to have knowledge of (state machines | graphs | `if`
statements | ...) to understand your design feels silly.

~~~
dr0wsy
> You don't need to know category theory. You just need to understand a few
> things that came from it, like `Maybe`, which is the most obviously useful
> and trivially easy monad. Just forget about the fact that it's a monad and
> any general rules about monads and just learn how to use `Maybe`.

I didn't convey my question clearly. What I'm wondering is how I should
express programs, or parts of them, using mathematical notation when I don't
see them as mathematical in nature to begin with.

An example:

    def hello(name):
        if name == "Bob":
            print("Hello, " + name)
        else:
            print("Hello, World!")

How could I express this easily using mathematical notation?

It just feels weird that, to convey this simple program on paper, both I and
the person I'm trying to communicate with would need a grounding in category
theory.

Hope you're able to understand my question. :)

~~~
naturlich
That was the second part of my answer: a graph on paper. Draw nodes of
execution and represent the branch as two arrows coming from the node and
heading to new nodes. Label the arrows with the predicates of the branch.

This is both definitely useful and definitely math.
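
To make that concrete, here is a hypothetical sketch of the `hello` function
above as exactly such a graph: nodes of execution, edges labelled with the
branch predicates, plus a tiny interpreter that walks it:

```python
# Control flow of hello(name) as a labelled graph: each node lists
# (predicate, target) edges; leaf nodes map to the actions they perform.
def hello(name):
    graph = {
        "start": [
            (lambda n: n == "Bob", "greet_bob"),
            (lambda n: True,       "greet_world"),  # the 'else' edge
        ],
    }
    actions = {
        "greet_bob":   lambda n: "Hello, " + n,
        "greet_world": lambda n: "Hello, World!",
    }
    node = "start"
    while node in graph:  # follow the first edge whose predicate holds
        node = next(target for pred, target in graph[node] if pred(name))
    return actions[node](name)

print(hello("Bob"))    # Hello, Bob
print(hello("Alice"))  # Hello, World!
```

On paper you would just draw the two arrows out of `start`, labelled
`name == "Bob"` and `otherwise`.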

~~~
dr0wsy
> That was the second part of my answer: a graph on paper. Draw nodes of
> execution and represent the branch as two arrows coming from the node and
> heading to new nodes. Label the arrows with the predicates of the branch.

Thanks! You don't happen to have some learning resources on modelling
programs as state machines, do you? I can't find anything when I search.

~~~
naturlich
This is more of a practical writeup on state machines than a theoretical one:
[http://gameprogrammingpatterns.com/state.html](http://gameprogrammingpatterns.com/state.html)

You might also know these as flowcharts when applied to program flow control.
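
A minimal, hypothetical version of the pattern that writeup describes (the
states and events here are made up): input handling as a table from
(current state, input event) to the next state:

```python
# Input handling as a finite state machine: a transition table from
# (current state, input event) to the next state.
TRANSITIONS = {
    ("standing", "press_down"):   "ducking",
    ("ducking",  "release_down"): "standing",
    ("standing", "press_jump"):   "jumping",
    ("jumping",  "land"):         "standing",
}

def step(state: str, event: str) -> str:
    """Return the next state; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "standing"
for event in ["press_jump", "land", "press_down", "release_down"]:
    state = step(state, event)
print(state)  # standing
```

The table is the drawing: each key is an arrow in the state diagram.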

I don't think I answered the original question well. I just wanted to point
out that math is more than just equations. Others have done a much better job
in this thread.

------
runeks
I agree that math is a useful tool in programming, but domain knowledge always
trumps math knowledge. Case in point:

> Recently, I worked on an API at work for pricing cryptocurrency for
> merchants. It takes into account recent price changes and recommends that
> merchants charge a higher price during volatile times.

You shouldn’t look at the past price for a cryptocurrency to determine the
current one. If few trades happen in a period, you’re using outdated data.

The price of interest for merchants who want to sell cryptocurrency is the
best bid (highest-priced buy order) in a market where fiat bids on crypto
(e.g. BTC/USD, ETH/USD). You should be downloading order book data from
exchanges (e.g. [1]), and quoting the best bid to merchants.

Also, what you call high volatility (of past trade prices) might simply be a
proxy for a large difference between the highest-priced buy order and the
lowest-priced sell order (a large “spread”). Instead of looking at the
volatility of past trades, I recommend monitoring the spread of the order
book of interest. Although this might not be all that relevant, since
merchants (who sell crypto for fiat) are only interested in the price they
can sell for, not the price they can buy for.
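
For illustration, a sketch with a made-up order book snapshot (the shape here
is illustrative only; check the exchange docs in [1] for the exact response
format):

```python
# Best bid, best ask, and spread from an order book snapshot.
# Bids are buy orders, asks are sell orders; prices are strings as
# many exchange APIs return them.
book = {
    "bids": [["9400.00", "1.2"], ["9399.50", "0.4"]],
    "asks": [["9405.00", "0.7"], ["9406.25", "2.0"]],
}

best_bid = max(float(price) for price, _size in book["bids"])
best_ask = min(float(price) for price, _size in book["asks"])
spread = best_ask - best_bid

print(best_bid)  # 9400.0 -- what a merchant selling crypto can get right now
print(spread)    # 5.0    -- a wide spread is the risk premium in question
```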

[1] [https://docs.pro.coinbase.com/#get-product-order-book](https://docs.pro.coinbase.com/#get-product-order-book)

------
floor_
I wish math whitepapers were written like code instead of in mathematical
notation.

~~~
raxxorrax
The syntax of mathematics hasn't evolved very well, in my opinion. It is clear
to people already familiar with the relations described, but often lacks
context for anybody else. Code doesn't suffer from this, since any context
must be explicitly defined. Absolutely nobody without a significant bag of
premises can read e=mc² (yeah, yeah, physics...) and understand the
correlation between mass and energy. It is just not happening. And for 99% of
readers this context is much more important, while the actual relations are
often trivial.

Additionally, I would clearly recommend learning math in English, since there
are a lot of synergies that can be really, really helpful in the jargon of
computer science. Even if it is just the little nudge that allows you to
connect problems.

Furthermore, you can unlearn a tool if you don't use it. I learned frequency
decomposition (Fourier analysis) in school and forgot about it two weeks
later. Only when I started to implement my own shitty JPEG compression did I
really start to use it as a tool. I had to relearn it, of course. It turns
out there are many applications in general image recognition. Great. Now the
math got useful and was indeed needed.

There are some "purely" mathematical tools, like adding zero, multiplying by
1, or logic that allows further transformations. But I would argue that they
don't so much help to solve a problem as help to verify the solution.

But some thousand years ago someone must have made the decision to make
mathematical notation specifically unreadable when expressed with ASCII
symbols. I really don't like that guy.

~~~
lioeters
Looks like Euler's your guy, responsible for much of modern mathematical
notation [0].

But then again, without him we probably would miss a lot of what makes
programming elegant, like functions as f(x).

In an alternate timeline, we might all be using Arabic mathematical notation
[1].

[0]
[https://en.wikipedia.org/wiki/Mathematical_notation#Modern_n...](https://en.wikipedia.org/wiki/Mathematical_notation#Modern_notation)

[1]
[https://en.wikipedia.org/wiki/Modern_Arabic_mathematical_not...](https://en.wikipedia.org/wiki/Modern_Arabic_mathematical_notation)

------
carapace
Hmm...

Shake up your thinking about math: "Iconic Math"
[http://iconicmath.com/](http://iconicmath.com/)

Math-first programming: CQL - Categorical Query Language

> The open-source Categorical Query Language (CQL) and integrated development
> environment (IDE) performs data-related tasks — such as querying, combining,
> migrating, and evolving databases — using category theory, a branch of
> mathematics that has revolutionized several areas of computer science.

[https://www.categoricaldata.net/](https://www.categoricaldata.net/)

Thinking about physics math in computer language: "Structure and
Interpretation of Classical Mechanics" by Gerald Jay Sussman and Jack Wisdom
with Meinhard E. Mayer.

[https://en.wikipedia.org/wiki/Structure_and_Interpretation_o...](https://en.wikipedia.org/wiki/Structure_and_Interpretation_of_Classical_Mechanics)

------
CaioAlonso
"The natural language which has been effectively used for thinking about
computation, for thousands of years, is mathematics."

I'm not sure if this is true. Harold Abelson draws the distinction[1]
between Mathematics being the study of truth and Computing being the study of
process. It seems to me that these really are different things and Mathematics
isn't the "natural language" to discuss computations, but rather truth and
patterns. But of course process (computing) can only happen within the
boundaries of mathematical truths and patterns.

[1]
[https://www.youtube.com/watch?v=2Op3QLzMgSY](https://www.youtube.com/watch?v=2Op3QLzMgSY)
the first few minutes

~~~
justinmeiners
I believe this is a comment about the interests of the fields, akin to the
differences between math and physics.

They still approach computation using mathematical reasoning methods. Note how
they define car and cdr and how they approach problems in those videos.

I believe Abelson and Sussman use the kind of mathematical reasoning I am
talking about in all their work. SICP being a prime example.

------
invalidOrTaken
What knits together "thinking in math" and "writing in code" is...human
effort. If the domain is mathy, then it won't take much. If it's not...

Math is the world without abstraction leaks; programming is the act of
plugging those leaks.

------
anderspitman
I might be misunderstanding here, but this sounds like it might lead to too
much up front design, which is a trap I'm naturally inclined to fall into.
Again maybe I'm misunderstanding. I found the example pretty opaque (granted I
gave up trying to understand it very quickly).

I do think starting with basic architecture and design is a good idea, but
it's important to jump in and test your assumptions before you become too
attached to them. In my mind the ideal flow goes something like this:

1\. Short design/modeling/architecture/etc session

2\. Test assumptions by hacking together a quick prototype

3\. Revamp design.

4\. Implement more robust prototype.

5\. Iterate as necessary.

------
davesmith1983
This sort of article seems to come up every so often. The vast majority of
programs are boring programs that are replications of business processes that
were on paper in the past. The vast majority of code I have written is "if the
user is from this country and they have an address show this screen". That is
business rules and I would wager outside of specialist fields that is the vast
majority of the work that programmers do.

Before I was a programmer I studied mechanical engineering. There is a lot of
maths in that and the closest thing you get to programming is Control
Engineering.

------
bluetwo
Good coders take a problem domain and turn it into what looks like code.

Great coders craft code that looks like the problem domain.

------
vikiomega9
If we start with the notion that everything is a model and that some are less
wrong, then being as close as possible to math simply means the models are in
some sense provably correct. Some models need more rigour in being proved as
such.

------
wa1987
This article reminds me of something I'm wondering about for too long now.

What is the meaning of the terms below (possibly in various contexts)?

\- abstraction

\- indirection

\- encapsulation

\- information hiding

How do these relate to and differ from each other?

I feel these get conflated a lot. Would be nice to tell them apart.

~~~
blain_the_train
the book "Elements of Clojure" covers this.

------
cutler
Mathematical thinking is only relevant to a subset of programming tasks. Your
typical CRUD web app has nothing to do with maths but functional-style data
processing may well fit the model. See DHH's distinction between engineers and
developers where he characterises Ruby/Rails programmers as writers, many of
whom come from an arts or humanities background.

------
ErotemeObelus
Programming is reasoning about complex closed systems. Mathematics is
reasoning about complex open systems (like the universe). Programming has more
in common with biology than it does with math!

------
gitrebase
I like that Python is quite close to my raw thoughts for simple problems. So
writing an algorithm in Python almost feels like writing pseudocode. Heck,
these days given an option between writing pseudocode and writing Python code,
I choose Python for simple problems.

Is there a similar programming language that makes mathematicians feel at
home? Something that makes them feel that they would rather write their
implementation in that language itself instead of writing it with math
notation on paper first?

~~~
mruts
I’ve always been in the weak Sapir-Whorf hypothesis camp that your tools of
expression influence and (sometimes) define your thoughts. A great example is
the idea of matrices in math. Matrices don’t allow you to represent anything
that a system of equations can’t. But it turns out that they are a very
helpful tool and let you abstract over the problem space, much in the same way
that higher order functions do.
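
A toy illustration of that point (plain Python, hypothetical `solve2` helper):
the same 2×2 system can be written as two equations or as one matrix, and the
matrix view gives you a single generic solver.

```python
# Solve a 2x2 linear system a.x = b via the closed-form 2x2 inverse.
# Equation view:  2x + y = 5,  x - y = 1
# Matrix view:    [[2, 1], [1, -1]] . [x, y] = [5, 1]
def solve2(a, b):
    (p, q), (r, s) = a
    det = p * s - q * r          # determinant of the 2x2 matrix
    x = (s * b[0] - q * b[1]) / det
    y = (p * b[1] - r * b[0]) / det
    return x, y

print(solve2([[2, 1], [1, -1]], [5, 1]))  # (2.0, 1.0)
```

The abstraction is that `solve2` knows nothing about the particular equations,
only about the matrix shape.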

It’s exactly like Paul Graham says, you might think that Python is just
allowing you to write executable pseudo-code, but the interaction isn’t so
simple.

I’ve programmed a lot of Python and when I first started out, I felt like it
was very frictionless, like you said. An easy way to put down thoughts. But as
I learned more about functional programming and type theory, I realized that
Python is inadequate and operates at too low a level, i.e. it feels like there's
so much friction there.

I have used a variety of languages professionally (Scala, Haskell, OCaml,
Racket, C, and Python mostly) and they all fall short (some more than others)
on what I feel like I should be able to express. But if I had to choose, I
would probably say OCaml or Racket come the closest to my thoughts, depending
on the problem.

Anyway, my point is that it’s not obvious how your tools affect the level and
abstraction of your thoughts. It’s almost always a bi-directional
relationship, and therefore, choosing (or making) the right tool and method of
abstraction is very important. See Beating the Averages[1]. PG talks about
a hypothetical language called Blub. Blub isn’t the best, but it’s not the
worst either. If there was a platonic form of Blub, it would most definitely
be Python.

[1][http://www.paulgraham.com/avg.html](http://www.paulgraham.com/avg.html)

~~~
gitrebase
Can you describe some attributes of OCaml and Racket that make them good
contenders for expressing one's thoughts?

Also, is it Racket specifically that makes it a good contender, or the fact
that it is a Lisp? Would any other Lisp, like Scheme, Clojure, or Common
Lisp, be equally good?

