
Translating math into code with examples in Java, Racket, Haskell, Python (2011) - signa11
http://matt.might.net/articles/discrete-math-and-code/
======
dreen
I dream of material that would explain some advanced (by a regular person's
understanding of the word) math concepts using Python. Maybe it's because I've
programmed more in my life than I've done math, but the "language" we use to
write it is absolutely crazy: one-letter variables everywhere, no structure,
thousands of custom notations, etc. I'm not criticising it; I realise it exists
this way for a reason, and if I wasn't so lazy I would have learnt it in
school. But with my 31-year-old brain oriented toward looking at a sequential
flow of code, it's quite difficult to learn math now.

~~~
enriquto
you will take single letter variables from my cold, dead, hands!

I'm more used to math than to code, and I find multiple-letter variables
ridiculous when not unreadable.

In ancient times, mathematicians used to write formulas as Latin
sentences. Even the simplest arithmetic result occupied a few lines of text.
Are you proposing that we go back to that age?

~~~
jerf
" Even the simplest arithmetic result occupied a few lines of text. Are you
proposing that we go back to that age? "

What if students learned "Force = mass * acceleration", and only later moved
to "F = ma" once they'd gotten tired of writing it, instead of smacking
students in the face with "F = ma" right out of the gate, to say nothing of
all the other stuff we smack them with right away?

~~~
enriquto
but even "force = mass * acceleration" uses those sneaky single-letter symbols
that are not even letters! In appropriate, user friendly notation it would be
"force equals mass multiplied by acceleration".

But this is not clear enough. What force? What acceleration? More precisely,
what is the direction of the acceleration? The original statement of this law
is less ambiguous: "Mutationem motus proportionalem esse vi motrici impressae,
et fieri secundum lineam rectam qua vis illa imprimitur. "

Or, in plain English: "The alteration of motion is ever proportional to the
motive force impress'd; and is made in the direction of the right line in
which that force is impress'd"

There! We are good to go. Now write the fundamental theorem of calculus in
English, readable by everybody, without those ugly, unreadable variables and
fancy symbols...

~~~
jerf
I'm not seeking abstract philosophical purity, or trying to make the mistake
of including a complete transitive concept closure in every equation [1]. I'm
seeking pragmatic, high-quality education that doesn't blow our students'
cognitive resources on stupid shit instead of the stuff I want them to learn.
By the time they get to physics, they know what multiplication and equality
are. It's just the "force", "mass", and "acceleration" that they don't know
yet, so we don't abbreviate them until they do.

[1]: AIUI, the actual problem the original "New Math" had. The original "New
Math" tried to start students with set theory, because that was the
foundations of mathematics, so obviously, the best place to start an
education, right? (Hint: No.) This is why parents said they didn't understand
"New Math". By comparison "Common Core" math is all-but-identical to what I
learned in school 30 years ago.

~~~
enriquto
I do not disagree with you. When you learn the stuff, it is perfectly OK to
explain equations with words, and even to write equations with words. But this
is akin to the comments of a program. Once you get down to work, in a
professional setting, all mathematics uses single-letter variables

(and I may add that all programming should do the same).

I also agree with you that starting math with set theory is utterly ridiculous
and even harmful. You should start with arithmetic and with geometry
(obviously, using single-letter variables when needed).

------
pron
> Mathematics is a purely functional language.

It is not. E.g. x ∈ ℕ ∧ x > 2 ∧ x < 5 ∧ x % 2 = 0 implies x = 4. This is not
something pure functional languages can express directly. In math we make
common use of relations, of which functions are a special case, but not the
common one and certainly not the only one. But even when we do work with
functions, a functional PL does not suffice. E.g. writing f(t) = t + 1 ∧ x ∈ ℕ
∧ f(x) = 3 is equivalent to writing x = 2. Also, we commonly employ objects
that are not computable.
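
For what it's worth, the first implication is easy to check mechanically; a
minimal Python sketch (the range bound is arbitrary, since the constraints x > 2
and x < 5 already confine the search):

```python
# Brute-force check: over the naturals, x > 2, x < 5, and x % 2 == 0
# leave exactly one solution, x = 4.
solutions = [x for x in range(100) if x > 2 and x < 5 and x % 2 == 0]
print(solutions)  # [4]
```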

This is just one of the many mathy myths that plague the FP community. Yes,
there are some similarities between FP and mathematical notation, but FP and
imperative are _much_ closer to one another than either is to mathematics.

~~~
brianberns
I'm not sure what point you're trying to make. You can certainly express those
statements in both FP and imperative languages, and you can prove the
equivalences you mention in FP languages like Agda, Coq, and F*.

~~~
cinnamonheart
An example of

> _E.g. x ∈ ℕ ∧ x > 2 ∧ x < 5 ∧ x % 2 = 0 implies x = 4._

in Agda would be this, I think:

    
    
      _ : ∀ { x : ℕ } → x > 2 → x < 5 → (x % 2) ≡ 0 → x ≡ 4

~~~
pron
The type level of dependently typed languages or other languages that allow
directly expressing relations (including Java's JML) can do that. That's
little to do with pure FP.

~~~
cofunctor
In some sense it does, though. Type theories (and their associated pure FP
languages) often have the exact same algebraic structure as different classes
of logic. To my understanding, JML uses Hoare logic, which is a great tool for
proving correctness of imperative programs, but is not quite as expressive.
For example, Hoare logic can't really deal with higher-order things. That's not
to say Coq/Agda are perfect, though. They do struggle with more "extensional"
properties, such as function extensionality.

~~~
pron
> but is not quite as expressive

It's as expressive.

> For example, Hoare logic can't really deal with higher-order things.

1. It can. 2. It doesn't matter so much, as there are no higher-order
"things", but rather higher-order ways to _describe_ things. For example, in
mathematics, dynamical systems can be described either as higher-order ODEs
or as equivalent first-order ODEs. In other words, "order" is a feature of the
signifier, not the signified. For example, things that would be higher-order
in Agda are first-order in TLA+ (and you don't need to go that far: formal set
theories are usually first-order, yet you need higher-order typed logics to
describe the same things).

> They do struggle with more "extensional" properties, such as function
> extensionality.

Well, that's because they're constructive. There are type-theory-based proof
assistants that more easily support classical mathematics, like Lean.
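
For a flavor of the contrast: in Lean 4, function extensionality is a provable
theorem (derived from quotient types) rather than a postulated axiom; a minimal
hedged sketch, with a hypothetical theorem name:

```lean
-- Function extensionality via Lean 4's built-in `funext` lemma:
-- two functions agreeing on every input are equal.
theorem ext_example (f g : Nat → Nat) (h : ∀ x, f x = g x) : f = g :=
  funext h
```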

------
Abishek_Muthian
Nice article. I was wondering why Julia wasn't featured, considering its
support for math syntax. I wondered whether this could be an old article, but
there is no date on it.

I checked the RSS feed and couldn't find the article in it. Cross-checking the
oldest article in the feed against the index of articles on the site, this
article seems to have been written before Dec 2014. That would explain why
Julia isn't included.

~~~
Mathnerd314
Don't forget the Internet Archive!

The first version is from Nov. 14:
[https://web.archive.org/web/20111114185849/http://matt.might...](https://web.archive.org/web/20111114185849/http://matt.might.net/articles/discrete-
math-and-code/) The article index from Nov. 8 does not include the article:
[https://web.archive.org/web/20111108003034/http://matt.might...](https://web.archive.org/web/20111108003034/http://matt.might.net/articles/)

So we can say with confidence that this should have a [2011] tag and was
published sometime Nov 8 - Nov 14 in 2011.

~~~
kbd
It's frustrating when Internet articles don't include publication dates. It's
not like it's paper where someone didn't think to record the date. The date
certainly _exists_ in the database, filesystem, etc, the author just chooses
not to show it, forcing forensics to figure out when it's from.

(I was reading another article from his site yesterday after it was linked on
lobste.rs and was also trying to figure out how old it was.)

~~~
jodrellblank
Why would the date "certainly" exist in the database or filesystem? The site
might be a single file created in 2001 and modified every day since,
containing every post, like a mail spool file. It might be a record in a
database table with no date column. If it was a single file, then the creation
time could be weeks or months before the publication time, and the
modification time could be the last time an edit or tweak was made, rendering
both useless for this purpose. I think it's more likely not to have a useful
date anywhere unless someone went out of their way to put one in.

(Which they should do, it's useful)

------
eggy
I would have thought Clojure vs. Java would be used, given immutability. I
like both Haskell and Racket for math, especially this article on geometric
algebra with Haskell:

[https://crypto.stanford.edu/~blynn/haskell/ga.html](https://crypto.stanford.edu/~blynn/haskell/ga.html)

I am personally learning Shen[1] because of its Lisp heritage, optional typing
(a great type system), and the formal work done within it, and for it, in
proving, verification, and other related endeavors. It seems to tick the
Haskell, Lisp, Prolog, and logic boxes for me. Its portability is great[2];
however, it is still young and building libraries. I am thinking of trying to
mimic the DL work done by Dragan Djuric[3] for Clojure in Shen. We'll see how
that goes...

[1] [http://www.shenlanguage.org/](http://www.shenlanguage.org/) [2]
[https://shen-language.github.io/](https://shen-language.github.io/) [3]
[https://dragan.rocks/](https://dragan.rocks/)

------
beagle3
APL is an executable mathematical notation - in fact, it was designed by
Iverson for describing algorithms on paper (in 1958, IIRC), and the team
working on it then implemented it in software, because... it was a
well-specified way to describe algorithms.

I highly recommend reading [0], which contains the simplest (IMO) description
of the simplex algorithm, which is both mathematical and executable. If you
find that inspiring, [1] is also worth a read.

[0]
[https://www.jsoftware.com/papers/DFSP.htm](https://www.jsoftware.com/papers/DFSP.htm)

[1]
[http://www.eecg.toronto.edu/~jzhu/csc326/readings/iverson.pd...](http://www.eecg.toronto.edu/~jzhu/csc326/readings/iverson.pdf)

------
adamnemecek
Julia is notably missing.

Packages like this one are dope
[https://github.com/korsbo/Latexify.jl](https://github.com/korsbo/Latexify.jl)

Or this

[https://github.com/qojulia/QuantumOptics.jl](https://github.com/qojulia/QuantumOptics.jl)

Or this

[https://www.juliahomotopycontinuation.org/](https://www.juliahomotopycontinuation.org/)

~~~
eggy
I like Julia for all of the libraries and the Lisp underneath; however, from
the article:

"Many of the encodings are as immutable, purely functional data structures
(even in imperative languages), a topic unfortunately omitted from many
computer science curricula."

Julia and similar PLs don't express math the way APL, J[1], Haskell[2],
Scheme[3], or even Clojure can, with immutable structures and function
composition, to name a couple. Sure, you can write it in Julia, but I don't
think the article is about producing math output, as in the Latexify.jl
example, but about how to code these math structures, where certain languages
can express them out of the box in an easier manner.

[1]
[https://www.jsoftware.com/books/pdf/](https://www.jsoftware.com/books/pdf/)

[2] [https://www.amazon.com/Haskell-Logic-Programming-Second-
Comp...](https://www.amazon.com/Haskell-Logic-Programming-Second-
Computing/dp/0954300696)

[3]
[https://mitpress.mit.edu/sites/default/files/titles/content/...](https://mitpress.mit.edu/sites/default/files/titles/content/sicm_edition_2/book.html)

~~~
ddragon
Can you explain it better? I do agree about Haskell, due to laziness by
default and arrow types, for example, but against Scheme and Clojure the only
difference seems to be the stricter rebinding/shadowing rules (which don't
limit what you can express; they just mean you can also express
non-mathematical expressions).

Primitive types (as in pretty much all languages) are immutable, structs in
Julia are immutable by default (which covers pretty much every math type, like
complex numbers), and all math operators are non-mutating (in fact, mutating
operators should have a ! in their name).

Function composition is just as simple in this scenario:

    f(x) = x + 2
    g(x) = 2x

    h = g ∘ f
    h(3) == 10

~~~
eggy
@ddragon your points are well taken. I will look further into Julia. Still,
when I wrote "out of the box" I had APL in mind as far as expressing math
goes, or J for that matter if you learn the ASCII symbols. I've seen other
examples of Julia since your post that show similar math expressiveness, but
they seem to rely on libraries or the syntactic sugar provided by those
libraries. I haven't tried your example, but it looks like vanilla Julia. My
familiarity with Matlab makes Julia an easy choice for me too.

Here is a simple example in J:

    +/ % #

The above is an average function in J. It is a fork, where you fold right over
the input (+/) and divide (%) by the tally (#), or count of items, so that

    (+/ % #) 4.5 3 2 12

produces 5.375 as the average of the input vector. Notice that the array-based
language deals with singular quantities or scalars, vectors, and
multi-dimensional arrays as fundamentals of the PL.

You could also define it as:

    average =: +/ % #

for those who make the readability argument. However, you learn math symbols
and read math papers full of them, not lengthy verbal descriptions of these
math formulas ("average is equal to the sum of all of the input elements
divided by the count or tally of input items").
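
For comparison, a hedged rendering of the same fork in Python, explicit where
J is tacit:

```python
# Python equivalent of the J fork (+/ % #): sum the items, divide by the tally.
def average(xs):
    return sum(xs) / len(xs)

print(average([4.5, 3, 2, 12]))  # 5.375
```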

The fact that a highly functional APL program can fit on one screen negates
the argument that you need verbose names for readability, so that when you, or
someone else, pick up your 110K lines of JavaScript code, a change can be
made. You may have to do a refresh read in APL and J, but because it is terse,
you can deal with it pretty quickly. Well-commented code at the 100K-to-1-million-line
scale will never allow one person to see the whole picture.

The work by Aaron Hsu in APL is amazing. Here is a link to the slides from a
talk he gave. It's a bit lengthy but I found it very interesting and
appropriate to this topic:

[https://sway.office.com/b1pRwmzuGjqB30On](https://sway.office.com/b1pRwmzuGjqB30On)

The YouTube link to the talk is here:

[https://www.youtube.com/watch?v=9xCJ3BCIudI](https://www.youtube.com/watch?v=9xCJ3BCIudI)

------
throwawaymath
> _Explicit sequences tend to be wrapped in angle-brackets, so that: s = <s1,
> s2, ... sn> (Please note that in LaTeX, one should use \langle and \rangle
> for angle brackets--not less than and greater than.)_

The article is pretty good overall, but I find this a little odd. I don't
think I've ever seen sequences denoted with angle brackets. You usually see
those used for denoting an inner product. I've only ever seen sequences
denoted with parentheses, like (a_n) = (a_1, a_2, ..., a_n).
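
For reference, the two conventions side by side in LaTeX (a sketch; \langle
and \rangle per the article's advice, parentheses per the convention above):

```latex
s = \langle s_1, s_2, \ldots, s_n \rangle   % angle-bracket sequence notation
(a_n) = (a_1, a_2, \ldots, a_n)             % parenthesis sequence notation
```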

It's kind of humorous to me that we can sort of fingerprint the
textbooks/lecturers people learned from by the idiosyncrasies of their
notation.

------
yters
Not sure what the point of translating math into code is. Just because we
represent something with code doesn't mean it is computable. It's also
technically incorrect to say math is translated into code. The mathematical
symbols are being swapped with codey things, but the underlying semantics are
quite different. For example, we might say the symbol 'oo' is infinity, but
infinity itself is something that cannot be embedded in a finite program. So,
we might coerce a parser into realizing that 1/oo=0 with a background ruleset,
but the identity itself was derived by our minds accessing a mathematical
world of forms that is completely inaccessible to finite computational
mechanisms.

It seems like this effort is going to add a lot of overhead to doing math, and
obscures the role of thinking through the math conceptually. There seems to be
a Principia Mathematica 2.0 idea in the programming world: that we just need
to reduce mathematics to code and it'll become easily accessible and usable.
I'm not convinced that will be the case.

~~~
throwawaymath
You and I read the article very differently. Based on your comment, it sounds
like you interpreted the article as implicit advocacy for the idea that
mathematics should be represented in a programmatic (i.e. computable) way.

I didn't pick up any ideology in my reading. My interpretation of the article
is that the author wanted to provide tips on how to implement mathematics, for
two reasons:

1. People _have to_ implement nontrivial mathematics from time to time, and

2. Most mathematics doesn't concern itself with obstacles to implementation,
because it doesn't have to.

~~~
yters
That's true, and numerical analysis can be quite helpful for mathematical
insight. However, I've never found the math-to-code part to be tricky. It's
mostly an afterthought compared to the conceptualization itself. So the
emphasis on coding math suggests a reversal of priorities. In my experience,
focusing on the coding part becomes very inefficient, especially with
combinatorial problems. No matter how tricky I get with the implementation, a
combinatorial explosion is still a combinatorial explosion. Coming up with a
neat analytic solution ends up being super efficient in comparison.

------
plibither8
Past discussions:

* [https://news.ycombinator.com/item?id=9022676](https://news.ycombinator.com/item?id=9022676)

* [https://news.ycombinator.com/item?id=3231525](https://news.ycombinator.com/item?id=3231525)

------
new4thaccount
This is great, but I'd love to see more Mathy languages added like APL,
Mathematica, and Matlab/Octave.

------
misterdoubt
Despite the title, this article does a real disservice to Python.

For the purpose of constructing a generator, the author wrote:

> _In an object-oriented setting like Python or Java, streams can be
> constructed from an interface_

Yikes. itertools was introduced in, let's see... 2003?

~~~
scardine
Python has both generator functions and generator expressions. A generator
function is like this:

    
    
        def int_generator():
            i = 0
            while True:
                yield i
                i += 1
    

A generator expression looks like a list comprehension (a Python construct
based on the set-builder notation in Math) and is used in situations where you
don't really need to materialize the list - for example when you are just
iterating over it in a `for` loop:

    
    
        import time

        for n, square in ((i, i * i) for i in int_generator()):
            print(f"{n}² = {square}")
            time.sleep(1)
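
As a footnote to the itertools point upthread: the hand-rolled counter can be
replaced wholesale by the standard library. A minimal sketch using
itertools.count and islice (islice bounds the otherwise-infinite stream):

```python
import itertools

# itertools.count() plays the role of a hand-written integer generator;
# islice takes a finite prefix of the infinite stream of (n, n*n) pairs.
squares = ((i, i * i) for i in itertools.count())
first_three = list(itertools.islice(squares, 3))
print(first_three)  # [(0, 0), (1, 1), (2, 4)]
```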

