
Why Don't Computer Scientists Learn Math? - nkurz
http://research.microsoft.com/en-us/um/people/lamport/tla/math-knowledge.html
======
Animats
This is Leslie Lamport's writing, and he's complaining that nobody uses his
TLA+ formal specification language. (Here's the TLA book.[1] Read the
introduction, and it's much the same rant.)

I used to work on formal specifications and program proofs, around the time
Lamport was developing temporal logic. I took the position that, rather than
trying to make programmers learn math, we should automate and divide up the
problem so that they didn't have to.[2][3] This meant minimizing the formalism
and making the verification information look like ordinary programming. It
also meant carving out the harder math problems into a separate system where
someone using a theorem prover would prove the necessary theorems in a way
which allowed their use and reuse by ordinary programmers. The underlying math
is still there, but not in your face so much.

There's a mathematician mindset: Terseness is good. Cleverness is good.
Elegance is good. Generality is good. Case analysis is ugly. This is unhelpful
for program verification.

Program verification generates huge numbers of formulas to be proven. Most of
the formulas are very dumb, and can be dealt with automatically by a simple
theorem prover. Where it's necessary to prove something hard, our approach was
to encourage users to write more assertions in the code to narrow the thing to
be proven, until they were down to "assert(expra); assert(exprb);" Getting to
that point is much like ordinary debugging.

Now they just needed a theorem, independent of the program, that could prove
"exprb" given "expra". That job could be handed off to a theorem-proving
expert. A library of proved and machine-checked theorems would be built up.
Some would be general, some project-specific. This is the software engineer's
approach to the problem.
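
To make that concrete, here is a minimal sketch (my own illustration in Python, not taken
from the verifier in [2]) of the assertion-narrowing style: the programmer keeps adding
assertions until what remains to be proven is a small, program-independent implication
that a theorem-proving expert could discharge once and add to a library.

    def isqrt(n):
        # Integer square root by bisection, annotated in the assertion-narrowing style.
        assert n >= 0                       # "expra": precondition the caller establishes
        lo, hi = 0, n + 1
        while hi - lo > 1:
            mid = (lo + hi) // 2
            if mid * mid <= n:
                lo = mid
            else:
                hi = mid
            assert lo * lo <= n < hi * hi   # loop invariant; each step is simple enough
                                            # for a dumb prover to check mechanically
        # "exprb": the remaining obligation is program-independent:
        #   hi == lo + 1 and lo*lo <= n < hi*hi  implies  lo*lo <= n < (lo+1)*(lo+1)
        assert lo * lo <= n < (lo + 1) * (lo + 1)
        return lo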

The mathematician's approach is to capture much of the program semantics in
mathematical form and work on them as big math formulas. Most of the work is
done in the formula space, where mathematicians are comfortable, not the code
space, where programmers are comfortable. This seems to hold back the use of
formal methods in program verification.

Much effort has gone into trying to find some clever way to make
program verification elegant. "A great variety of program logics are in use in
computer science for the specification of (software and/or hardware) systems;
for example: modal and temporal logics, dynamic logic, Hoare logic, separation
logic, spatial logic, nominal logic, lax logic, linear logic, intuitionistic
type theory, LCF."[4] We used to call this the "logic of the month club".

[1] [http://research.microsoft.com/en-us/um/people/lamport/tla/book-02-08-08.pdf](http://research.microsoft.com/en-us/um/people/lamport/tla/book-02-08-08.pdf)
[2] [http://www.animats.com/papers/verifier/verifiermanual.pdf](http://www.animats.com/papers/verifier/verifiermanual.pdf)
[3] [http://portal.acm.org/citation.cfm?id=567074](http://portal.acm.org/citation.cfm?id=567074)
[4] [http://homepages.inf.ed.ac.uk/als/PSPL2010/](http://homepages.inf.ed.ac.uk/als/PSPL2010/)

~~~
gravypod
I know I've posted this a few times but I don't think I've ever seen you in a
thread about this. Do you have any simple way for a computer scientist to
learn about proven methods?

More precisely, how do I engage in the dark incantations required to conjure
up my own proven methods? For my critical code, right now I just write unit
tests. Do you know some place that can shed some light on what I should
actually do? I mean, I can just tell my coworkers "yeah, I formally proved this,
it won't have bugs" but I don't think that's quite in good spirit.

~~~
throwaway729
What programming language(s) do you typically use for your critical code?

~~~
gravypod
My next "big" project will be a data visualization system that's probably
going to need some Python, JavaScript, and some simple bash piping between the
python scripts.

It's not going to be mission critical but it'll be nice to flash and say "I
used formal methods to prove this implementation" as it is going to be for an
academic's project.

~~~
throwaway729
_> My next "big" project will be a data visualization system that's probably
going to need some Python, JavaScript, and some simple bash piping between the
python scripts._

Unfortunately, there aren't great deductive verification tools for either
Python or JavaScript.

Perhaps more importantly, there's also not a particularly good understanding
of how to formally verify front-end code.

The most important property of a visualization system is something like "the
visualization accurately represents the data being visualized", but in a
problem domain like this, formalizing the high-level intent is probably harder
than actually proving that this intent is achieved.

Additionally, the cost of throwing an exception at runtime is presumably
pretty low (unlike in a control system, for example). So once you've
formalized your spec, you can just check it at run time and throw an
exception/report a bug instead of, or in addition to, generating the possibly
faulty visualization.

Are there key components of the visualization pipeline that could threaten its
correctness? E.g. data reduction, interpolation, fancy graphics code, etc.? If
so, I'd focus on those and come up with a good set of formal, checkable-at-
runtime contracts that capture your high-level informal specification.

Alternatively, is it possible to do some computation on your output to check
that it matches the input data? E.g. you could imagine counting pixels of
certain colors on a heat map or pie chart and comparing that to the original
data. That way you don't have to verify (or trust) all of the graphics
generation code, thus significantly reducing your trusted computing base.
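
As a rough illustration (the pie-chart framing and names here are mine, not the
parent's), a checkable-at-runtime contract of that kind might look like this in
Python:

    import math

    def pie_fractions(data):
        # Hypothetical reduction step: raw values -> wedge fractions to be rendered.
        total = sum(data)
        return [v / total for v in data]

    def check_pie_contract(data, fractions, tol=1e-9):
        # Runtime contract: the rendered wedges must accurately represent the input data.
        expected = [v / sum(data) for v in data]
        assert len(fractions) == len(data), "one wedge per datum"
        assert math.isclose(sum(fractions), 1.0, abs_tol=tol), "wedges cover the whole pie"
        assert all(math.isclose(f, e, abs_tol=tol)
                   for f, e in zip(fractions, expected)), "each wedge proportional to its datum"

    data = [3, 1, 6]
    fractions = pie_fractions(data)
    check_pie_contract(data, fractions)  # raise/report a bug instead of shipping a wrong chart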

Saying "we have a formal mathematical description of what it means for this
portion of the visualization system to be correct, and a way of checking that
input/output matches that description" is probably a more significant
contribution than actually churning through a static verification effort.

 _> it'll be nice to flash and say "I used formal methods to prove this
implementation" as it is going to be for an academic's project._

Are you sure this is the case? Most academics, even within CS, don't care much
about formal methods.

If you're just looking to learn formal methods, then I guess it's whatever.
But if you're looking to stand out, you might want to figure out what's
important to your stakeholders and focus on that. I'd be pleasantly surprised
if the answer to that question is formal methods.

~~~
gravypod
There are multiple portions of this project only one of which is rendering.
The rest is data acquisition, job scheduling, and some other stuff. I've got
to sample a source of data and run a lot of code on it.

> If you're just looking to learn formal methods, then I guess it's whatever.
> But if you're looking to stand out, you might want to figure out what's
> important to your stakeholders and focus on that. I'd be pleasantly
> surprised if the answer to that question is formal methods.

I was just joking around a bit. I've been interested for some time.

------
aswanson
Perhaps it's time to rethink the sanctity of mathematical symbolic
communication. Seriously, that equation borders on parody in attempting to
relay a rather simple concept. We don't expect CS people to communicate
concepts through compiled, machine language style syntax (minimum number of
bits uber alles) so why not revisit the communication of mathematical concepts
with weight given to clarity over compression?

~~~
jjoonathan
∀+∃+{|} are the epitome of clarity over compression. Mathematicians like them
for the same reason many programmers like sophisticated static typing.
They might not always improve the clarity of toy examples to beginners, but
they certainly help improve the clarity of untamed, wart-ridden, real-life
problems to initiates.

I think the first time I truly appreciated the power of ∀ and ∃ was during
functional analysis when I found myself juggling an overwhelming number of
different notions of continuity, convergence, smoothness and the like. ∀ and ∃
drew out the subtle distinctions that the blurry, imprecise words of the
English language conspired to hide. They let me break down logic hairballs
completely indigestible to my mammalian intuition into small chunks that I
could individually process. Same idea as algebra.
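
A standard example of what I mean (my own, not from any particular notes):
continuity and uniform continuity differ only in where the ∃δ sits relative to the
∀x, a distinction English prose tends to blur:

    continuity:          ∀x ∀ε>0 ∃δ>0 ∀y : |x−y|<δ ⟹ |f(x)−f(y)|<ε
    uniform continuity:  ∀ε>0 ∃δ>0 ∀x ∀y : |x−y|<δ ⟹ |f(x)−f(y)|<ε

In the first, δ may depend on the point x; in the second, one δ has to work
everywhere.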

_That's_ their purpose, and they're very good at it. Nothing I've
encountered in the world of programming languages really comes close to doing
their job. Don't fall into the trap of assuming they're pointless just
because they don't apply to your problems.

~~~
kabdib
I use ∀ and ∃ and so forth, mostly in private notes. It's great shorthand.

On the other hand, in MathLand we have conventions like saying "2x" when you
want to multiply x by 2, but if you write "dx" you'd better know if you're
doing calculus or not, because it means something totally different depending
on context.

Mathematical notation is pretty squishy. There are probably thousands of uses
of every greek letter and their italic, bold and variously curlicue'd
isomorphs. And it's mostly culture, with a lot of history, personalities and
land-grabs thrown in. You can't necessarily crack open a paper without a lot
of baggage-level context ("Oh yeah, _this_ community uses brackets for things
_that_ community uses subscripts for, okay...")

On the other hand, C++ is complicated and messy and also mostly a train wreck,
but at least there's a standard you can go read.

[I once thought it would be worthwhile to learn denotational semantics. The
"at least one new hijacked hieroglyph must appear on every page" nature of the
reading put me off it in short order. I had no idea where to start
interpreting the notation other than maybe attending a class to get the proper
starter culture. It was frustrating to see notation go in the "squishy"
direction when I thought it should have been using something you could parse
and run tools against. Oh well.]

~~~
tsilva
> I had no idea where to start interpreting the notation other than maybe
> attending a class to get the proper starter culture.

More than anything, I think this is the main reason studying many subfields
of mathematics is so difficult: we spend more time and effort on things like
parsing notation than on the actual subject matter.

Even popular subjects may show similar problems. For example, in Sedgewick's
Analysis of Algorithms, he gives the following common definition:

> O(f(N)) denotes the set of all g(N)....

On the next page, he presents an exercise:

> Show that f(N) = N lg N + O(N) implies f(N) = ϴ(N log N)

The reader looks at that "N lg N + O(N)" and tries to make sense of the addition
of a number to a set of functions. Note that this is on page 5, so many readers
unfamiliar with the culture of the area are likely to just abandon the book
(and perhaps the study of the subject) because they could not make sense of
the very first formula presented by the author.

The only clue that could help the newbie get some answers lies
in a couple of references that point to the historical uses of these Greek
symbols in computational complexity. Within that reference [1], by none other
than Knuth, one can find the explanation for such syntax:

> "1+O(n^-1) " can be taken to mean the set of all functions of the form
> 1+g(n), where |g(n)| < Cn^-1 for some C and all large n.

And then he goes on about the problem of that syntax with respect to one-way
equality:

> we write 1+O(n^-1) = O(1) but not O(1) = 1+O(n^-1). The equal sign here
> really means ⊆ "set inclusion", and this has bothered many people who
> propose that we not be allowed to use the = sign in this context. My feeling
> is that we should continue to use one-way equality together with O-notations
> since it has been common practice of thousands of mathematicians for so many
> years now, and since we understand the meaning of our existing notation
> sufficiently well.

The above is not only a reason for logicians to laugh at the pretense that
mathematical language is formal, but also an example of the things that are
likely not to be found in books but in the unwritten culture of a field.
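
In fairness, once you have Knuth's set reading in hand, the exercise on page 5
unpacks mechanically (my paraphrase, not Sedgewick's):

    f(N) = N lg N + O(N)
        means  f(N) = N lg N + g(N)  with |g(N)| ≤ C·N for some C and all large N,
        so     N lg N − C·N ≤ f(N) ≤ N lg N + C·N.
    For large N, C·N ≤ ½·N lg N, giving ½·N lg N ≤ f(N) ≤ (3/2)·N lg N,
    hence  f(N) = ϴ(N lg N).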

\--

[1] - Knuth, Big Omicron and big Omega and big Theta

------
gravypod
This is like "Why Don't Mathematicians Learn to Program?"

"I had this talk where I wanted to feel superior so I showed an obtuse way to
notate a simple situation"

"You can see my code I showed a room full of people who have more important
things to do then to study arcane math symbols that are of no use to them:"
[https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpris...](https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpriseEdition/blob/master/src/main/java/com/seriouscompany/business/java/fizzbuzz/packagenamingpackage/impl/Main.java)

"I mean who in the hell could call themselves a mathematician without learning
how to recognize obscure abstraction principles and language features of Java,
something they'll probably never need to use?!?!"

"Some of these people where even smart, I tell you! They are big people who
are productive but I have no idea how you could be productive or smart without
knowing how to recognize this obscure thing"

Seriously, if you pulled in a bunch of CS people and showed them this:

    
    
        lambda x: x
    

And if they didn't know what it meant, you'd have a point. If you pull them in and
show them something they don't use often, and probably will never really use, and
use that as a basis of their worth, then I don't feel you have a correct set of
KPIs for what makes a good scientist or researcher.

------
PNWChris
Honestly, this is mostly an issue of syntax (a problem, but one that is
remedied with a quick look-up table/cheat sheet).

Giving a legend explaining those symbols would turn the statement into this:

Given a function (f) that maps integers in the range 1 to n onto the same
range (1 to n), for every y in that range (1 to n) there exists an x in that
same range (1 to n) where applying the function (f) to an input x results in
the value y.

(please correct the parts I got incorrect, it's been a while since my last
discrete math class).

These symbols are important tools in discrete mathematics, but are easy to
forget or get lost in due to the very high information density they achieve.
Adding to the complexity, we explain the range of both inputs and outputs
before defining the function. Intuitively, I feel CS folks would prefer to
define the input range, the function's mode of operation, and finally the
mapped output range.

This is especially complex when these discrete mathematics symbols operate
upon one another. Throw in some sigmas and boolean operations, and it quickly
becomes very difficult to parse.

The underlying intuition the author is trying to communicate is likely
something that the entire audience could understand. They simply weren't fluent in
the "language" of mathematics. It's like being very articulate in English, and
having to defend a thesis in French. Not only do you have to translate your
thoughts, many idioms you may usually rely on are no longer valid in a new
language.

Finally, why not define that same statement in Python or MATLAB? It's just as
valid in my opinion.
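
For what it's worth, here is one way that same statement might look in Python (a
brute-force sketch for small N, just to show the quantifiers turning into all/any;
the names are mine):

    from itertools import product

    N = 3
    domain = range(1, N + 1)

    def is_onto(f):
        # ∀ y ∈ 1..N : ∃ x ∈ 1..N : f[x] = y
        return all(any(f[x] == y for x in domain) for y in domain)

    # { f ∈ [1..N -> 1..N] : f is onto }
    functions = (dict(zip(domain, values)) for values in product(domain, repeat=N))
    surjections = [f for f in functions if is_onto(f)]

    # For a finite domain equal to the codomain, onto implies one-to-one,
    # so these are exactly the N! permutations of 1..N.
    assert len(surjections) == 6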

------
dllthomas
I am a programmer and not a mathematician. I studied more math than typical
(almost minored in it) but all of it undergrad and it's been more than a
decade since I've done anything formally academic. I read some of the comments
here before reading the article, and I was expecting something significantly
more complicated and esoteric. Set builder notation, element-of, and
quantification - it's practically Python. I find myself genuinely _shocked_
that people here are finding it inscrutable, much less those in the audience
at the talk. I can't help but wonder whether most of those with their hands
down actually understood it but lacked the confidence that their read was
correct.

------
Gruselbauer
I'm not a CS major, and I also don't have the slightest idea what that formula
is about. Don't even know half of the symbols.

Granted, my bachelor's isn't even a scientific degree, I'm an ex-addict turned
sysadmin recently turned programmer, self-taught entirely. But I'm freaking
sad I was a dumb kid who thought math was stupid once and that our school
system (Germany) made it so easy to use my talent for language to average out
my dare-I-say intentionally bad grades in the subject.

It's not even that I was bad at it. What drove me to computers, to code or
crafting ridiculous zsh scripts in the first place was formulating my own
approach to solving a problem. As much as I sucked at "solve these twenty
equations like this", I used to love problems stated in natural language and
coming up with a solution. Should've built on that.

Anyway, besides the late night regret... Can anyone of you recommend a book
for me? Think little to no familiarity with the terminology, many years of
only using what math a supermarket or insurance situation entails, eagerness
to learn and fascination with the subject. No delusions about becoming a
regular mathlete in my early thirties, I just want to fix a lack of knowledge
caused by pubescent idiocy.

Thanks!

~~~
billsix
Introduction to Automata Theory, Languages, and Computation
[https://www.amazon.com/dp/8131720470/ref=cm_sw_r_other_apa_4...](https://www.amazon.com/dp/8131720470/ref=cm_sw_r_other_apa_4a8lybXX8SA2X)

------
noobermin
Alright, hold up a section of C++ or js code and repeat similar questions, and
then ask why mathematicians don't learn to code given how relevant computer
programming is to applications of mathematics today.

Not that I believe mathematicians should learn how to code as a prerequisite for
their degree, I don't. I also have similar feelings for CS students and
understanding mathematical notation. Yes, if you're headed for grad school or
a researcher, then it's a different story (so maybe the grad students/academics
should have tried). But should knowing that notation, say, be on the
same level of importance for understanding programming concepts like cache-
locality for someone headed to a typical dev job? Probably not.

------
OJFord
I'm surprised anybody calling themselves a 'computer scientist' couldn't follow
that - developer, sure, but as far as I'm aware everyone on any CS degree
programme in the world "learns math"; enough to follow that for sure.

~~~
geoka9
Yes, to me "Computer Science" has always been a branch of (applied) math that
deals with mathematical theories that apply to computing - complexity,
discrete math, numerical methods, operations research, automata, etc.

------
empath75
It's sort of weird that a mathematician would confuse 'notation' with 'math'.

This is like giving an algorithms class to a mixed group of perl and Haskell
programmers, using Haskell for all the examples and wondering why perl
programmers don't learn computer science.

~~~
nabla9
He is not confusing notation with math.

If you don't speak the language, it indicates that you never learned the stuff
that is expressed in that language, at least not well enough to be fluent in it.

~~~
gravypod
Do you think that the people in his room didn't understand the concept of a
function mapping over x coordinates whereby the height of each y component
equals that of its position on the x axis?

Or more simply put, do you think these people have never taken a math class in
high school that went over

    
    
        y = f(x)

~~~
nabla9
Lamport is not talking about basic math.

If you study math enough, you learn to read this math notation fluently. From
his toy example you can infer that they have not studied much math very
deeply. Like topology or advanced algebra.

------
manyxcxi
I loved math and sailed through it in college. Algebra, calculus, linear
algebra, theoretical classes- no problem, I never got less than a B+. We had a
lot of classes where using the calculator was forbidden and you had to
actually know how to break these things down. When I was practicing math, I
can't overstate how easy it was for me.

The problem is that it's been 12 years since I used a differential equation
or any of my other 'hard' math. It's really frustrating that I used to be able
to look at these equations and know what they meant and now I can barely make
heads or tails of simple f(x) type stuff. I could solve (simple) differential
equations without even jotting a note down, I honestly don't know if I could
read one now.

It sucks.

~~~
louthy
I suspect if you wrote some code to solve differential equations 12 years ago,
and went back to that code today, you'd understand it without much hassle.
This is the failure of maths as a language IMHO.

~~~
bloaf
But you'd be looking at two different things. When looking at a math textbook
on differential equations, you're looking at _why_ a class of expressions
constitutes a solution to a differential equation. Looking at code, all you'd
get is the mechanical _how_ something is computed, and you'd have to take it
on faith that what was computed constituted the solution you were looking for.

------
pcr0
It took me a little while to parse the syntax, but understanding the formula
itself was simple.

I think more people struggle reading math than understanding the concepts.

------
wott
He obviously runs through the speech he had ready (there is not half a second
of a break at any point) whatever the actual audience response may be.

Also note that he inundated the audience with a different formula popping up in
random places on his screen every 5 seconds for at least the previous 2
minutes, with constant more or less related babbling.

It sounds to me like dishonest behaviour and a dishonest conclusion. I must say I am
neither a mathematician nor a computer scientist, so I don't have a side to
take, and I understand the formula, but I would not have raised my hand given
that there were only 5 seconds between the time the formula popped up about a
third of the way down an already filled screen and the request to raise one's
hand. (And also because the one who raises his hand may end up cleaning the
toilets. Beware.)

------
shoefly
We should rephrase the question as "why don't computer scientists learn
traditional math symbols?"

Try writing traditional equations with your keyboard. Try posting your answer
online. Modern computer science breaks these symbols down into plain English.

~~~
j2kun
We do have math typesetting, you know, with a standard programming language
for typesetting handled by all major browsers. Just head over to MathOverflow
and see it in action.

~~~
skeuomorf
Also, when writing code it's not unusual (at least among LISP and Haskell
programmers) to have the editor replace a word with its corresponding symbol,
the most famous example of which is (lambda -> λ), because some people find it
easier to parse e.g. in emacs[0]. You can easily imagine doing that for other
expressions, like (forall, there-exists, ...etc).

All in all I think it's a matter of preference which is in turn related to
each individual's experiences.

[0]
[https://www.emacswiki.org/emacs/PrettyLambda](https://www.emacswiki.org/emacs/PrettyLambda)

~~~
j2kun
Indeed, Slack has native support for ~2k emojis, but not basic math symbols.
It's clearly a matter of priorities, not feasibility.

------
czheo
Math notations are so badly designed (if there is any design at all). If you'd
spoken in plain English, Lamport, most people in the room could have
understood the concept behind your tedious formula. So it's not that we don't
understand math; it's that math notations are usually over-abbreviated,
obscure, and inconsistent. Math itself is strict, but there's no strict common
language to express it, which ultimately prevents people from understanding
it.

~~~
Smaug123
There is a strict common language to express it. The speaker used that
language (except for the nonstandard use of square brackets in place of
parentheses, which was apparently clearly pointed out at the start). The problems
start when you speak in plain English: then you get things like everyone using
different definitions of continuity and thinking everyone else is just
spouting nonsense until someone comes up with epsilon-deltas.

------
cs2818
As a CS researcher I probably wouldn't have raised my hand, thinking it was
some sort of a trick or that the speaker was making a point about how simple
concepts are sometimes conveyed in confusing ways.

I appreciate mathematics, but because I don't use this notation on a regular
basis, I wouldn't have been confident enough to raise my hand.

~~~
dllthomas
Yeah, I don't judge those in the audience too harshly. I could certainly see,
given limited time allotted, thinking "it's probably not that simple, maybe
I'm missing something...?"

------
CalChris
> For example, a programming-language designer who uses symbols that aren't on
> any keyboard deserves to be laughed out of the room.

Not quite. APL.

[https://en.wikipedia.org/wiki/APL_(programming_language)](https://en.wikipedia.org/wiki/APL_\(programming_language\))

------
lesserknowndan
When I went through Uni we learnt several different types of Math including
discrete math, advanced calculus, and probability and stats. The math
described in the article falls under discrete math. The notation used is
similar to that used within formal methods.

However, since Uni I have rarely needed to use the math notation, and so I am
unable to understand it without "interpreting" it. Such notations have most
relevance to understanding database schemas and other set theory type
problems, however, generally speaking, in software development using such a
notation is unneeded.

In summary, the reason computer scientists don't know math is not that they
haven't been taught it, but that the notation is rarely relevant to their work
and is therefore soon forgotten.

------
mcbits
Here is some math notation. What does it mean?

    
    
        x
    

It means nothing. Or it could mean anything. The surrounding text is what
explains the meaning you're supposed to derive from mathematical glyphs. Of
course there are conventions. "x + y" _usually_ means what most people would
assume it does. And sometimes it doesn't. Read the paper to be sure.

I haven't finished watching the talk yet, but he seems to be arguing that
programming languages aren't expressive enough for mathematical purposes. I
suspect he's not counting all of the wordy scaffolding necessary to support
those terse equations. Can it all be combined into a formal language that
stands on its own? If so, might it look like a programming language?

------
pjmorris
I was struck by Butler Lampson's observation along the same lines, in
'Programmers At Work', Lammers [0]

INTERVIEWER: What kind of training or type of thought leads to the greatest
productivity in the computer field?

LAMPSON: From mathematics, you learn logical reasoning. You also learn what it
means to prove something, as well as how to handle abstract essentials. From
an experimental science such as physics, or from the humanities, you learn how
to make connections in the real world by applying these abstractions.

[0] [http://research.microsoft.com/en-us/um/people/blampson/37a-ProgAtWork/37a-ProgAtWork.htm](http://research.microsoft.com/en-us/um/people/blampson/37a-ProgAtWork/37a-ProgAtWork.htm)

------
boxcardavin
I have noticed that CS majors tend to be separated into two groups, those who
got linear algebra and those who got THROUGH it. Programmers are very logical
in their thinking but most are not what I would call traditionally
mathematical in their approach to problems.

~~~
kasey_junk
Put me in the got "through" it camp. To this day my Pavlovian response to any
linear algebra is "put it in a matrix & solve", even if I don't have an
intuitive understanding of what that means.

------
kristianov
We are sudo scientists. Sorry.

~~~
gumby
I see what you did there!

------
dahart
Does anyone else feel like the font made it harder to read? It doesn't look
like latex. :P I have no problem reading math formulas, but this particular
one is harder to read than normal for some reason.

Who am I to argue with Leslie Lamport, but still I reject the premise.
Computer scientists do learn math, regardless of whether they read Greek-
letter math notation. Write a function with a for loop or array map, and
you've used all the same math concepts as this example formula.

All the best math comes with pictures anyway. ;)

------
paulddraper
For the same reason mathematicians don't learn computer science.

And that answer is: most do. The fields are very complementary.

Set theory? I personally don't have need for that right now. But I've been
working my way through
[https://pomax.github.io/bezierinfo/](https://pomax.github.io/bezierinfo/)
which is the most complete resource on Bézier curves I've seen, including closed-
form solutions, analytical methods, and interactive demos.

------
baking
Slightly off topic, but the video linked in the OP was worth watching for more
context. However, he mentions at several points that the Virtuoso operating
system for the Rosetta spacecraft was designed using TLA+. That is not the
case: [http://research.microsoft.com/en-us/um/people/lamport/tla/correction.html](http://research.microsoft.com/en-us/um/people/lamport/tla/correction.html)

------
theobold
Maybe the CS people don't know set notation for the same reason that Lamport
dismisses Homotopy Type Theory and Galois Connections at the 2:00 minute mark.

------
koga-ninja
As far as I know, linear algebra and discrete math are prerequisites for CS
majors.

I didn't formally study math after grade 11. A lot of math anxiety is simply
being too young to have enough knowledge or neural connections.

I'm 40, and while I do not understand Roger Penrose's books, they don't
intimidate me like they would have at age 20.

------
CosmicShadow
At the University of Waterloo, a CS degree was a BMath degree with a CS major
up until about 12 years ago, when a BCS became an option with slightly fewer
math courses. It was so much math I almost dropped out because I hated it; it
felt like no programming or real-world stuff. I just kept waiting for it to get
better until I finally realised this is it, it's mostly just math, and that's what
makes it "computer science". I grunted through it, got my degree, and
learned everything I use today mostly on my own. There was a good base, but
I've forgotten most of the math and notation now because it's mostly useless
to me and I didn't like it or fully get it. Maybe that was just a UW
experience, as it was a highly touted math school.

TL;DR: The comp sci I took was all math, so I don't get the title!

------
aabajian
This is entirely dependent on the program. My undergrad had few set-theoretic
math classes. Sure we had to take discrete math and combinatorics, but this
kind of notation was lacking. I learned it by way of double-majoring in
mathematics.

Conversely, my master's program pretty much assumed you were comfortable with
set theory. CS103 is a freshman/sophomore level class at Stanford and is a
prerequisite/foundational course for the master's degree. I was out for five
years before returning for my master's. It is 100% true that programming
skills are more useful in industry, and re-learning the mathematics was the
hardest part of returning to school after being out for five years.

With that said, machine learning is eating the world, and it requires a very
good understanding of the underlying mathematics.

------
throwaway2016a
I certainly learned enough math to understand this when doing my CS degree.
The better question is "Why do people who don't use something every day forget
the details of how it works?"

When phrased that way, the answer is more obvious. I've learned plenty of
things I have let myself forget.

------
novaleaf
My reason:

Math is boring. Cryptic math is... cryptic.

CS is interesting because it's about logical expression. You have an idea, now
figure out the steps to make that become reality. Sure, some higher maths may
be required along the way, but almost always as an ancillary task.

~~~
davidivadavid
> You have an idea, now figure out the steps to make that become reality.

A.K.A. writing a proof. Isn't that what mathematicians do all day?

~~~
novaleaf
I suppose, but one reason I had no interest in math in school was that I saw no
practical point to a formula, and less so to a proof. It may be a personality
issue with me, compounded by less-than-engaging math education techniques, but
I think a good analogy would be: I always loved Legos as a kid, and
programming is like grown-up Legos to me :)

------
gumby
That kind of math was common on the whiteboards around the MIT AI lab in the
early/mid 1980s when I was there. Then again, (to kapitza's comment) ∀, ∃, and
∈ were on the Lisp Machine keyboard (and the KTV keyboards on which it was
based).

------
mannykannot
This is _not_ about mathematical notation. It is about a lot of people working
in computing, researchers even, not being able to follow what is going on at
what may be the leading edge (cue the arguments that it is not relevant...)

On the other hand, what percentage of physicists understand string theory? I
do not know where that figure lies, whether a low number would be a problem,
or whether it would be relevant to the computer science case. I do suspect
that I would be better at computing if I understood category theory or TLA+ (I
would at least be better able to evaluate their prospects for advancing the
field), and I am not going to make any excuses for my ignorance.

------
j-pb
I would argue that this formulation is outright confusing not because it is so
dense but because it has so much unnecessary cruft to mislead the reader.

The entire formula of

{f ∈ [1..N ⟶ 1..N] : ∀ y ∈ 1..N : ∃ x ∈ 1..N : f[x]=y}

could actually be written as

[1..N ⟶ 1..N]

Most CS people, myself included, will not trust their own senses when confronted
with such a tautological statement and will continue to search for meaning where
there is none, because it is so common to encounter cases in your everyday code
where something you thought was dead code actually has semantic meaning.

~~~
berbc
If by [1..N ⟶ 1..N] you mean the set of functions from [1..N] that have values
in [1..N], I think you are wrong. The formula in the post describes a subset,
the set of such functions that are surjective.

~~~
j-pb
Yeah, I think you're right. In this context I read the case of [ A -> B ] as a
function from coimage A to image B, which are all bijective. But it is meant as
the more standard domain A to codomain B.

His natural language description is somewhat lacking in that respect:

> I had already explained that [1..N ⟶ 1..N] is the set of functions that map
> the set 1..N of integers from 1 through N into itself,

~~~
sampo
Well, he did say "into itself", not "onto itself".

------
johnwheeler
This feels like naivete and arrogance of the question asker. Why should a
group of computer scientists understand a particular equation? What does it
mean if they don't?

~~~
dllthomas
> Why should a group of computer scientists understand a particular equation?

Because it's incredibly simple, and expresses concepts they should already be
familiar with as they're both incredibly basic and plenty relevant to CS. The
notion of "one-to-one" and "onto" functions is taught in high-school algebra!

> What does it mean if they don't?

It mostly means they've not been using this notation, which is a problem not
because the notation is necessarily perfect (although I dispute those saying
what's used here is particularly arcane - this is _super basic stuff_ ) but
because there is a whole lot of relevant knowledge encoded that way (including
plenty of CS papers!) and clearly they haven't been accessing it.

------
kaa2102
I took a few CS classes while studying engineering as an undergrad. I also have a
grad degree in Industrial Engineering from Columbia U. Nowadays, I develop
software and apps.

The training in mathematics, operations research, and mathematical modeling
has been extremely helpful in the way I approach and solve computer science
problems. It has helped with everything from smart SQL queries to developing
scalable software and enterprise architecture for software.

------
KKKKkkkk1
This post requires some context. This is not about math, which is the field
that studies things like stochastic PDEs and elliptic curves. It's about the
notation of predicate calculus, which is a somewhat obscure field of logic
that used to be in vogue in the early 20th century, and that programmers
fixate on for some strange reason.

------
thiht
I'm a computer scientist in France and I think it's weird not to know these
symbols. Don't you learn how to use them in other countries? I mean, I learnt
how to read this kind of formula in high school and used them intensively
during my studies.

------
intralizee
Probably because some don't need to and don't want to spend more of their life
on it.

------
eli_gottlieb
Wait. That's just the set of all functions from naturals to naturals whose
image covers the entire codomain.

Mind, set-builder notation is a horrible crock, since you wind up not being
able to say "image covers the entire codomain" in plain language.

~~~
Smaug123
We like having small sets of primitive notions ("exists", "for all", "set").
The reason we have words like "surjection" is to abbreviate things which are
otherwise unwieldy to state precisely using our small set of primitives. It's
analogous to the reason we might program in high-level languages and then
compile down to machine code: because the low-level code is much easier to
execute directly (analogously, the primitives "forall", "exists", "set" are
extremely intuitive and we're nearly all sure that they are non-
contradictory).

------
ryanSrich
I wasn't a CS student, but the story at my college seemed to be that if you
liked math then you were a CS major, if you didn't then you were an SE major.

~~~
droidist2
I wish my college had SE <sigh>

------
wickedjust
Such an arrogant thing to say about computer scientists. It's like saying why
don't mathematicians know how to write an app?

------
wtfishackernews
My university teaches discrete math, which covers this formula, in the first
semester of computer science. Is this not standard?

------
trelliscoded
I understood it. They covered this stuff in the discrete math class I took at
community college.

------
lithos
That's not testing for math, it's testing for notation.

------
dekhn
Odd comment from Lamport. He wasn't referring to learning math, just to learning
a specific type of math notation.

------
paulmuaddib
Long story short : but they do.

------
paulmuaddib
Long story short : they do.

------
pera
But why did he decide to use f[x] instead of just the very common f x?

------
joelbondurant
LOL rrrright, why don't poly-sci majors learn math?

------
kentf
We do

------
kapitza
I suspect most of the CS researchers in the room were, in a sense, lying. I'm
going to guess (a) they could all understand the notation if they tried; (b)
they were in fact taught the notation in some undergraduate class, math camp,
etc, etc; and (c) what happened is that they _intentionally forgot it_.

At least, this is the case for me! I think one of the dirty secrets about the
CS distaste for many areas of math, including conceptual areas like category
theory that are in fact quite closely related to computing and directly
applicable there, is that when judged as CS, they don't seem like very good
CS.

For example, a programming-language designer who uses symbols that aren't on
any keyboard deserves to be laughed out of the room. What are these weird
19th-century squiggles? I see what you're doing, sort of, but why are you
doing it?

This is obviously a petty example. But computer scientists, at least ones with
a certain kind of systems background, have put a lot of work into designing
abstract logical structures for human beings to use. They've learned a lot of
rules about how to do this well.

The corpus of pre-CS math that gets applied to CS doesn't seem to care about
any of these rules. It takes a lot of work to come up with a CS aesthetic, and
when you have one, your brain only has room for "good" systems that play by
your rules. If a system doesn't map to your rules, it's almost physically
painful to have to contemplate it.

Practicality is not quite the issue. TLA+ seems to be a very useful thing and
Lamport has produced a large, large, body of extremely useful work. At the
same time, I can't help feeling it would be neat if someone did for TLA+ what
Ousterhout did for Paxos.

~~~
nabla9
> ...at least ones with a certain kind of systems background, have put a lot of work
> into designing abstract logical structures for human beings to use. They've
> learned a lot of rules about how to do this well.

This is very arrogant and wrong. The exact opposite is true. Mathematicians
have spent hundreds of years streamlining notations that can express complex
things well.

~~~
corndoge
I am a mathematics student working as a systems software engineer. I find
mathematical notation excessively dense and tedious to interpret.

Given the expression:

{f ∈ [1..N ⟶ 1..N] : ∀ y ∈ 1..N : ∃ x ∈ 1..N : f[x]=y}

I can instantly read it in my head as:

The set of all f where f is a function mapping 1..N to 1..N such that for all
y in 1..N there exists an x in 1..N such that f[x] = y.

One might argue that the ability to express such a long definition in such a
compact form is a benefit, but I don't think so. In this particular case the
definition is so trivial that it's easy to understand what is being talked
about. Anything more complicated than this quickly becomes tedious and boring.
I would prefer that mathematicians simply used longer, more verbose, but more
clear and explanatory names for quantities and concepts. Just my opinion,
though.

~~~
Cyph0n
The notation and its meaning are both pretty clear, but I don't see how it
converts to "the set of all permutations from 1 to N". And permutations of
what? The integers 1 to N?

~~~
wott
I guess that's why he talks about applications and not functions, so that
guarantees there is a single _x_ for each _y_.

~~~
mannykannot
I think he was just clarifying a point about the notation.

Given that each f is a function, mapping each member of its domain to one
member of its range, and that for each member of the range, there is a member
of the domain that the given function maps to it, and that the domain and
range are the same size, then there can be no member of the domain not mapped
to some member of the range, and each member of the domain must be mapped to a
distinct member of the range (or else there would not be enough members of the
domain to cover all the members of the range), so, with the domain and range
being the same set, each of the given functions is a permutation of the
domain, and collectively they must be all of them.

This is a verbose version of the argument for this being the set the author
says it is. Arguably, it would be easier to read if I had used x for 'member
of the domain', y for 'member of the range' and f for 'the given function'.

------
GFK_of_xmaspast
As a mathematician, I understood Lamport's notation, but that doesn't mean
it's good notation.

For that audience, why do you even need it? Just say "the set of all
permutations of N elements" or "the symmetric group S_N" if you're nasty.

------
douche
I think I remember this particular notation, but only because I beat it into
my brain trying very hard to not tank my GPA in a ridiculously useless
discrete mathematics class that was required for me to finish my CS minor.

------
sheeshkebab
b/c any math I learned was long forgotten while I was figuring out how to
compile JavaScript... duh

~~~
loukrazy
You figured out how to compile JavaScript?

~~~
dahart
I figured out how to compile JavaScript
[https://developers.google.com/closure/compiler/](https://developers.google.com/closure/compiler/)

