
Math is an insurance policy - ingve
https://bartoszmilewski.com/2020/02/24/math-is-your-insurance-policy/
======
dcolkitt
To paraphrase what's said about fusion energy: Haskell is the language of the
future and always will be.

I love Haskell, but there's really not a single shred of evidence that
programming's moving towards high-level abstractions like category theory. The
reality is that 99% of working developers are not implementing complex
algorithms or pushing the frontier of computational possibility. The vast
majority are delivering wet, messy business logic under tight constraints,
ambiguously defined criteria, and rapidly changing organizational
requirements.

Far more important than writing the purest or most elegant code are "soft
skills": the ability to work efficiently and effectively within the broader
organization. Can you communicate effectively with non-technical people, do
you write good documentation, can you work with a team, can you accurately
estimate and deliver on schedules, do you prioritize effectively, are you
rigorously focused on delivering business value, do you understand the broader
corporate strategy?

At the end of the day, senior management doesn't care whether the codebase is
written in the purest, most abstracted Haskell or
EnterpriseAbstractFactoryJava. They care about meeting the organizational
objectives on time, on budget and with minimal risk. The way to achieve that
is to hire pragmatic, goal-oriented people. (Or at the very least put them in
charge.) And that group rarely intersects with the type of people fascinated
by the mathematical properties of the type system.

~~~
darzu
I would argue that Haskell lets you respond to nebulous requirements better
than almost any other language, because refactoring is so much easier and
safer.

I self identify much more with being pragmatic and goal-oriented than math-y
and perfectionist, and I think for almost every programming domain we'd
achieve our goals faster by moving more towards having strong static
guarantees in an ergonomic, expressive language.

Finally, I would also put forth that debugging state corruption or randomly
failing assertions is much harder than learning to avoid side-effects and
leaning into immutability.

~~~
KurtMueller
Alexis King, a professional Haskell coder, recently wrote an article on
exactly that:

[https://lexi-lambda.github.io/blog/2020/01/19/no-dynamic-
typ...](https://lexi-lambda.github.io/blog/2020/01/19/no-dynamic-type-systems-
are-not-inherently-more-open/)

~~~
mumblemumble
It's a very insightful article. It's clear and precise and lucidly written,
and I'd recommend anyone who cares about these things read it.

That said, I found the article unconvincing. The author's writing is perhaps
_too_ precise, to the point where the forest has been lost amidst the trees. I
can draw the main reasons why I'm losing my religion w/r/t static typing
straight from the appendix portion of the article itself:

> _not only can statically-typed languages support structural typing, many
> dynamically-typed languages also support nominal typing. These axes have
> historically loosely correlated, but they are theoretically orthogonal_

The fact that they're theoretically orthogonal is small consolation. They
_are_ loosely correlated, but the looseness doesn't happen in a way that's
particularly useful to me. The fact of the matter is, the only languages I'm
aware of that have decent ergonomics for structural typing are either
dynamically typed, or constrained to some fairly specific niches. If I want
structural typing in a general-purpose language, I'm kind of stuck with
Clojure or Python. The list of suggested languages that comes a paragraph
later fails to disabuse me of that notion. As does this observation:

> _all mainstream, statically-typed OOP languages are even more nominal than
> Haskell!_

~~~
jdc
Re structural typing, have you tried OCaml/ReasonML? If so what was your
experience like?

~~~
mperktold
Another option is TypeScript, which is well suited for building SPA web apps.

------
juped
>In mathematics, a monad is defined as a monoid in the category of
endofunctors.

Sure. But like everything in math what it really is is a thing that the
mathematician has come to know through application of effort. The definition
is a portion of a cage built to hold the concept in place so the next
mathematician can come along and expend effort to know it.

You've probably done this yourself, just maybe not that deep in the
abstraction tower (though you'd be surprised how abstract some everyday things
can be). For example, you may have internalized the fact about division of
integers that every integer has a unique prime factorization. This is an
important part of seeing what multiplication is, but it's not part of the
tower of abstraction upon which multiplication is built.

Mathematicians tend to end up being unconscious or conscious Platonists
because mathematicians are trained to see the mathematics itself.
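
For readers without the category theory background, here is a very loose,
Haskell-flavoured sketch of the shape that slogan is pointing at (illustrative
class names, not the formal definition):

    -- A monoid: an identity element plus an associative binary operation.
    class Monoid' m where
      unit    :: m
      combine :: m -> m -> m        -- associative, with `unit` as its identity

    -- A monad presented with `join` instead of `>>=`: the same shape, one
    -- level up, where the "elements" are endofunctors, `return` plays the
    -- role of the identity, and `join` (collapsing the composition m . m
    -- back down to m) plays the role of the binary operation.
    class Functor m => Monad' m where
      return' :: a -> m a           -- the role of `unit`
      join'   :: m (m a) -> m a     -- the role of `combine`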

~~~
manthideaal
The definition of a monoid here is not the usual one; it is a new definition
for the special case of a strict category, as defined in Mac Lane's book
Categories for the Working Mathematician. Since you may be thinking about
composition of endofunctors and the unit endofunctor, you get a confused
picture. "A monad is a monoid in the category of endofunctors" is a way to
show that you can confuse people by using two different definitions of a
familiar concept, where the two give different results. I got this from (1),
look for "main confusion": a monoid in the category of endofunctors is defined
in a new way, and it is not the expected monoid on the set of all
endofunctors.

The definition of a monoid in a monoidal category is on page 166 and monads in
a category are on page 133. As a math person I know what a monoid is (the
usual term), but I did not know what a monoid in a monoidal category is (well,
I know now, because it's on page 166 of the book).

(1) [https://stackoverflow.com/questions/3870088/a-monad-is-
just-...](https://stackoverflow.com/questions/3870088/a-monad-is-just-a-
monoid-in-the-category-of-endofunctors-whats-the-problem)

~~~
manthideaal
I wonder what's the point of using such a phrase, it doesn't help you to grok
the concept of monad. It can help you to know that someone has given a new
definition to sound cute, shame on them. By the way I admire MacLane as a math
person, but people seem to use category theory to sell snake oil. Category
theory is a tool to give names to some diagrams and properties that are used
frequently to avoid repeating the same argument in proofs, is a dry (don't
repeat yourself, as in the ruby motto). If you are an expert in category
theory you can give short proofs of known facts. Category theory is like pipes
in unix, you pipe diagrams to show properties. Grep, sed, awk analoges are
functors, categories and natural transformations. The input is a collection of
diagrams and the output is a new diagram that has a universal property and it
receives a name and a collection of properties that are supposed to be useful
to proof new theorems.

~~~
dddbbb
The phrase appears six chapters into a graduate mathematics text on category
theory. If one reads the preceding chapters, it is a useful but pithy
explanation of what a monad is, using terms which have all already been
covered. Its use outside of that context is basically just a joke about the
Haskell community being overly mathematical.

~~~
manthideaal
I agree with you, the phrase should be preceded with: Caution, this is a math
joke, don't lose sleep trying to grok it.

------
bovermyer
Math is not the insurance policy. Your social skills and your ability to
continually "sell" your worth to others are the insurance policy.

~~~
mathgladiator
It's more like math is the underwriter, and that one counts for more than soft
skills.

------
state_less
I always figured our domain (computation) is so vast that once programming is
automated, so too are all the other jobs. If we get an AGI that can program
our programs and learn to learn, it won't be hard to have it write a program
to make sales calls, or gather user feedback, or build buzz for the company.

So don't worry, when it happens we can all rest because there won't be any
need for our labor anyway.

~~~
qqii
What about the pix2code example? It's conceivable that domain-specific
automation will reduce the number of jobs that exist.

~~~
state_less
Assuming pix2code really did automate away traditional UI work, the
development effort would then move to the next subdomain (e.g. a sales bot,
etc).

I suspect as long as you're willing to learn and are competent, you should
have a job until the final effort of a general AI self-learning programmer.

~~~
qqii
I'm also sceptical about pix2code, but the point is that domain-specific
automation could conceivably reduce the number of overall jobs. The cost of
switching domains is also non-negligible.

The question with domain-specific automation (and one of my takeaways from the
article) isn't whether or not you'll have a job, but whether the effort you put
into getting your current job is worthwhile.

~~~
state_less
I think the total number of jobs (programming + other) that humans can do
economically might go down over time. Programmers can usually pick up the next
ambitious project (e.g. a sales bot) when the old domain is no longer
profitable.

I think Bartosz is saying that math and category theory are useful to learn
because they work in a number of subdomains. They can help keep the domain
switching cost down somewhat.

------
yongjik
It seems rather funny to write a treatise, with quicksort being a central
example, where the shown code requires O(N) temporary space.

C/C++ programmers might not be good at category theory, but no one worthy of
their salary would walk past a quicksort routine with O(N) memory without
stopping and asking "Wait, what?"

Seriously, I remember when _this_ used to be the first Haskell code shown on
haskell.org homepage, and I had to stop and wonder if these folks were just
trolling or if they were actually _that_ oblivious to performance. If you
wanted to promote Haskell, you could hardly have chosen a worse piece of code.
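
For context, the code in question is presumably something like the familiar
two-line list version (a sketch, not necessarily the exact snippet that was on
the homepage):

    -- The textbook Haskell "quicksort": concise, but it builds fresh lists at
    -- every level of recursion instead of partitioning in place, which is
    -- where the O(N) temporary space comes from.
    qsort :: Ord a => [a] -> [a]
    qsort []     = []
    qsort (p:xs) = qsort [x | x <- xs, x < p] ++ [p] ++ qsort [x | x <- xs, x >= p]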

~~~
Cybiote
The article addresses that issue; its core thesis is that AI supports
declarative programming. One of the author's main points is that a
sufficiently intelligent compiler would rewrite code to be better optimized
and have better computational complexity, eliminating low level C++
programming type jobs.

Category theory is there to support the creation of specifications that are
both easy for a human who knows category theory to understand and easy for the
AI to optimize, compared to the original quicksort example.

The author also thinks many HTML and JS type jobs will also disappear. What I
am skeptical of is that while Go, Chess and Jeopardy are challenging, they are
closed domains. I think people underestimate just how much complexity building
CRUD apps involves. Just like we underestimated how difficult walking to a
cupboard to retrieve a mug would be for AI.

------
zwieback
_I’m sorry to say that, but C and C++ programmers will have to go._

I've heard that since I started my career in the early 90s and it's always
interesting to compare what reasons people give why low-level languages are
going to go away really soon now.

Other than that the article makes a lot of good points.

~~~
OrangeMango
> I've heard that since I started my career in the early 90s and it's always
> interesting to compare what reasons people give why low-level languages are
> going to go away really soon now.

The main reason: They'll go away when the graybeards retire and the companies
fail without them.

For the last 5 years, I've been babysitting a C-based system that generates
more than 70% of our corporate profit. The youngsters working on the 7+ year-
old project to replace it are on the 3rd refactoring of the 2nd programming
language codebase and the "architect" is already talking about rewriting in a
new language for "productivity improvements." For the 4th year in a row, the
first of five essential elements of the new system will go on line Next Year,
leaving still more years of work until we can shut the old system down and let
the C programmers go (except we all know more new languages than the new guys,
as we all have loads of free time - our old system is quite reliable). Without
a substantial payment directly into my kids' trust fund, I have no intention
of delaying retirement by a day - I have way too many side projects to
explore!

------
nitwit005
> The next job on the chopping block, in my opinion, is that of a human
> optimizer. In fact the only reason it hasn’t been eliminated yet is
> economical. It’s still cheaper to hire people to optimize code than it is to
> invest in the necessary infrastructure.

What would that infrastructure be, exactly? You can write a program that
performs any mathematical operation. A program that can handle any input
program, and optimize it as well as any human could, would need to be an AI
with at least human intelligence, and a deep understanding of all known
mathematics.

But then we're told that mathematical knowledge is a defense against
automation? One wonders why we don't just hand all the math off to this
optimization AI.

------
lordleft
If the author is right, what subfields of mathematics will likely be the most
salient? Linear Algebra? Stats? Category Theory? Isn't it possible that the
species of math you invest in will turn out to not be so valuable in an AI-
driven future? Or is the hope that a baseline mathematical fluency will help
engineers pivot no matter what?

~~~
mamon
You cannot really predict that. Number theory was once considered "pure
mathematics" in a sense that one could not even imagine practical applications
for checking if a number is a prime or not. And then someone invented
cryptography...

EDIT: For all those nitpicking downvoters: yes, I meant public key
cryptography.

~~~
lou1306
To be fair, cryptography and number theory have coexisted without much overlap
for centuries (millennia?). Then, the rise of mechanized cryptanalysis forced
us to look for hard-to-break ways to encrypt stuff, and prime factorization
was a very good candidate.

------
melling
Data science seems like a gateway drug to doing more math.

I’ve been working through Joel Grus’ Data Science from Scratch,

[https://www.amazon.com/Data-Science-Scratch-Principles-
Pytho...](https://www.amazon.com/Data-Science-Scratch-Principles-
Python/dp/1492041130/)

rewriting the Python examples in Swift:

[https://github.com/melling/data-science-from-scratch-
swift](https://github.com/melling/data-science-from-scratch-swift)

------
irrational
If the author is right, how long will it take to get there? I've been working
in the field for 25 years. Frankly what I do today isn't that much different
than what I did 25 years ago and we are busy trying to hire more people who do
the same types of things. I'll probably retire in the next 15-20 years. Will
this wholesale change proposed by the author take place within that time
frame? Based on past experience I doubt it.

~~~
Traster
To add to this: In my career I've personally witnessed so many instances of
people saying "Well, this can all be automated..." and then starting enormous
projects to automate a process that is currently the sole focus of literally
hundreds of engineers. As if the reason there are hundreds of engineers doing
this job is that all those guys are just idiots. The result is these massively
ambitious projects that promise senior management the world, drag on forever,
often justifying themselves with hand-optimized toy examples to show progress.
In the end the actual problem is very obviously intractable and years of
progress are wasted.

It might be true that eventually we can solve "implementing cross-platform UIs
in the general case" but we could also be literally 100 years away from
achieving that, and in the mean time the fact that it's theoretically possible
is worthless.

------
jonas_kgomo
Interesting. If languages like C and C++ are really first to be vulnerable,
why are they the ones used for building AI (computer vision) instead of
category theoretic languages?

~~~
beigeoak
The author is saying that the main reason to use something like C++ instead of
Python or even Java is for speed. He assumes that optimizing for speed is a
problem that can fit neatly under the machine learning domain.

If a machine can optimize performance better than humans, then it would not
make sense to use C++, in the context of performance.

~~~
kevin_thibedeau
We have JIT compilers that can optimize "better" than static. People are still
using the latter because there are other benefits that nobody from the ivory
tower can beat.

------
imjustsaying
When someone asks me, "Why don't you have X insurance?"

My reply is, "Because I can do math."

Was surprised to find the article wasn't about that.

------
kebos
I think this article gives AI more credit than it has demonstrated at present,
and it simplifies the examples.

It's useful I expect for quick fixes/guidance like the loop example.

For example, on improving performance: these days that often needs a holistic
architectural re-think - surely a creative process? The idea of optimization
revolving around a loop seems very distant from the heterogeneous,
asynchronous behavior of modern hardware.

If AI really does start to solve things in a more 'general' way, not just a
bit of object recognition here and there, won't software developers
incorporate it into their process instead, enabling even more sophisticated
software to be written?

I think that is the key to longevity in software development as a career. The
compiler didn't remove the assembly programmer; it simply made a whole new
level of software complexity feasible.

------
ForHackernews
This article purports to be talking about math but then goes off down some
insular functional programming rabbit hole.

For what it's worth, I can easily accept that Haskell programmers' career
prospects will not be altered one whit by improvements in optimisation and
automation...

P.S. Haskell is not math:
[https://dl.acm.org/doi/10.1017/S0956796808007004](https://dl.acm.org/doi/10.1017/S0956796808007004)
[https://www.cs.hmc.edu/~oneill/papers/Sieve-
JFP.pdf](https://www.cs.hmc.edu/~oneill/papers/Sieve-JFP.pdf)

~~~
dddbbb
The paper you linked shows that one Haskell implementation does not exactly
correspond to a specific algorithm, then gives an alternative definition which
does correspond. What does that have to do with the statement 'Haskell is not
math'?

~~~
ForHackernews
It's a salient example of functional programming pretending to demonstrate an
elegant expression of pure mathematics in code, while actually being nothing
of the sort.

------
pietroppeter
> I can’t help but see parallels with Ancient Greece. The Ancient Greeks made
> tremendous breakthroughs in philosophy and mathematics–just think about
> Plato, Socrates, Euclid, or Pythagoras–but they had no technology to speak
> of.

I can’t help but remark how completely untrue this sounds once you have read
the magnificent and forgotten book by Lucio Russo, see [1]. The Greeks of
centuries III-II BC had plenty of tech, and they were much more modern than
the new scientists of centuries XVI-XVII.

[1]
[https://news.ycombinator.com/item?id=22409445](https://news.ycombinator.com/item?id=22409445)

------
eindiran
In case anyone is interested, the author also wrote the excellent "Category
Theory for Programmers", available in print or online:

[https://bartoszmilewski.com/2014/10/28/category-theory-
for-p...](https://bartoszmilewski.com/2014/10/28/category-theory-for-
programmers-the-preface/)

If you're interested in what Category Theory is about, it's a great place to
start for people with a background in programming but not necessarily
mathematics.

------
jart
I agree with the OP that junk programming will likely die. But so will junk
math. I don't think programming is going to be automated anytime soon, but I'd
imagine that the inputs of an optimizer capable of doing so would look more
like vagueish business goals and policy constraints that businessmen /
politicians like to write, rather than some functional monad, which I guess is
why they still continue to be the master character classes of humanity.

~~~
mattkrause
I'm skeptical of these sorts of claims for two reasons.

1\. Creativity. What objective function do you optimize to write the first
Super Mario Bros? Can you then get from there to RocketLeague or Braid? (I
think not).

2\. Imagine that you somehow obtain a magical technology that takes in a
natural language spec and emits highly-optimized code, just like a human.
Who's writing that spec?

For interactions with actual humans, there's usually a professional drafter
(lawyer, contracting officer, etc) carefully specifying what the other parties
should and should not do. We'd presumably need the same thing even with some
fancy AGI. This is perhaps a bit different from worrying about whether foo()
returns a Tuple or List, but it's not totally dissimilar from programming.

~~~
qqii
I agree with your first point; such an objective function would have to
optimise something extremely abstract, like player enjoyment.

As for the second, the point is not to have a specification on the human side.
Most of the time when communicating with others we don't need lawyers, and
spoken contracts are valid and binding by law. Even with lawyers, contracts
can be disputed in court, as there is not a formal definition of every exact
scenario.

Assuming we had the power to do (1), all we'd need in (2) is something that
doesn't produce unexpected outcomes.

~~~
mattkrause
If you don't have some kind of specification, how are you going to control
what you get?

It's true that you can turn to a teammate and say "hey, can you write me a
pipeline to import data from JSON files?" and you'll usually get something
usable. However, you and your teammates have shared goals and background
information about your particular project and the world at large, etc.

Projects go off the rails all the time because the generally intelligent
(allegedly, anyway) humans don't share these things. Right now, the front page
has an article called "Offshoring roulette" about this. If you don't want to
click through, here's my story about a contract programmer working in the lab.
I asked him to look into a problem: the software running our experiments
crashed after a certain number of events occurred. It turned out that the
events were being written into a fixed-size buffer, which was overflowing.
This _could_ have been fixed in many ways (flush it to disk more often, record
events in groups, resize the buffer). However, he chose to make the entire
saving function into a no-op. This quickly "fixed" the problem--but imagine my
delight when the next few runs contained no data whatsoever. In retrospect,
although this guy had a PhD, he wasn't particularly interested in the broader
context, namely that it crashed _while collecting data that I wanted._

An optimizer is going to take all kinds of crazy shortcuts like that unless
it's somehow constrained by the spec. You could certainly imagine building
lots of "do-what-I-mean" constraints into this optimizer but that requires
even more magic.

------
jerf
I've been hearing this for 25 years, since I was just starting college.

We are _marginally_ closer to it now technologically, and probably actually
significantly (as in, "more than marginally") farther _away_ from it overall,
because the explosion in programming use cases has greatly exceeded our
improvement in the ability to automate.

Even the _given example_ is beyond our reach right now! We already can't
automate that simple specification of a sorting algorithm into an efficient
one. How are we supposed to automate the creation of a graphics card driver
with AI? We'd have much better leverage just applying _better software
engineering_ to that task, and I say that without particularly criticizing the
people doing that work or anything... I just guarantee that they are not so
well greased up with engineering that there's no improvement they can make
bigger than "let's try to throw AI at it".

There are still places where category theory is a good idea. A lot of our
distributed systems would be better off if someone was thinking in terms of
CRDTs or lattices or something. But we're farther than ever from automating
our computing tasks.

------
seibelj
Self driving cars are not coming. Jobs are not being eliminated en masse. This
is hyperventilation and is not borne out by history, facts, or reality. Don't
fall for the scary AI nonsense.

[https://medium.com/@seibelj/the-artificial-intelligence-
scam...](https://medium.com/@seibelj/the-artificial-intelligence-scam-is-
imploding-34b156c3537e)

------
ukj
I am not convinced. Most mathematicians embrace denotational semantics, but
most engineers intuitively default to operational semantics, because
ultimately, we need to make choices/trade-offs that have consequences which
are too complex for any optimiser which lacks a holistic view of the system
our system is part of.

Operational semantics are actually a higher order logic than Category theory
when expressed in a Geometry of Interaction (GoI) grammar.

[https://ncatlab.org/nlab/show/Geometry+of+Interaction](https://ncatlab.org/nlab/show/Geometry+of+Interaction)

I don't have the fancy nomenclature to utter the Mathematical phrase
"endomorphism on the object A⊸B.", but I intuitively understand what
operational semantics are and why they are useful to me. When a programming
language/grammar comes along which implements most of the design-patterns I
need/use to turn my intuitions into _behavioural_ specifications - I am
still going to be more productive than a Mathematician because I will not have
to pay the (upfront cost) of learning a denotational vocabulary. The compiler
will do it for me, right?

In the words of the late Ed Nelson: The dwelling place of meaning is syntax;
semantics is the home of illusion.

Languages are human interfaces. Computer languages are better interfaces than
Mathematics because we design (and evolve) them to be usable by the average
human, not Mathematicians. Good interfaces lower the barrier to entry by
allowing plebs like myself to stand on the shoulder of giants. Mathematics
expects everybody to be a giant.

And in so far as dealing with ambiguity goes, Programming languages are way
less ambiguous than Mathematical notation!

Ink on paper has no source code - no context.

------
sapientiae3
> As long as the dumb AI is unable to guess our wishes, there will be a need
> to specify them using a precise language. We already have such language,
> it’s called math. The advantage of math is that it was invented for humans,
> not for machines. It solves the basic problem of formalizing our thought
> process, so it can be reliably transmitted and verified

The same is true of most programming languages - they were all made for
humans. Math has the advantage of being able to prove formally that it solves the
problem, but that isn’t a requirement for most software.

> If we tried to express the same ideas in C++, we would very quickly get
> completely lost.

The same is true for many things that you can express in C++, but not in Math.

Math used this way is essentially just another programming language - with
massive advantages in some circumstances, and massive disadvantages in others
- I can’t find any argument in this article as to why it is a better bet than
any other language.

------
iainmerrick
It’s a little weird for this article to focus on qsort, as surely the key
point is that the algorithm doesn’t have to be stated at all, just the
requirements -- that the output is sorted.

This doesn’t necessarily invalidate the argument, but it means all the
concrete examples seem rather beside the point.
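
To illustrate: "just the requirements" could be as little as two checkable
properties, with the algorithm left entirely unstated (a sketch in Haskell;
the names are made up for illustration):

    import Data.List (sort)

    -- A requirements-only view of sorting: say what the output must satisfy,
    -- not how to compute it.
    isOrdered :: Ord a => [a] -> Bool
    isOrdered xs = and (zipWith (<=) xs (drop 1 xs))

    sameElements :: Ord a => [a] -> [a] -> Bool
    sameElements xs ys = sort xs == sort ys   -- multiset equality via a reference sort

    -- Any function satisfying both properties counts as "a sort"; choosing an
    -- efficient implementation would be left to the tool.
    isValidSort :: Ord a => ([a] -> [a]) -> [a] -> Bool
    isValidSort f xs = isOrdered (f xs) && sameElements (f xs) xs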

------
Tehchops
Lines like this:

> The AI will eventually be able to implement any reasonable program, as long
> as it gets a precise enough specification

make me less worried about the job apocalypse.

It feels to me like getting good, consistent specifications for software is
incredibly hard, particularly if some unbound set of humans is meant to
interact with it.

Delivering domain specifications for even simplistic "things" is incredibly
hard:

[https://www.youtube.com/watch?v=vNOTDn3D_RI&t=2782s](https://www.youtube.com/watch?v=vNOTDn3D_RI&t=2782s)

------
collyw
"So any time you get bored with your work, take note: you are probably doing
something that a computer could do better."

So can I send the computer to the meetings for me while I carry on coding?

------
asow92
> One such task is the implementation of user interfaces. All this code that’s
> behind various buttons, input fields, sliders, etc., is pretty much
> standard. Granted, you have to put a lot of effort to make the code portable
> to a myriad of platforms: various desktops, web browsers, phones, watches,
> fridges, etc. But that’s exactly the kind of expertise that is easily
> codified.

Good luck. There will always be a need for bespoke UI. Just as there's always
a need for bespoke anything that can otherwise be mass produced.

~~~
asow92
[https://tonsky.me/blog/swiftui/](https://tonsky.me/blog/swiftui/)

------
xorand
I don't quite understand this conflation of mathematics with category theory.
This obsession of some programmers with mathematics, actually with a tiny part
of it which is category theory, looks very strange to me. By chance, there is this
recent comment of Scott Aaronson, a strong mathematician who is a rising star
in quantum computing, which contains what is probably a more balanced view.

I quote from the source [0] the relevant part: "With some things I don’t
understand well (nuclear physics, representation theory), there are
nevertheless short “certificates / NP witnesses of importance” that prove to
me that the effort to understand them would be amply repaid. [...] And then,
alas, there are bodies of thought for which I’ve found neither certificates or
anti-certificates—like category theory, or programming language semantics
[...] For those I simply wish the theorizers well, and wait around for someone
who will show me why I can’t not study what they found."

[0]
[https://www.scottaaronson.com/blog/?p=4616#comment-1830447](https://www.scottaaronson.com/blog/?p=4616#comment-1830447)

------
skybrian
We already do automate jobs by improving our development environments. A
better language can eliminate important classes of bugs, and once a reliable
library is available, you don't have to write it yourself. The tools do get
better, gradually.

This doesn't seem to be happening particularly quickly, though, and it's not
clear that it's accelerating. Setting standards is a social process and it
often takes years for new language changes to get widely deployed.

Another thing that slows things down is that every so often some of us decide
what we have is terrible and start over with a new language, resulting in all
the development tools and libraries having to be rebuilt anew, and hopefully
better.

I expect machine learning will result in nicer tools too, but existing
standards are entrenched and not that easy to replace, even when they are far
from state-of-the-art.

------
ndonnellan
> You might think that programmers are expensive–the salaries of programmers
> are quite respectable in comparison to other industries. But if this were
> true, a lot more effort would go into improving programmers’ productivity,
> in particular in creating better tools.

...

> I am immensely impressed with the progress companies like Google or IBM made
> in playing go, chess, and Jeopardy, but I keep asking myself, why don’t they
> invest all this effort in programming technology?

It seems like Google does invest in programming technology, but a lot of that
tech is internal. Google spends an order of magnitude more money on employee
productivity than any other job I've worked at. But that's probably because at
previous jobs we spent <<1% of salary on tools and didn't have economies of
scale.

~~~
gowld
Programming productivity gets absolutely massive investment -- all of it
secret, proprietary, and open source.

The reason it seems otherwise is that software has an infinite appetite for
increased productivity, since it lacks the friction and energy costs commonly
seen in almost every other endeavor. There are essentially two
throttles on the exponential improvement in computing: (1) the speed of
electromagnetic objects, and (2) the speed of humans to learn new things
recently invented and use them to invent newer things.

------
tcldr
It's interesting that he thinks UI will be 'the first to be automated,' but in
my experience the variation and creativity, constraints and rules – not to
mention the animations – that emerge from a UI design are so varied and
complex that although this seems intuitively correct, history has told us it's
anything but.

Arguably the best way to describe an interface is through a declarative
programming language, and unless we're saying that 'creativity in UI design is
superfluous to our future requirements' it seems like this will remain the
case for the foreseeable future.

~~~
ukj
+10000

Mathematics is a user interface. Black ink on white paper is the medium which
we've been using to communicate complex concepts/ideas for thousands of years.

In 2020 there are better communication mediums. Interactive mediums.

------
cryptozeus
"One such task is the implementation of user interfaces." Clearly author has
not used image to code tools before. The amount of junk code that is created
through these tools is unusable in production.

~~~
mamon
For now. But the very fact that the code compiles and does its basic tasks is
already an achievement.

------
maerF0x0
> So the programmers of the future will stop telling the computer how to
> perform a given task; rather they will specify what to do. In other words,
> declarative programming will overtake imperative programming.

This has been happening for a very long time with abstraction. Layers upon
layers of computing "just works" without the programmer having to think about
it or why. Things like gRPC/OpenAPI etc. make it conceivable that one day a
product manager will just need to write the schema and methods and hit "Deploy
to AWS".

------
philshem
> Experience tells us that it’s the boring menial jobs that get automated
> first.

I doubt that the economic drivers of automation consider if the job is boring
or menial for the worker. I think this “experience” needs a source.

Furthermore, imagine working 40 years in a field you didn’t enjoy in order to
have an “insurance policy”. (Just do what you like.)

~~~
vb6sp6
> I doubt that the economic drivers of automation consider if the job is
> boring or menial for the worker. I think this “experience” needs a source

You are right, no one says "let's automate the boring jobs". What happens is
that these types of jobs naturally select themselves because they are
uncomplicated, which makes them prime targets for automation.

------
bobjones2013
The one-liner quicksort implementation in Haskell is only really possible
because a good chunk of the hard work is handled by a partition library
function... I'm not sure how different that is from just calling quicksort
from a library in any other high-level language.
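
Presumably it's something along these lines (a sketch using Data.List.partition;
the exact code may differ):

    import Data.List (partition)

    -- Sketch of the partition-based definition described above: `partition`
    -- does the element-splitting work in a single library call.
    qsort :: Ord a => [a] -> [a]
    qsort []     = []
    qsort (p:xs) = let (smaller, rest) = partition (< p) xs
                   in  qsort smaller ++ [p] ++ qsort rest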

~~~
gowld
Haskell quicksort isn't quicksort, it's an illustration of quicksort. It's not
an in-place constant-memory implementation.

See also the Haskell Sieve of Eratosthenes, which isn't a Sieve of
Eratosthenes, and is in fact even slower than naive trial division:
[https://www.cs.hmc.edu/~oneill/papers/Sieve-
JFP.pdf](https://www.cs.hmc.edu/~oneill/papers/Sieve-JFP.pdf)
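
For anyone who hasn't seen it, the folklore version is roughly this (a sketch;
the linked paper gives the precise analysis):

    -- The "sieve" from Haskell folklore (sketch). Each surviving candidate is
    -- filtered through a divisibility test for every prime already produced,
    -- so it behaves like trial division, not a true Sieve of Eratosthenes.
    primes :: [Integer]
    primes = sieve [2..]
      where sieve (p:xs) = p : sieve [x | x <- xs, x `mod` p /= 0]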

------
woah
It's odd that functional programming enthusiasts always love to put the mantle
of "math" on their favorite functional tidbits, while real mathematicians
write imperative, stateful (and often sloppy) Python.

~~~
shadowfox
You may well be talking about different kinds of math here.

Most of what functional programmers / theorists tend to care about are from
sub-fields like logic, abstract algebra and the like.

The "real mathematicians" that you mention are most likely working in other
fields like linear algebra, statistics, etc. While it is certainly possible
that logicians and algebraists work with sloppy Python (insofar as they write
_any_ code at all), I don't feel that it would be a great fit.

------
huherto
I agree that a lot of software will be done declaratively rather than
imperatively. But we will create DSLs for that. I don't think we will use math
for that. And somebody needs to create the DSLs.

------
carapace
So why hasn't Excel already eaten our lunches?

It's the most popular, successful programming language ever but it hasn't
taken the _whole_ market for programming, why not?

------
ditonal
I got interested in programming in 2000 and graduated with a CS degree in
2009. Throughout the 2000s (remember this was post-dot com) I got told
software was a bad field to get into for two reasons:

1) The advent of advanced tooling would make software engineers unnecessary.
Tools like Visual Basic and UML diagrams were the tip of the iceberg of
"visual coding" where a business person could just specify requirements and
the software would be automatically created.

2) Jobs were all going to go overseas. There is no reason to pay a programmer
in California 10x what you can pay someone in Mumbai. It's better to study
something like finance where the secret domain knowledge is held within the
chambers of ibankers in Manhattan. The future of tech startups is a few
product managers in NYC outsourcing the coding work to India.

There's also a 3rd argument that gets floated around that the field will be
oversaturated, "everyone" is learning to code, etc. In 2015 I asked a startup
for 125k and they told me, while that did seem to be market at the time, they
thought salaries had peaked and were going in the opposite direction. In 2020
you probably can't hire a bootcamp grad for 125k.

Since then the field has exploded and wages have gone off the charts, but I
still hear the same type of arguments over and over.

In 2020, you hear stuff like:

1) AI is the future, it's going to automate away all of the menial programming
jobs.

2) Bay Area is overcrowded, all the jobs are going remote. The future of tech
startups is a few marketing execs in SF outsourcing the menial tech work to
flyover states.

Personally, I didn't believe the hype then and don't believe the hype now. I
find it amusing that the author questions the wisdom of Google for not using
AI to automate development, as if that's never occurred to Google.

Of course the author is right that tech is a treadmill, new skills move into
the spotlight while old ones become outdated, although even then the mass of
legacy code means consultants will have lucrative jobs taking care of it for a
long time.

In my experience, new tooling always creates more software jobs, not less.
Software is not like high frequency trading, the more people that compete to
make software, the more people we seem to need to make software.

Sure, Bay Area is getting insanely expensive, but Google still tries to fit as
many as it can into Mountain View, Facebook still crams as many as it can into
Menlo Park. Every 3 months some VC will have the bright idea that, what if we
just pay people to do the lowly engineering work somewhere else and just have
execs in the Bay Area? And 99% of those startups go nowhere and Google is
worth a trillion dollars.

There is a very intuitive line of reasoning that goes: software can be done by
machines, and software can be done anywhere. There is a thread of truth to both
those narratives, but it leads people to very incorrect conclusions that there
won't be as many software jobs in the future and they will be done in cheap
places.

Despite all intuition, and all the logical narratives about costs and
automation, a group of people dedicated to technology in the same physical
room have defied that intuition. Virtually every extremely important software
company grew up on the West Coast of the USA, in some of the most expensive
places in the world, and the more software tools have improved the more
headcount these companies have had. So take all this intuition about the end
of days for software engineers with a huge grain of salt.

------
leftyted
This is historicism. For example, the idea that, because compilers have
eliminated most hand-optimization, that process will inexorably continue,
moving further up the chain of abstraction until "trivial programming" has
been eliminated. The author thinks he's derived some law of historical
progress. Along these lines, many smart people have predicted the "end of
labor" since at least Marx.

I think that predicting the future is hard and most people are better off
"optimizing local minima".

