I love Haskell, but there's really not a single shred of evidence that programming's moving towards high-level abstractions like category theory. The reality is that 99% of working developers are not implementing complex algorithms or pushing the frontier of computational possibility. The vast majority are delivering wet. messy business logic under tight constraints, ambiguously defined criteria, and rapidly changing organizational requirements.
Far more important than writing the purest or most elegant code are "soft skills": the ability to work efficiently and effectively within the broader organization. Can you communicate with non-technical people? Do you write good documentation? Can you work with a team? Can you accurately estimate and deliver on schedule? Do you prioritize effectively? Are you rigorously focused on delivering business value? Do you understand the broader corporate strategy?
At the end of the day, senior management doesn't care whether the codebase is written in the purest, most abstracted Haskell or EnterpriseAbstractFactoryJava. They care about meeting the organizational objectives on time, on budget and with minimal risk. The way to achieve that is to hire pragmatic, goal-oriented people. (Or at the very least put them in charge.) And that group rarely intersects with the type of people fascinated by the mathematical properties of the type system.
And herein lies the fundamental reason why most large aggregations of business software that I've seen in the wild are only strongly typed at about the same scale as the one on which the universe actually agrees with Deepak Chopra. Beyond that, it's all text.
It doesn't have to be full-blown Unix style the-howling-madness-of-Zalgo-is-held-back-by-sed-but-only-barely; it can be CSV or JSON or XML or whatever. And a lot of it's really just random magic strings. But it's still all stringly typed at human scales.
> full-blown Unix style the-howling-madness-of-Zalgo-is-held-back-by-sed-but-only-barely
I'm a very long-time UNIX guy, so I think I get the zen of what you're saying here, but I don't really get the Zalgo reference. Is this it? https://knowyourmeme.com/memes/zalgo Thanks!
The only barely held back part is presumably that when we notice a corner case, we might add a little more logic to handle that specific case, but we never really approach the goal of handling all possible inputs with valid or useful behavior. (We might even declare that a non-goal.)
I self identify much more with being pragmatic and goal-oriented than math-y and perfectionist, and I think for almost every programming domain we'd achieve our goals faster by moving more towards having strong static guarantees in an ergonomic, expressive language.
Finally, I would also put forth that debugging state corruption or randomly failing assertions is much harder than learning to avoid side-effects and leaning into immutability.
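To make that concrete, here is a minimal sketch (the `Account` type, its fields, and `deposit` are invented for illustration): with immutable data, an "update" returns a brand-new value, so every intermediate state stays available for inspection instead of being silently corrupted in place.

```haskell
-- A hypothetical Account record; `deposit` never mutates its argument,
-- it returns a new value, so earlier states remain intact and inspectable.
data Account = Account { owner :: String, balance :: Int } deriving Show

deposit :: Int -> Account -> Account
deposit amt acct = acct { balance = balance acct + amt }

main :: IO ()
main = do
  let a0 = Account "alice" 100
      a1 = deposit 50 a0
  print (balance a0)  -- the original is untouched: 100
  print (balance a1)  -- the new value: 150
```

When a test fails, you can look at `a0` and `a1` separately; there is no shared cell that some other code path might have scribbled over.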
That said, I found the article unconvincing. The author's writing is perhaps too precise, to the point where the forest has been lost amidst the trees. I can draw the main reasons why I'm losing my religion w/r/t static typing from the appendix portion of the article itself:
> not only can statically-typed languages support structural typing, many dynamically-typed languages also support nominal typing. These axes have historically loosely correlated, but they are theoretically orthogonal
The fact that they're theoretically orthogonal is small consolation. They are loosely correlated, but the looseness doesn't happen in a way that's particularly useful to me. The fact of the matter is, the only languages I'm aware of that have decent ergonomics for structural typing are either dynamically typed, or constrained to some fairly specific niches. If I want structural typing in a general-purpose language, I'm kind of stuck with Clojure or Python. The list of suggested languages that comes a paragraph later fails to disabuse me of that notion. As does this observation:
> all mainstream, statically-typed OOP languages are even more nominal than Haskell!
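For what it's worth, the nominal flavour being described is easy to demonstrate in a tiny sketch (the `Meters`/`Feet` types are made up): two types with identical structure are still incompatible, because the name, not the shape, is what the checker compares.

```haskell
-- Nominal typing: Meters and Feet wrap the same representation (Double),
-- but their names make them distinct, incompatible types.
newtype Meters = Meters Double deriving Show
newtype Feet   = Feet   Double deriving Show

addMeters :: Meters -> Meters -> Meters
addMeters (Meters a) (Meters b) = Meters (a + b)

main :: IO ()
main = print (addMeters (Meters 1) (Meters 2))
-- addMeters (Meters 1) (Feet 2) would be rejected at compile time,
-- even though both values are structurally just a Double.
```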
I wouldn't say it rarely intersects - I'm one and I've worked with plenty. And I've seen plenty with strength in one join a production Haskell shop and pick up the other (in both directions!)
Haskell definitely gives you a lot to help manage the messiness you mention. You have to have know-how and awareness to do it, but it can help like none other.
But overall, stuff like this is why I wish to eventually become as independent as I can manage in my career. The intersection of Haskell and management thinking is frequently a bad faith shitshow in my experience, so the sooner I can start creating on my own terms, the better. The best part about Haskell is how little effort I exert solving problems compared to my comparable experience in Go, Python, Java. A perfect match for personal creation.
> As long as the dumb AI is unable to guess our wishes, there will be a need to specify them using a precise language. We already have such language, it’s called math.
Math is wonderful and elegant, but it is useless (in itself) to specify things in the real world. You need tons of conceptualisation and operationalisation and common sense etc. for the foreseeable future.
Heck, logic is useless for the real world - everyone knows the example that “a bachelor is a man that is not married”, but even if you can implement it as a conjunction of two logical predicates, then try to define, with logic a computer can understand, what “man” is, or “married”, or “game”, or “taxable income” or “murder” or “fake news”. Go on, use math and category theory. Good luck.
EDIT to add: For a wonderful overview of the problems with the naive Aristotelian view of definition (easily captured by predicate logic) in the real world, see George Lakoff’s book “Women, Fire and Dangerous Things: What Categories Reveal About the Mind”.
Also, my aunt is a Bachelor of Arts, and her marital status doesn't affect that.
Language, life, and business logic are messy, confusing, and conditional/contextual to the point of absurdity. You will never realistically reduce them to pure mathematics in a way that makes them easier to deal with.
An innovation like that, which was not stumbled on by mainstream imperative language designers, arguably comes from a focus on abstract mathematical structure. You might be right if you believe that profunctor optics are going nowhere, or that dependent types and type–level programming are just too complicated, but as a research project, strongly typed functional programming has many strengths, and I don't just mean strengths that are not essentially related to the type system like purity or laziness.
'“The ability of the [strong] type system to catch [mundane] errors allows us to spend more time thinking about other, deeper issues in our code and drives down our error rate, making it possible to move faster while still maintaining safe and reliable systems,” says Yaron Minsky, co-head of the technology group at liquidity provider Jane Street. The firm, as it puts it, writes “everything that we can in OCaml”, a functional language, and employs 300 OCaml developers.'
> The vast majority are delivering wet. messy business logic
Ok, notice something wrong here?
90% of the work ends up being input validation and normalization. Humans suck at consistency, especially if your data comes from multiple sources, and potentially multiple languages. None of those complex algorithms are going to work until you make the data sane.
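As a tiny illustration of that drudgework (the function name and the accepted spellings here are invented), normalization usually means folding many ad-hoc spellings from different sources into one typed value and rejecting everything else:

```haskell
import Data.Char (isSpace, toLower)

-- Fold the many spellings of a boolean-ish field into one typed value;
-- anything unrecognized is rejected rather than guessed at.
parseFlag :: String -> Maybe Bool
parseFlag raw
  | s `elem` ["y", "yes", "true", "1"] = Just True
  | s `elem` ["n", "no", "false", "0"] = Just False
  | otherwise                          = Nothing
  where s = map toLower (filter (not . isSpace) raw)

main :: IO ()
main = mapM_ (print . parseFlag) [" Yes ", "TRUE", "nope", "0"]
```

None of this is a fancy algorithm; it's exactly the "make the data sane first" work described above.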
You know it's kind of funny how this very conversation mirrors the criticisms of Haskell leveled in the top post! Someone is trying to insist that language fit in a neat mathematical box where words are stripped of their colloquial meanings and force us to only use dictionary definitions. But language and the real world just refuse to be so clean! Ha!
Functional languages have been around in production for decades now. They have met and adapted to pretty much all the real-worldliness, messiness, and pragmatism you can think of. And these days, the concepts they came up with are being massively adopted by enterprise darlings such as Java and C#.
Could we get more consideration than the old "You don't know real world"?
Dealing with the tons and tons of crap people will put down in a field no matter how explicit you make your directions is not a fancy algorithm. It's down and dirty drudgework that is of little interest to the more math oriented algorithm side of the house.
So while not everyone needs to care about abstractions and their properties, we really do need the people who put together frameworks and libraries to.
(Although if I could speculate based on past performance: values won't work with collections, pure functions won't allow recursion, do-notation will be incompatible with checked exceptions, and the higher-kinded types won't have inference. And all of them will be jam-packed with unnecessary syntax.)
This kind of ad-hominem attack on their character has no place in evaluating the value of a tool or a concept.
Please consider evaluating ideas based on their merit with regards to context, rather than delivering blanket considerations about people based on your prejudice.
Sure. But like everything in math what it really is is a thing that the mathematician has come to know through application of effort. The definition is a portion of a cage built to hold the concept in place so the next mathematician can come along and expend effort to know it.
You've probably done this yourself, just maybe not that deep in the abstraction tower (though you'd be surprised how abstract some everyday things can be). For example, you may have internalized the fact about division of integers that every integer has a unique prime factorization. This is an important part of seeing what multiplication is, but it's not part of the tower of abstraction upon which multiplication is built.
Mathematicians tend to end up being unconscious or conscious Platonists because mathematicians are trained to see the mathematics itself.
The definition of a monoid in monoidal categories is on page 166, and monads in a category are on page 133. As a math person I know what a monoid is (in the usual sense), but I did not know what a monoid in a monoidal category is (well, I know now, because it is on page 166 of the book).
> The more I think about this, the more it seems to me that a monad is not at all a monoid in the category of endofunctors, but actually a monoidal subcategory.
> That's the problem.
Like the definition of prime that I was taught in primary school, "a prime is a positive integer whose only divisors are itself and 1". The right definition is "a prime is a positive integer with exactly two divisors", because otherwise all interesting statements about primes would have to be confusingly rewritten as statements about primes greater than 1. (Sometimes we have theorems about "odd primes", i.e., not 2, but the important set, primes, definitely includes 2.)
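The "exactly two divisors" definition also translates directly into code; a naive sketch that just counts divisors:

```haskell
-- "Exactly two divisors" excludes 1 automatically (1 has only one divisor),
-- with no "greater than 1" special case. Naive O(n) check, for clarity only.
isPrime :: Int -> Bool
isPrime n = length [d | d <- [1..n], n `mod` d == 0] == 2

main :: IO ()
main = print (filter isPrime [1..20])  -- 1 correctly excluded, 2 included
```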
I don't get the monad joke fwiw; I've done a ton of Haskell and a ton of math, but the Haskell wasn't deep abstraction and the math was differential geometry and its vicinity.
(When are they different? When you're working with some kind of number other than the ordinary integers. For instance, suppose you decide to treat sqrt(-5) as an integer, so your numbers are now everything of the form a + b sqrt(-5) where a,b are ordinary integers. In this system of numbers, 3 is irreducible, but it isn't prime because (1+sqrt(-5)) (1-sqrt(-5)) = 6 which is a multiple of 3 -- but neither of those factors is a multiple of 3 even in this extended system of numbers.)
I mean, many theorems about primes do not apply to 1. 1 is very often not considered a prime. And, as another commenter noted, there is not just one definition of prime.
> But like everything in math what it really is is a thing that the mathematician has come to know through application of effort. The definition is a portion of a cage built to hold the concept in place so the next mathematician can come along and expend effort to know it.
Well put. I'd say this extends beyond mathematics. All definitions are like that.
So don't worry, when it happens we can all rest because there won't be any need for our labor anyway.
Then there's the problem of edge cases - in this case, what if the user has not had an account for more than 6 weeks but has met the other conditions? Now the AGI has to detect that context and formulate the question back to the developer.
The "code will eat your coding job" hype sounds a lot like "we'll have self-driving cars all over the country by 2000" hype (yes, that hype did exist back then,) or going further back, "All of AI is done, we just need a few more decision rules" hype back in the seventies.
For sure, many coding frameworks are a lot simpler now than they were two decades ago, and yes, I think it has meant many aspects of digitized services are now much cheaper. You can build a Wix website for yourself, or a Shopify e-business, without paying a developer, which you needed to do in the year 2000. But the consequent growth in digital businesses has led to induced demand for more developers, as businesses constantly test the edges of these "no-code" services.
I would say we have reached some amount of saturation already. Anecdatally, it seems that if you segmented salaries by experience, you might find some amount of decline or stagnation at the lower levels of experience relative to a decade ago. So in that sense the original point has some validity, but I don't think it has anything to do with "AI".
I suspect as long as you're willing to learn and are competent, you should have a job until the final effort of a general AI self-learning programmer.
The question with domain-specific automation (and one of my takeaways from the article) isn't whether or not you'll have a job, but whether the effort you put into getting your current job is worthwhile.
I think Bartosz is saying that math and category theory are useful to learn because they work across a number of subdomains. They can help keep the domain-switching cost down somewhat.
You’re right that the jobs as they exist today won’t in the future. Just like manufacturing replaced agriculture which was then replaced by services, there will be higher orders of creativity and problem solving that most humans will be engaged in the future.
There really isn’t a lack of problems to solve. Consider that we’ve barely scratched the surface of the earth and the farthest we’ve sent humans to is the Moon. The Space Industry might as well be the economic market that keeps expanding forever...
One of the things I didn't like about Star Trek was that they have this super powerful computer and yet Picard has to ask Wesley to punch in the coordinates on the console and engage the warp coils. What kind of theater is this? Sure a floating monolithic slab of computation in space has less cinema appeal than a crew of humans, but a hardened machine seems more plausible for space travel. Humans can't sustain much more than 1g acceleration and don't live too long.
C/C++ programmers might not be good at category theory, but no one worthy of their salary would walk past a quicksort routine with O(N) memory without stopping and asking "Wait, what?"
Seriously, I remember when this used to be the first Haskell code shown on haskell.org homepage, and I had to stop and wonder if these folks were just trolling or if they are actually that oblivious of performance. If you want to promote Haskell, you could have hardly chosen a worse piece of code.
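For readers who don't remember it, the code in question was presumably something along the lines of the classic two-line "quicksort", which allocates two fresh lists at every level of recursion instead of partitioning in place:

```haskell
-- The famous demo: beautiful as a *specification* of quicksort, but it
-- builds new lists on every call, so it costs O(n) extra memory (and poor
-- constants) compared to a real in-place partition.
qsort :: Ord a => [a] -> [a]
qsort []     = []
qsort (p:xs) = qsort [x | x <- xs, x < p] ++ [p] ++ qsort [x | x <- xs, x >= p]

main :: IO ()
main = print (qsort [3,1,4,1,5,9,2,6])
```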
Category theory is there to support the creation of specifications that are both easy for a human who knows category theory to understand and easy for the AI to optimize, compared to the original quicksort example.
The author also thinks many HTML and JS type jobs will also disappear. What I am skeptical of is that while Go, Chess and Jeopardy are challenging, they are closed domains. I think people underestimate just how much complexity building CRUD apps involves. Just like we underestimated how difficult walking to a cupboard to retrieve a mug would be for AI.
I've heard that since I started my career in the early 90s and it's always interesting to compare what reasons people give why low-level languages are going to go away really soon now.
Other than that the article makes a lot of good points.
The main reason: They'll go away when the graybeards retire and the companies fail without them.
For the last 5 years, I've been babysitting a C-based system that generates more than 70% of our corporate profit. The youngsters working on the 7+ year-old project to replace it are on the 3rd refactoring of the 2nd programming language codebase and the "architect" is already talking about rewriting in a new language for "productivity improvements." For the 4th year in a row, the first of five essential elements of the new system will go on line Next Year, leaving still more years of work until we can shut the old system down and let the C programmers go (except we all know more new languages than the new guys, as we all have loads of free time - our old system is quite reliable). Without a substantial payment directly into my kids' trust fund, I have no intention of delaying retirement by a day - I have way too many side projects to explore!
Because C has killed off most assembly language already, compared to the 1970s.
Because you can talk about writing C++ for embedded systems without busting up laughing.
Because "embedded" means ARM and not TMS1000 these days.
Because not even BART is still running PDP-8s in production anymore.
Because the extreme low end changes slowly, but it does change.
What would that infrastructure be, exactly? You can write a program that performs any mathematical operation. A program that can handle any input program, and optimize it as well as any human could, would need to be an AI with at least human intelligence, and a deep understanding of all known mathematics.
But then we're told that mathematical knowledge is a defense against automation? One wonders why we don't just hand all the math off to this optimization AI.
Also, as a note: the field of computer science is a branch of mathematics. Whenever you deal with data structures or algorithms, you're dealing with how to procedurally design information structures and procedures to solve problems and model the real world. Mathematics? Well, depending on what branch you're into, it's using symbols and information ...to, well, do the same thing!
Out of all the professions, it seems rather silly to me to think that our own is in danger of being replaced. I would think it's the LEAST likely one, since programmers, to me at least, represent workers who are paid to communicate and find ways to solve problems in the real world using a tool which will continue to improve, enable our species to excel, and automate jobs filled with repetitive drudgery. I would conjecture that our prominent role in automating other jobs ensures that we stay in demand for quite a long time!
As far as the most useful mathematical branches are concerned – if you’re interested, the branches I find have the highest ability to help in solving real world problems are: calculus, linear algebra, probability and statistics, computer science (as already mentioned), convex and computational optimization, quantum mechanics, complex analysis, ordinary and partial differential equations, data mining and analysis, information theory … well, I could keep going, but I think you’d greatly benefit from checking out a great book which gives a pretty good overview on the key areas in applied mathematics: The Princeton Companion to Applied Mathematics. The above are just my own guesses, and it’s highly dependent on which problem you’re looking to solve.
Rust is the best shot to replace C, and it has yet to figure out how to manage memory without a complicated ownership/borrowing/lifetimes model and not impact performance.
That and Rust is probably 5-6 years ahead of its time. The future where Math, Cryptocurrencies, AI have eaten the world is still very far away.
So in my opinion, your best bet/insurance for the future is to learn Rust (which I'm doing); that being said I don't know what I'm talking about.
EDIT: For all those nitpicking downvoters: yes, I meant public key cryptography.
Also cryptography is one of the oldest applications, and has interacted with a variety of sub-disciplines in mathematics for basically ever.
Most people hate math, and there will always be some security in being willing to do things that most people hate. Engineers are assured, through the grapevine, that they will never use their math after college. They're right. Most of them will get so busy with organizing and arranging building blocks, that they'll forget their math. It makes sense to delegate the math work to a small fraction of people who enjoy it. Those people will never be great in number, and might not get paid any more, but will be valuable enough to have stable careers. The math doesn't even need to be particularly hard.
At least, so I hope. ;-)
A second recommendation, a bit odd maybe: do not waste your time trying to engage with that community, as most people there are incredibly smug and toxic (but if you don't believe me, feel free to try it; YMMV). They are now pushing this idea that they're 'opening' the field to everybody and would like as many people as possible to jump on their bandwagon, but they are just trying to get some traction to keep leeching off grants, etc. They couldn't care less about you; I experienced this first-hand. You can learn most of these things on your own anyway, as is usual with most mathy things.
I don't disagree with the premise of the post, when the time comes math will save your ass, for sure. Now, what works? As you mentioned linear algebra and some intermediate statistics would greatly improve your chances. I would drop some geometry as well, geometry has an inherent 'visual/spatial' feeling to it but try to learn it in such way that you become familiar of dealing with it in a completely abstract way. Regarding this, I have noticed a small surge in interest for clifford algebras, geometric algebras, etc... if you have the time pay attention to it, there are some greats talks about it on youtube and this one has an amazing community that I would definitely recommend to engage with.
Also interesting to read about, some other algebras that are defined mathematically and are at the foundation of CS, things like lambda calculus come to mind.
You don't have to become an expert in these fields to start seeing the benefits. Just build up the habit of reading about one or two topics per week, what they do, how they do it, and before you know it a year has gone by and you will feel much more confident in your skills. Anything that requires more attention will naturally drag you to it.
Finally, this guy Robert Ghrist is amazing (really!) https://www.math.upenn.edu/~ghrist/, check him out as well, his talks and other content. He also wrote a book on applied topology (free online as well) which is amazing. Even if you don't understand anything about it, give it a quick skim to get an idea of all the incredible things math can do for you that you could not have imagined existed!
This is far from the truth and disregards a significant body of work in programming language theory (not to mention algebraic geometry and topology).
Perhaps most famously, monads, which have been widely adopted in the Haskell and Scala communities, originated in the category theory literature. Eugenio Moggi famously described how monads could be used to structure the denotational semantics of a programming language.
Monads are just the most obvious example of 'new' concepts which originate in the category theory literature and are practically realised by the study of categorical semantics of programming languages. A brief investigation of the literature will reveal a vast array of concepts such as initial algebras/final coalgebras (used to implement safe recursion schemes), adjunctions (giving rise to free constructions such as the infamous free monad), profunctor optics (bidirectional data accessors). This is just to give a few examples in PLT and does not even touch upon lesser-known constructions in type theory.
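For readers unfamiliar with the practical payoff, here is a minimal sketch of what Moggi's insight buys you: the monad's bind operation sequences computations while threading an effect, in this case possible failure via `Maybe`.

```haskell
-- Division that can fail, expressed as an effect rather than an exception.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv a b = Just (a `div` b)

-- do-notation desugars to (>>=); a failure at any step short-circuits the
-- rest, with no explicit error-propagation plumbing in sight.
calc :: Int -> Int -> Int -> Maybe Int
calc a b c = do
  x <- safeDiv a b
  safeDiv x c

main :: IO ()
main = do
  print (calc 100 5 2)  -- Just 10
  print (calc 100 0 2)  -- Nothing: division by zero aborts the chain
```

Swap `Maybe` for `Either`, `State`, or `IO` and the same structuring principle applies, which is exactly why the abstraction earned its keep outside the category theory literature.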
‘In the end, it's a nice theoretical framework to think about how to structure your code but that's it.’
This comes across as overly dismissive of the problem of 'how to structure code', which is in no way trivial. Formalising desirable properties of programs, and designing languages/constructs which restrict to programs exhibiting these desirable properties, is a general focus of programming language theory. 'A nice theoretical framework' in which such problems can be formalised is very significant, and the comments that programming language theorists working on categorical semantics are "incredibly smug and toxic" and "are just trying to get some traction to keep leeching off grants" are especially insulting to many wonderful people working in this domain.
I’ve been working through Joel Grus’ Data Science from Scratch, rewriting the Python examples in Swift:
It might be true that eventually we can solve "implementing cross-platform UIs in the general case" but we could also be literally 100 years away from achieving that, and in the mean time the fact that it's theoretically possible is worthless.
If a machine can optimize performance better than humans, then it would not make sense to use C++, in the context of performance.
My reply is, "Because I can do math."
Was surprised to find the article wasn't about that.
I expect it's useful for quick fixes/guidance, like the loop example.
For example on improving performance - these days that often needs holistic architectural re-think - surely a creative process? The idea of optimizing involving a loop seems very distant from heterogeneous asynchronous behavior of modern hardware.
If AI really does start to solve problems in a more 'general' way, not just a bit of object recognition here and there, won't software developers incorporate it into their process instead, enabling even more sophisticated software to be written?
I think that is the key to longevity in software development as a career. The compiler didn't eliminate the assembly programmer; it simply enabled a whole new level of software complexity to be feasible.
For what it's worth, I can easily accept that Haskell programmers' career prospects will not be altered one whit by improvements in optimisation and automation...
P.S. Haskell is not math: https://dl.acm.org/doi/10.1017/S0956796808007004 https://www.cs.hmc.edu/~oneill/papers/Sieve-JFP.pdf
I can’t help but remark how completely untrue this sounds once you have read the magnificent and forgotten book by Lucio Russo, see . The Greeks of the third and second centuries BC had plenty of tech, and they were much more modern than the new scientists of the 16th and 17th centuries.
If you're interested in what Category Theory is about, it's a great place to start for people with a background in programming but not necessarily mathematics.
1. Creativity. What objective function do you optimize to write the first Super Mario Bros? Can you then get from there to RocketLeague or Braid? (I think not).
2. Imagine that you somehow obtain a magical technology that takes in a natural language spec and emits highly-optimized code, just like a human. Who's writing that spec?
For interactions with actual humans, there's usually a professional drafter (lawyer, contracting officer, etc) carefully specifying what the other parties should and should not do. We'd presumably need the same thing even with some fancy AGI. This is perhaps a bit different from worrying about whether foo() returns a Tuple or List, but it's not totally dissimilar from programming.
As for the second, the point is not to have a specification on the human side. Most of the time when communicating with others we don't need lawyers, and spoken contracts are valid under the law. Even with lawyers, contracts can be disputed in court, as there is not a formal definition of every exact scenario.
Assuming we had the power to do (1), all we'd need in (2) is something that doesn't provide unexpected outcomes.
It's true that you can turn to a teammate and say "hey, can you write me a pipeline to import data from JSON files?" and you'll usually get something usable. However, you and your teammates have shared goals and background information about your particular project and the world at large, etc.
Projects go off the rails all the time because the generally intelligent (allegedly, anyway) humans don't share these things. Right now, the front page has an article called "Offshoring roulette" about this. If you don't want to click through, here's my story about a contract programmer working in the lab. I asked him to look into a problem: the software running our experiments crashed after a certain number of events occurred. It turned out that the events were being written into a fixed-size buffer, which was overflowing. This could have been fixed in many ways (flush it to disk more often, record events in groups, resize the buffer). However, he chose to make the entire saving function into a no-op. This quickly "fixed" the problem--but imagine my delight when the next few runs contained no data whatsoever. In retrospect, although this guy had a PhD, he wasn't particularly interested in the broader context, namely that it crashed while collecting data that I wanted.
An optimizer is going to take all kinds of crazy shortcuts like that unless it's somehow constrained by the spec. You could certainly imagine building lots of "do-what-I-mean" constraints into this optimizer but that requires even more magic.
When I'm purchasing a chair, I don't normally care about the detailed specifics of wood types, craftsmanship, or the joins; nor does it matter if a master craftsman doesn't approve.
We are marginally closer to it now technologically, and probably actually significantly (as in, "more than marginally") farther away from it overall, because the explosion in programming use cases has greatly exceeded our improvement in the ability to automate.
Even the given example is beyond our reach right now! We already can't automate that simple specification of a sorting algorithm into an efficient one. How are we supposed to automate the creation of a graphics card driver with AI? We'd have much better leverage just applying better software engineering to that task, and I say that without particularly criticizing the people doing that work or anything... I just guarantee that they are not so well greased up with engineering that there's no improvement they can make bigger than "let's try to throw AI at it".
There are still places where category theory is a good idea. A lot of our distributed systems would be better off if someone was thinking in terms of CRDTs or lattices or something. But we're farther than ever from automating our computing tasks.
Operational semantics are actually a higher order logic than Category theory when expressed in a Geometry of Interaction (GoI) grammar.
I don't have the fancy nomenclature to utter the mathematical phrase "endomorphism on the object A⊸B", but I intuitively understand what operational semantics are and why they are useful to me. When a programming language/grammar comes along which implements most of the design patterns I need/use to turn my intuitions into behavioural specifications, I am still going to be more productive than a mathematician, because I will not have to pay the upfront cost of learning a denotational vocabulary. The compiler will do it for me, right?
In the words of the late Ed Nelson: The dwelling place of meaning is syntax; semantics is the home of illusion.
Languages are human interfaces. Computer languages are better interfaces than mathematics because we design and evolve them to be usable by the average human, not just by mathematicians.
Good interfaces lower the barrier to entry by allowing plebs like myself to stand on the shoulders of giants. Mathematics expects everybody to be a giant.
And as far as dealing with ambiguity goes, programming languages are way less ambiguous than mathematical notation!
Ink on paper has no source code - no context.
The same is true of most programming languages - they were all made for humans. Math has the advantage of being able to prove formally that it solves the problem, but that isn't a requirement for most software.
> If we tried to express the same ideas in C++, we would very quickly get completely lost.
The same is true for many things that you can express in C++ but not in math.
Math used this way is essentially just another programming language - with massive advantages in some circumstances, and massive disadvantages in others - I can’t find any argument in this article as to why it is a better bet than any other language.
This doesn’t necessarily invalidate the argument, but it means all the concrete examples seem rather beside the point.
> The AI will eventually be able to implement any reasonable program, as long as it gets a precise enough specification
Statements like this make me less worried about the job apocalypse.
It feels to me like getting good, consistent specifications for software is incredibly hard, particularly if some unbound set of humans is meant to interact with it.
Delivering domain specifications for even simplistic "things" is incredibly hard.
So can I send the computer to the meetings for me while I carry on coding?
Good luck. There will always be a need for bespoke UI. Just as there's always a need for bespoke anything that can otherwise be mass produced.
I quote the relevant part from the source: "With some things I don’t understand well (nuclear physics, representation theory), there are nevertheless short “certificates / NP witnesses of importance” that prove to me that the effort to understand them would be amply repaid. [...] And then, alas, there are bodies of thought for which I’ve found neither certificates or anti-certificates—like category theory, or programming language semantics [...] For those I simply wish the theorizers well, and wait around for someone who will show me why I can’t not study what they found."
This doesn't seem to be happening particularly quickly, though, and it's not clear that it's accelerating. Setting standards is a social process and it often takes years for new language changes to get widely deployed.
Another thing that slows things down is that every so often some of us decide that what we have is terrible and start over with a new language, with the result that all the development tools and libraries have to be rebuilt anew, and hopefully better.
I expect machine learning will result in nicer tools too, but existing standards are entrenched and not that easy to replace, even when they are far from state-of-the-art.
> I am immensely impressed with the progress companies like Google or IBM made in playing go, chess, and Jeopardy, but I keep asking myself, why don’t they invest all this effort in programming technology?
It seems like Google does invest in programming technology, but a lot of that tech is internal. Google spends an order of magnitude more money on employee productivity than any other job I've worked at. But that's probably because at previous jobs we spent <<1% of salary on tools and didn't have economies of scale.
The reason it seems otherwise is that software has an infinite appetite for increased productivity, since it lacks the friction and energy costs common to almost every other endeavor. There are essentially two throttles on the exponential improvement in computing: (1) the speed of electromagnetic objects, and (2) the speed at which humans learn newly invented things and use them to invent newer things.
Arguably the best way to describe an interface is through a declarative programming language, and unless we're saying that 'creativity in UI design is superfluous to our future requirements' it seems like this will remain the case for the foreseeable future.
Mathematics is a user interface. Black ink on white paper is the medium which we've been using to communicate complex concepts/ideas for thousands of years.
In 2020 there are better communication mediums. Interactive mediums.
This has been happening for a very long time with abstraction. Layers upon layers of computing "just work" without the programmer having to think about how or why. Things like gRPC/OpenAPI etc. make it conceivable that one day a product manager will just need to write the schema and methods and hit "Deploy to AWS".
I doubt that the economic drivers of automation consider if the job is boring or menial for the worker. I think this “experience” needs a source.
Furthermore, imagine working 40 years in a field you didn’t enjoy in order to have an “insurance policy”. (Just do what you like.)
You are right, no one says "let's automate the boring jobs". What happens is that these types of jobs naturally select themselves: because they are uncomplicated, they are prime targets for automation.
See also the Haskell Sieve of Eratosthenes, which isn't a Sieve of Eratosthenes, and is in fact even slower than naive trial division:
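Presumably the code being alluded to is the famous two-liner (reconstructed here, since the thread doesn't include it), which filters each candidate through a growing stack of trial divisions instead of having each prime cross off its own multiples:

```haskell
-- The classic "unfaithful sieve": every candidate is tested against
-- every prime found so far via the accumulating list-comprehension
-- filters, so it is not a genuine Sieve of Eratosthenes.
primes :: [Integer]
primes = sieve [2 ..]
  where
    sieve (p : xs) = p : sieve [x | x <- xs, x `mod` p /= 0]
    sieve []       = []  -- unreachable on the infinite input
-- take 10 primes == [2,3,5,7,11,13,17,19,23,29]
```

Elegant and correct, but asymptotically far worse than the algorithm it is named after.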
partition p xs = foldr (select p) ([], []) xs
select p x ~(ts, fs) | p x       = (x : ts, fs)
                     | otherwise = (ts, x : fs)
Most of what functional programmers / theorists tend to care about are from sub-fields like logic, abstract algebra and the like.
The "real mathematicians" that you mention are most likely working in other fields like linear algebra, statistics, etc. It is certainly possible that logicians and algebraists work with sloppy Python (insofar as they write any code at all), but I don't feel it would be a great fit.
Here's a paper by the author of this post, formalising a popular Haskell pattern in category theory: https://arxiv.org/abs/2001.07488
EDIT: switched to non-PDF link.
It's the most popular, successful programming language ever, but it hasn't taken the whole market for programming. Why not?
1) The advent of advanced tooling would make software engineers unnecessary. Tools like Visual Basic and UML diagrams were the tip of the iceberg of "visual coding" where a business person could just specify requirements and the software would be automatically created.
2) Jobs were all going to go overseas. There is no reason to pay a programmer in California 10x what you can pay someone in Mumbai. It's better to study something like finance where the secret domain knowledge is held within the chambers of ibankers in Manhattan. The future of tech startups is a few product managers in NYC outsourcing the coding work to India.
There's also a 3rd argument that gets floated around that the field will be oversaturated, "everyone" is learning to code, etc. In 2015 I asked a startup for 125k and they told me, while that did seem to be market at the time, they thought salaries had peaked and were going in the opposite direction. In 2020 you probably can't hire a bootcamp grad for 125k.
Since then the field has exploded and wages have gone off the charts, but I still hear the same type of arguments over and over.
In 2020, you hear stuff like:
1) AI is the future, it's going to automate away all of the menial programming jobs.
2) Bay Area is overcrowded, all the jobs are going remote. The future of tech startup is a few marketing execs in SF outsourcing the menial tech work to flyover states.
Personally, I didn't believe the hype then and don't believe the hype now. I find it amusing that the author questions the wisdom of Google for not using AI to automate development, as if that's never occurred to Google.
Of course the author is right that tech is a treadmill, new skills move into the spotlight while old ones become outdated, although even then the mass of legacy code means consultants will have lucrative jobs taking care of it for a long time.
In my experience, new tooling always creates more software jobs, not fewer. Software is not like high-frequency trading: the more people compete to make software, the more people we seem to need to make software.
Sure, the Bay Area is getting insanely expensive, but Google still tries to fit as many people as it can into Mountain View, and Facebook still crams as many as it can into Menlo Park. Every 3 months some VC has the bright idea: what if we just pay people to do the lowly engineering work somewhere else and keep only the execs in the Bay Area? And 99% of those startups go nowhere, and Google is worth a trillion dollars.
There is a very intuitive line of reasoning that goes: software can be done by machines, and software can be done anywhere. There is a thread of truth to both narratives, but it leads people to the very incorrect conclusion that there won't be as many software jobs in the future and that they will be done in cheap places.
Despite all intuition, and all the logical narratives about costs and automation, groups of people dedicated to technology in the same physical room have defied that intuition. Virtually every extremely important software company grew on the West Coast of the USA, in some of the most expensive places in the world, and the more software tools have improved, the more headcount these companies have had. So take all this intuition about the end of days for software engineers with a huge grain of salt.
I think that predicting the future is hard and most people are better off "optimizing local minima".