Math is an insurance policy (bartoszmilewski.com)
213 points by ingve on Feb 24, 2020 | 126 comments



To paraphrase what's said about fusion energy: Haskell is the language of the future and always will be.

I love Haskell, but there's really not a single shred of evidence that programming's moving towards high-level abstractions like category theory. The reality is that 99% of working developers are not implementing complex algorithms or pushing the frontier of computational possibility. The vast majority are delivering wet, messy business logic under tight constraints, ambiguously defined criteria, and rapidly changing organizational requirements.

By far more important than writing the purest or most elegant code are "soft skills", or the ability to work efficiently and effectively within the broader organization. Can you effectively communicate with non-technical people, do you write good documentation, can you work with a team, can you accurately estimate and deliver on schedules, do you prioritize effectively, are you rigorously focused on delivering business value, do you understand the broader corporate strategy?

At the end of the day, senior management doesn't care whether the codebase is written in the purest, most abstracted Haskell or EnterpriseAbstractFactoryJava. They care about meeting the organizational objectives on time, on budget and with minimal risk. The way to achieve that is to hire pragmatic, goal-oriented people. (Or at the very least put them in charge.) And that group rarely intersects with the type of people fascinated by the mathematical properties of the type system.


> The vast majority are delivering wet, messy business logic under tight constraints, ambiguously defined criteria, and rapidly changing organizational requirements.

And herein lies the fundamental reason why most large aggregations of business software that I've seen in the wild are only strongly typed at about the same scale as the one on which the universe actually agrees with Deepak Chopra. Beyond that, it's all text.

It doesn't have to be full-blown Unix style the-howling-madness-of-Zalgo-is-held-back-by-sed-but-only-barely; it can be CSV or JSON or XML or whatever. And a lot of it's really just random magic strings. But it's still all stringly typed at human scales.


I agree with both your comment and the parent's. But this is just glorious:

> full-blown Unix style the-howling-madness-of-Zalgo-is-held-back-by-sed-but-only-barely

I'm a very long-time UNIX guy, so I think I get the zen of what you're saying here, but I don't really get the Zalgo reference. Is this it? https://knowyourmeme.com/memes/zalgo Thanks!



And in context, the idea is that when we use classic Unix text manipulation tools, we make implicit and sometimes explicit assumptions about the nature and structure of the data. Those assumptions can be wrong and then our pipelines are also wrong. By contrast, a strongly statically typed environment might force us to prove our assumptions in the context of the interactions among the different parts of our code, and so there will be fewer corner cases where we have unexpected or unspecified behavior.

The only barely held back part is presumably that when we notice a corner case, we might add a little more logic to handle that specific case, but we never really approach the goal of handling all possible inputs with valid or useful behavior. (We might even declare that a non-goal.)
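
To make that concrete, here's a minimal Haskell sketch (the record layout and field names are invented): the stringly typed version bakes its assumptions into an index and a read, while the typed version is forced to say what happens when those assumptions fail.

    splitOn :: Char -> String -> [String]
    splitOn c s = case break (== c) s of
      (chunk, [])       -> [chunk]
      (chunk, _ : rest) -> chunk : splitOn c rest

    -- Stringly typed: assume field 3 of a comma-separated line is an age.
    -- A short line or a non-numeric field only blows up at runtime,
    -- possibly far downstream of where the bad data entered the pipeline.
    ageFromLine :: String -> Int
    ageFromLine line = read (splitOn ',' line !! 2)

    -- Typed: the parse can fail, and the Either forces every caller
    -- to decide what a bad record means.
    data Person = Person { name :: String, age :: Int }

    parsePerson :: String -> Either String Person
    parsePerson line = case splitOn ',' line of
      [n, _, a] | [(k, "")] <- reads a -> Right (Person n k)
      _                                -> Left ("bad record: " ++ line)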


I would argue that Haskell lets you respond to nebulous requirements better than almost any other language, because refactoring is so much easier and safer.

I self-identify much more with being pragmatic and goal-oriented than math-y and perfectionist, and I think for almost every programming domain we'd achieve our goals faster by moving more towards having strong static guarantees in an ergonomic, expressive language.

Finally, I would also put forth that debugging state corruption or randomly failing assertions is much harder than learning to avoid side-effects and leaning into immutability.
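
A small sketch of what "refactoring is safer" means in practice (the domain type here is made up): add a constructor during a requirements change and, with the warning on, the compiler points at every pattern match that now needs a decision.

    {-# OPTIONS_GHC -Wincomplete-patterns #-}

    data OrderStatus = Pending | Shipped | Delivered
      -- adding a fourth constructor, say Refunded, is the "refactor"

    describe :: OrderStatus -> String
    describe Pending   = "waiting in the warehouse"
    describe Shipped   = "on a truck somewhere"
    describe Delivered = "done"
    -- After adding Refunded above, GHC warns here (and at every other match
    -- on OrderStatus), so the new requirement can't be silently forgotten.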


Alexis King, a professional Haskell coder, recently wrote an article on exactly that:

https://lexi-lambda.github.io/blog/2020/01/19/no-dynamic-typ...


It's a very insightful article. It's clear and precise and lucidly written, and I'd recommend anyone who cares about these things read it.

That said, I found the article unconvincing. The author's writing is perhaps too precise, to the point where the forest has been lost amidst the trees. I can draw the main reasons why I'm losing my religion w/r/t static typing from the appendix portion of the article itself:

> not only can statically-typed languages support structural typing, many dynamically-typed languages also support nominal typing. These axes have historically loosely correlated, but they are theoretically orthogonal

The fact that they're theoretically orthogonal is small consolation. They are loosely correlated, but the looseness doesn't happen in a way that's particularly useful to me. The fact of the matter is, the only languages I'm aware of that have decent ergonomics for structural typing are either dynamically typed, or constrained to some fairly specific niches. If I want structural typing in a general-purpose language, I'm kind of stuck with Clojure or Python. The list of suggested languages that comes a paragraph later fails to disabuse me of that notion. As does this observation:

> all mainstream, statically-typed OOP languages are even more nominal than Haskell!


Re structural typing, have you tried OCaml/ReasonML? If so what was your experience like?


Another option is TypeScript, which is well suited for building SPA web apps.


Scott Wlaschin, author of F# for Fun and Profit, has an excellent talk on using F# (an ML language) for domain driven design and business development:

https://www.youtube.com/watch?v=Up7LcbGZFuo


> The way to achieve that is to hire pragmatic, goal-oriented people. (Or at the very least put them in charge.) And that group rarely intersects with the type of people fascinated by the mathematical properties of the type system.

I wouldn't say it rarely intersects - I'm one and I've worked with plenty. And I've seen plenty with strength in one join a production Haskell shop and pick up the other (in both directions!)

Haskell definitely gives you a lot to help manage the messiness you mention. You have to have know-how and awareness to do it, but it can help like none other.

But overall, stuff like this is why I wish to eventually become as independent as I can manage in my career. The intersection of Haskell and management thinking is frequently a bad faith shitshow in my experience, so the sooner I can start creating on my own terms, the better. The best part about Haskell is how little effort I exert solving problems compared to my comparable experience in Go, Python, Java. A perfect match for personal creation.


Agreed. One sentence in the article embodies this problem:

> As long as the dumb AI is unable to guess our wishes, there will be a need to specify them using a precise language. We already have such language, it’s called math.

Math is wonderful and elegant, but it is useless (in itself) for specifying things in the real world. You need tons of conceptualisation and operationalisation and common sense etc. for the foreseeable future.

Heck, logic is useless for the real world - everyone knows the example that “a bachelor is a man that is not married”, but even if you can implement it as a conjunction of two logical predicates, then try to define, with logic a computer can understand, what “man” is, or “married”, or “game”, or “taxable income” or “murder” or “fake news”. Go on, use math and category theory. Good luck.

EDIT to add: For a wonderful overview of the problems with the naive Aristotelian view of definition (easily captured by predicate logic) in the real world, see George Lakoff’s book “Women, Fire and Dangerous Things: What Categories Reveal About the Mind”.


To be pedantic, a bachelor is a man that is not, and has never been, married. A man that is not married may be a bachelor, a divorcé, or a widower. But bachelor is still used colloquially (in the US) for all three (though not as often for a widower). Now if the person is under a certain age, most people would not seriously call them a bachelor yet, but the cut-off age varies culturally. And married is a bit complicated too: married in the eyes of the law, formally or common-law, in a particular country, in the eyes of a particular church?

Also, my aunt is a Bachelor of Arts, and her martial statues doesn't effect that.

Language, life, and business logic are messy, confusing, and conditional/contextual to the point of absurdity. You will never realistically reduce them to pure mathematics in a way that makes them easier to deal with.
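
To see how quickly it gets away from you, here's a rough Haskell sketch of the cases above (every name and the age cutoff are invented); each distinction in this thread becomes another constructor or another judgement call:

    data Jurisdiction = CivilLaw | CommonLaw | Religious

    data MaritalStatus
      = NeverMarried
      | Married Jurisdiction   -- formal? common-law? recognised where?
      | Divorced
      | Widowed

    -- Colloquial US usage, per the comment above: divorced men often count,
    -- widowers usually don't, and age matters too.
    isBachelor :: Int -> MaritalStatus -> Bool
    isBachelor age status
      | age < cutoff = False          -- too young to "seriously" call one
      | otherwise    = case status of
          NeverMarried -> True
          Divorced     -> True        -- colloquially, often yes
          Widowed      -> False       -- colloquially, usually not
          Married _    -> False
      where
        cutoff = 25  -- "varies culturally", so this number is already wrong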


I _really_ like the "martial statues" image there, along with the implication that the statues don't convey information about educational attainment... :)


Fantastic discussion; may I throw another title into the mix here: John Dewey's The Quest for Certainty. The book carves the idea that math is such a thing up into a thousand pieces, classing the inclination to believe it so as the same repressive drive that gave us religion, because we were so unwilling to bear the pain of doubt. (Voltaire: 'Doubt is painful, certainty ridiculous.')


Whether you're right or not depends on whether one believes that algebraic datatypes could only have come out of a fascination with the "mathematical properties of the type system". Because ADTs are a serious win in terms of robustness and efficiency of development. They are now found in Rust (for example) but were invented in the 70s in the functional language community.

An innovation like that, which was not stumbled on by mainstream imperative language designers, arguably comes from a focus on abstract mathematical structure. You might be right if you believe that profunctor optics are going nowhere, or that dependent types and type-level programming are just too complicated, but as a research project, strongly typed functional programming has many strengths, and I don't just mean strengths that are not essentially related to the type system like purity or laziness.
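
For readers who haven't met them, a minimal example of an algebraic data type (the type itself is invented for illustration; Rust's enums are the same construction): the alternatives and the data they carry live in one place, and the compiler checks that every consumer handles all of them.

    data LookupResult a
      = Found a                 -- the happy path, carrying the value
      | Missing                 -- no sentinel values, no null
      | AccessDenied String     -- the failure carries its reason

    describeResult :: Show a => LookupResult a -> String
    describeResult (Found x)          = "found " ++ show x
    describeResult Missing            = "not there"
    describeResult (AccessDenied why) = "denied: " ++ why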

'“The ability of the [strong] type system to catch [mundane] errors allows us to spend more time thinking about other, deeper issues in our code and drives down our error rate, making it possible to move faster while still maintaining safe and reliable systems,” says Yaron Minsky, co-head of the technology group at liquidity provider Jane Street. The firm, as it puts it, writes “everything that we can in OCaml”, a functional language, and employs 300 OCaml developers.'


> The reality is that 99% of working developers are not implementing complex algorithms

> The vast majority are delivering wet, messy business logic

Ok, notice something wrong here?


That people are using computers to solve real world problems instead of pursuing the higher planes of pure math?

90% of the work ends up being input validation and normalization. Humans suck at consistency, especially if your data comes from multiple sources, and potentially multiple languages. None of those complex algorithms are going to work until you make the data sane.


Those two lines contradict each other.


Your point is well-taken, and being overlooked, so here it is explicitly: to implement "wet, messy business logic" is, by definition, to implement "a complex algorithm".


I don't think it's being overlooked. It's being seen as pedantry, correctly I think. It's obvious what the parent poster meant, right? Sure, the messy business logic is an algorithm in the technical sense. But everyone knows what people mean when they say that most developers aren't writing fancy algorithms.

You know, it's kind of funny how this very conversation mirrors the criticisms of Haskell leveled in the top post! Someone is trying to insist that language fit in a neat mathematical box where words are stripped of their colloquial meanings and we're forced to use only dictionary definitions. But language and the real world just refuse to be so clean! Ha!


It is also obvious that the parent poster apparently assumes they have seen problems that no FP developer has ever seen.

Functional languages have been around in production for decades now. They have met and adapted to pretty much all the real-worldliness and messiness and pragmatism you can think of. And these days, the concepts they came up with are being massively adopted by enterprise darlings such as Java or C#.

Could we get more consideration than the old "You don't know real world"?


Parent poster self-identifies as a Haskell lover. It doesn't seem like they're assuming anything. They're making an informed observation.


Oh, since they self-identify as a Haskell lover, we can safely ignore the contrary answers from people who have actually shipped business logic with Haskell, then.


To me a "complex algorithm" is something that's grinding through the data doing some kind of interesting transform.

Dealing with the tons and tons of crap people will put down in a field no matter how explicit you make your directions is not a fancy algorithm. It's down and dirty drudgework that is of little interest to the more math oriented algorithm side of the house.


Validation is a very common use case of applicative functors. Aside from the intimidating name, are there issues with it?
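
For anyone who hasn't seen it, a hand-rolled sketch of applicative validation (the real thing lives in libraries such as Data.Validation from the validation package; the field checks here are invented). The point is that errors accumulate instead of stopping at the first failure:

    data Validation e a = Failure e | Success a

    instance Functor (Validation e) where
      fmap _ (Failure e) = Failure e
      fmap f (Success a) = Success (f a)

    instance Semigroup e => Applicative (Validation e) where
      pure = Success
      Failure e1 <*> Failure e2 = Failure (e1 <> e2)   -- errors accumulate
      Failure e  <*> Success _  = Failure e
      Success _  <*> Failure e  = Failure e
      Success f  <*> Success a  = Success (f a)

    data User = User { name :: String, age :: Int } deriving Show

    validateName :: String -> Validation [String] String
    validateName n | null n    = Failure ["name is empty"]
                   | otherwise = Success n

    validateAge :: Int -> Validation [String] Int
    validateAge a | a < 0     = Failure ["age is negative"]
                  | otherwise = Success a

    -- validateUser "" (-1)  ==>  Failure ["name is empty","age is negative"]
    validateUser :: String -> Int -> Validation [String] User
    validateUser n a = User <$> validateName n <*> validateAge a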


But people need simple tools by which to do that.

So while not everyone needs to care about abstractions and their properties, we really do need the people who put together frameworks and libraries to.


This is definitely an accurate description of the past, but is it the future? I’m really interested in what happens when we start to see (more) companies organized around software...


In the future we'll say "We don't need Haskell because we already have values, higher-kinded types, pure functions, and do-notation in Java".

(Although if I could speculate based on past performance: values won't work with collections, pure functions won't allow recursion, do-notation will be incompatible with checked exceptions, and the higher-kinded types won't have inference. And all of them will be jam-packed with unnecessary syntax.)


It is harmful to people to suggest that, because they might have novel or different ideas, they are not "pragmatic", or "goal-oriented".

This kind of ad-hominem attack on their character has no place in evaluating the value of a tool or a concept.

Please consider evaluating ideas based on their merit with regards to context, rather than delivering blanket considerations about people based on your prejudice.


cf. the history of Lisp.


>In mathematics, a monad is defined as a monoid in the category of endofunctors.

Sure. But like everything in math what it really is is a thing that the mathematician has come to know through application of effort. The definition is a portion of a cage built to hold the concept in place so the next mathematician can come along and expend effort to know it.

You've probably done this yourself, just maybe not that deep in the abstraction tower (though you'd be surprised how abstract some everyday things can be). For example, you may have internalized the fact about division of integers that every integer has a unique prime factorization. This is an important part of seeing what multiplication is, but it's not part of the tower of abstraction upon which multiplication is built.

Mathematicians tend to end up being unconscious or conscious Platonists because mathematicians are trained to see the mathematics itself.


The definition of a monoid here is not the usual definition; it is a new definition for the special case of a strict monoidal category, as defined in MacLane's book Categories for the Working Mathematician. If you are thinking about composition of endofunctors and the unit (identity) endofunctor, you get a confused picture. "The monad as a monoid in the category of endofunctors" is a way to show that you can confuse people by using two different definitions of a usual concept, where both give different results. I got this from (1), look for "main confusion": a monoid in the category of endofunctors is defined in a new way, and is not the expected monoid on the set of all endofunctors.

The definition of a monoid in a monoidal category is on page 166, and monads in a category are on page 133. As a math person I know what a monoid is (in the usual sense), but I did not know what a monoid in a monoidal category is (well, I know now, because it is on page 166 of the book).

(1) https://stackoverflow.com/questions/3870088/a-monad-is-just-...


I wonder what the point of using such a phrase is; it doesn't help you grok the concept of a monad. It can help you know that someone has given a new definition to sound cute, shame on them. By the way, I admire MacLane as a math person, but people seem to use category theory to sell snake oil. Category theory is a tool for giving names to diagrams and properties that come up frequently, to avoid repeating the same argument in proofs; it's DRY (don't repeat yourself, as in the Ruby motto). If you are an expert in category theory you can give short proofs of known facts. Category theory is like pipes in Unix: you pipe diagrams to show properties. The analogues of grep, sed, and awk are functors, categories, and natural transformations. The input is a collection of diagrams, and the output is a new diagram that has a universal property and receives a name and a collection of properties that are supposed to be useful for proving new theorems.


The phrase appears 6 chapters into a graduate mathematics text on category theory. If one reads the preceding chapters, it is a useful but pithy explanation of what a monad is, using terms which have all already been covered. Its use outside of that context is basically just a joke about the Haskell community being overly mathematical.
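
For what it's worth, the Haskell-flavoured reading (informal, and glossing over the size/strictness issues discussed upthread) is that functor composition plays the role of the tensor, and the monoid's multiplication and unit are join and return:

    import Control.Monad (join)

    -- The "monoid" operations, specialised to a Monad m:
    --   multiplication  mu  : m (m a) -> m a      (join)
    --   unit            eta : a -> m a            (return / pure)
    mu :: Monad m => m (m a) -> m a
    mu = join

    eta :: Monad m => a -> m a
    eta = return

    -- The monoid laws then become the monad laws, e.g. associativity:
    --   join . fmap join  ==  join . join          :: m (m (m a)) -> m a
    -- and the unit laws:
    --   join . fmap return == id == join . return  :: m a -> m a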


I agree with you, the phrase should be preceded with: Caution, this is a math joke, don't lose sleep trying to grok it.


But then there's https://shitpost.plover.com/m/monad.html

> The more I think about this, the more it seems to me that a monad is not at all a monoid in the category of endofunctors, but actually a monoidal subcategory.

> That's the problem.


Sometimes the definition is wrong (which should be a big hint that the definition is just pointing at something).

Like the definition of prime that I was taught in primary school, "a prime is a positive integer whose only divisors are itself and 1". The right definition is "a prime is a positive integer with exactly two divisors", because otherwise all interesting statements about primes would have to be confusingly rewritten as statements about primes greater than 1. (Sometimes we have theorems about "odd primes", i.e., not 2, but the important set, primes, definitely includes 2.)

I don't get the monad joke fwiw; I've done a ton of Haskell and a ton of math, but the Haskell wasn't deep abstraction and the math was differential geometry and the vicinity.


To make matters worse, it turns out that there are actually two different concepts commonly labelled by the word "prime", and in some highbrow contexts they are different -- so mathematicians actually define "prime" in an entirely different way: p is prime iff p|ab => (p|a or p|b). That is, a prime is something that can't divide the product of two numbers without dividing at least one of the numbers. The thing about having no nontrivial divisors is called being "irreducible".

(When are they different? When you're working with some kind of number other than the ordinary integers. For instance, suppose you decide to treat sqrt(-5) as an integer, so your numbers are now everything of the form a + b sqrt(-5) where a,b are ordinary integers. In this system of numbers, 3 is irreducible, but it isn't prime because (1+sqrt(-5)) (1-sqrt(-5)) = 6 which is a multiple of 3 -- but neither of those factors is a multiple of 3 even in this extended system of numbers.)


> primes greater than 1

I mean, many theorems about primes do not apply to 1. 1 is very often not considered a prime. And, as another commenter noted, there is not just one definition of prime.


This is an incredible quote:

> But like everything in math what it really is is a thing that the mathematician has come to know through application of effort. The definition is a portion of a cage built to hold the concept in place so the next mathematician can come along and expend effort to know it.


If I understand what you're saying, you're asking us not to confuse mathematical communication with actually doing mathematics ... sort of how Ramanujan was doing amazing math, but in a way that wasn't communicable to conventional mathematics until Hardy introduced the language for it?


> The definition is a portion of a cage built to hold the concept in place so the next mathematician can come along and expend effort to know it.

Well put. I'd say this extends beyond mathematics. All definitions are like that.


words form a net

hunt writhing concepts

some escape


Math is not the insurance policy. Your social skills and your ability to continually "sell" your worth to others are the insurance policy.


It's more like math is the underwriter, and the claim is that it's a stronger one than soft skills.


I always figured our domain (computation) is so vast that once programming is automated, so too are all the other jobs. If we get an AGI that can program our programs and learn to learn, it won't be hard to have it write a program to make sales calls, or gather user feedback, or build buzz for the company.

So don't worry, when it happens we can all rest because there won't be any need for our labor anyway.


This is my thought as well. Just don't be the first one automated. We will either have figured out post-work society or we will face some sort of collapse.


What about the pix2code example? It's conceivable that domain-specific automation will reduce the number of jobs that exist.


The pix2code paper is interesting, but it doesn't really answer the question of translating the UI requirements into the corresponding "business logic" - it's limited to producing the code that manipulates the "surface" elements, so to speak. The real challenge I think for an AGI is translating something like, "This button shows green, if the user has previously scored 10 or higher on the previous 5 tests, which have all been taken in the previous 6 weeks," into code.

Then there's the problem of edge cases - in this case, what if the user has not had an account for more than 6 weeks but has met the other conditions? Now the AGI has to detect that context and formulate the question back to the developer.
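
Just to make the edge-case point concrete, here's a rough sketch of that button rule (every name here is invented); even this toy version has to take a position on questions the English sentence never raised:

    import Data.Time (NominalDiffTime, UTCTime, diffUTCTime)

    data Test = Test { score :: Int, takenAt :: UTCTime }

    sixWeeks :: NominalDiffTime
    sixWeeks = 6 * 7 * 24 * 60 * 60   -- six weeks, in seconds

    buttonIsGreen :: UTCTime -> [Test] -> Bool
    buttonIsGreen now tests =
         length lastFive == 5                 -- edge case: fewer than 5 tests taken?
      && all ((>= 10) . score) lastFive
      && all (withinSixWeeks . takenAt) lastFive
      where
        lastFive = take 5 tests               -- assumes tests arrive newest-first
        withinSixWeeks t = diffUTCTime now t <= sixWeeks
        -- a test dated in the future would also count: another unasked question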

The "code will eat your coding job" hype sounds a lot like "we'll have self-driving cars all over the country by 2000" hype (yes, that hype did exist back then,) or going further back, "All of AI is done, we just need a few more decision rules" hype back in the seventies.

For sure, many coding frameworks are a lot simpler now than they were two decades ago, and yes, I think it has meant many aspects of digitized services are now much cheaper. You can build a Wix website for yourself, or a Shopify e-business, without paying a developer, which you needed to do in the year 2000. But the consequent growth in digital businesses has led to induced demand for more developers, as businesses constantly test the edges of these "no-code" services.

I would say we have reached some amount of saturation already. Anec-datally, it seems that if you segmented salaries by experience, you might find some amount of decline or stagnation in the lower levels of experience relative to a decade ago. So in that sense, the original point has some valence, but I don't think it has anything to do with "AI"


Assuming pix2code really did automate away traditional UI work, the development effort would then move to the next subdomain (e.g. a sales bot, etc).

I suspect as long as you're willing to learn and are competent, you should have a job until the final effort of a general AI self-learning programmer.


I'm also sceptical about pix2code, but the point is that domain-specific automation could conceivably reduce the number of overall jobs. The cost of switching domains is also non-negligible.

The question with domain-specific automation (and one of my takeaways from the article) isn't whether or not you'll have a job, but whether the effort you put into getting your current job is worthwhile.


I think the total number of jobs (programming + other) that humans can do economically might go down over time. Programmers can usually pick up the next ambitious project (e.g. a sales bot) when the old domain is no longer profitable.

I think Bartosz is saying that math and category theory are useful to learn because they work in a number of subdomains. They can help keep the domain-switching cost down somewhat.


Eh. Every time this comes up, what is often missed is that humans are complementary to machines, and we don't yet understand consciousness and identity all that well.

You’re right that the jobs as they exist today won’t in the future. Just like manufacturing replaced agriculture which was then replaced by services, there will be higher orders of creativity and problem solving that most humans will be engaged in the future.

There really isn’t a lack of problems to solve. Consider that we’ve barely scratched the surface of the earth and the farthest we’ve sent humans to is the Moon. The Space Industry might as well be the economic market that keeps expanding forever...


Humans look a lot like machines to me though. So it seems we are looking at the old machine competing against the new. And maybe the new machine is a hybrid of the old machine with some new feature built in. I don't think we'd call it human though, cyborg maybe.

One of the things I didn't like about Star Trek was that they have this super powerful computer and yet Picard has to ask Wesley to punch in the coordinates on the console and engage the warp coils. What kind of theater is this? Sure a floating monolithic slab of computation in space has less cinema appeal than a crew of humans, but a hardened machine seems more plausible for space travel. Humans can't sustain much more than 1g acceleration and don't live too long.


If everything else is automated, would sales calls even exist? Or any need for buzz?


It's hard to tell. I recently visited my homeland of Albania and was surprised to learn that things like food delivery / ridesharing apps did not exist at all. Someone who goes there could probably make a killing just copying what we have in the States. The future is unevenly distributed, as they say. While we automate away our problems in the first world the third world will be eagerly awaiting existing solutions.


Indeed, if no one (human) is working (and, why should they, if AI exists to do their jobs), who is getting paid? If humans aren’t getting paid, how do they buy goods and services? AI of this level of sophistication almost certainly implies some sort of post-scarcity economy.


It seems rather funny to write a treatise, with quicksort being a central example, where the shown code requires O(N) temporary space.

C/C++ programmers might not be good at category theory, but no one worthy of their salary would walk past a quicksort routine with O(N) memory without stopping and asking "Wait, what?"

Seriously, I remember when this used to be the first Haskell code shown on haskell.org homepage, and I had to stop and wonder if these folks were just trolling or if they are actually that oblivious of performance. If you want to promote Haskell, you could have hardly chosen a worse piece of code.
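
For reference, the code in question was presumably the classic demo (or something very close to it), which filters and concatenates fresh lists at every level of the recursion rather than partitioning in place:

    -- The famous "quicksort" demo: elegant, but it allocates new lists
    -- at every recursion level instead of swapping elements in place.
    quicksort :: Ord a => [a] -> [a]
    quicksort []       = []
    quicksort (p : xs) =
      quicksort [x | x <- xs, x < p] ++ [p] ++ quicksort [x | x <- xs, x >= p]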


The article addresses that issue; its core thesis is that AI supports declarative programming. One of the author's main points is that a sufficiently intelligent compiler would rewrite code to be better optimized and have better computational complexity, eliminating low-level C++ programming type jobs.

Category theory is there to support the creation of specifications that are both easy to understand for a human who knows category theory and easy to optimize for the AI, compared to the original quicksort example.

The author also thinks many HTML and JS type jobs will also disappear. What I am skeptical of is that while Go, Chess and Jeopardy are challenging, they are closed domains. I think people underestimate just how much complexity building CRUD apps involves. Just like we underestimated how difficult walking to a cupboard to retrieve a mug would be for AI.


> I’m sorry to say that, but C and C++ programmers will have to go.

I've heard that since I started my career in the early 90s and it's always interesting to compare what reasons people give why low-level languages are going to go away really soon now.

Other than that the article makes a lot of good points.


> I've heard that since I started my career in the early 90s and it's always interesting to compare what reasons people give why low-level languages are going to go away really soon now.

The main reason: They'll go away when the graybeards retire and the companies fail without them.

For the last 5 years, I've been babysitting a C-based system that generates more than 70% of our corporate profit. The youngsters working on the 7+ year-old project to replace it are on the 3rd refactoring of the 2nd programming language codebase and the "architect" is already talking about rewriting in a new language for "productivity improvements." For the 4th year in a row, the first of five essential elements of the new system will go on line Next Year, leaving still more years of work until we can shut the old system down and let the C programmers go (except we all know more new languages than the new guys, as we all have loads of free time - our old system is quite reliable). Without a substantial payment directly into my kids' trust fund, I have no intention of delaying retirement by a day - I have way too many side projects to explore!


> it's always interesting to compare what reasons people give why low-level languages are going to go away really soon now.

Because C has killed off most assembly language already, compared to the 1970s.

Because you can talk about writing C++ for embedded systems without busting up laughing.

Because "embedded" means ARM and not TMS1000 these days.

Because not even BART is still running PDP-8s in production anymore.

Because the extreme low end changes slowly, but it does change.


> The next job on the chopping block, in my opinion, is that of a human optimizer. In fact the only reason it hasn’t been eliminated yet is economical. It’s still cheaper to hire people to optimize code than it is to invest in the necessary infrastructure.

What would that infrastructure be, exactly? You can write a program that performs any mathematical operation. A program that can handle any input program, and optimize it as well as any human could, would need to be an AI with at least human intelligence, and a deep understanding of all known mathematics.

But then we're told that mathematical knowledge is a defense against automation? One wonders why we don't just hand all the math off to this optimization AI.


If the author is right, what subfields of mathematics will likely be the most salient? Linear Algebra? Stats? Category Theory? Isn't it possible that the species of math you invest in will turn out to not be so valuable in an AI-driven future? Or is the hope that a baseline mathematical fluency will help engineers pivot no matter what?


The author isn't right. He's right in some aspects (I'd say in 5% of what he conjectures), but for the most part, programmers and programming will be in quite high demand for the foreseeable future. Yes, C / C++ programming might become less important (see: memory wall) and I'd say highly parallel computation will take precedence, but this won't replace computer science or programming... it'll just create the need for programmers to transition from one skill set into another.

Also, as a note: the field of computer science is a branch of mathematics. Whenever you deal with data structures or algorithms, you're dealing with how to procedurally design information structures and procedures to solve problems and model the real world. Mathematics? Well, depending on what branch you're into, it's using symbols and information ...to, well, do the same thing!

Out of all of the professions, it seems rather silly to me to think that our own profession is in danger of being replaced. I would think that it's the LEAST likely one, since programmers, to me at least, represent workers who are paid to communicate and find ways to solve problems in the real world using a tool which will continue to improve and enable our species to excel and automate jobs filled with repetitive drudgery. Out of all of the professions, I’m not sure why the author would think that ours is likely to go out of demand soon. I would conjecture that us playing a prominent role in automating other jobs ensures that we stay in demand for quite a long time!

As far as the most useful mathematical branches are concerned – if you’re interested, the branches I find have the highest ability to help in solving real world problems are: calculus, linear algebra, probability and statistics, computer science (as already mentioned), convex and computational optimization, quantum mechanics, complex analysis, ordinary and partial differential equations, data mining and analysis, information theory … well, I could keep going, but I think you’d greatly benefit from checking out a great book which gives a pretty good overview on the key areas in applied mathematics: The Princeton Companion to Applied Mathematics. The above are just my own guesses, and it’s highly dependent on which problem you’re looking to solve.


The author is off his rocker. Just because he used cute phrases like "a monad is defined as a monoid in the category of endofunctors" doesn't mean he knows what he is talking about.

Rust is the best shot to replace C, and it has yet to figure out how to manage memory without a complicated ownership/borrowing/lifetimes model and without impacting performance.

That and Rust is probably 5-6 years ahead of its time. The future where Math, Cryptocurrencies, AI have eaten the world is still very far away.

So in my opinion, your best bet/insurance for the future is to learn Rust (which I'm doing); that being said I don't know what I'm talking about.


You cannot really predict that. Number theory was once considered "pure mathematics" in a sense that one could not even imagine practical applications for checking if a number is a prime or not. And then someone invented cryptography...

EDIT: For all those nitpicking downvoters: yes, I meant public key cryptography.


To be fair, cryptography and number theory coexisted without much overlap for centuries (millennia?). Then the rise of mechanized cryptanalysis forced us to look for hard-to-break ways to encrypt stuff, and prime factorization was a very good candidate.


That's not really a useful definition of "pure" mathematics.

Also cryptography is one of the oldest applications, and has interacted with a variety of sub-disciplines in mathematics for basically ever.


Cryptography predates number theory. Probably you mean public key cryptography.


At some level, just being able to do math.

Most people hate math, and there will always be some security in being willing to do things that most people hate. Engineers are assured, through the grapevine, that they will never use their math after college. They're right. Most of them will get so busy with organizing and arranging building blocks, that they'll forget their math. It makes sense to delegate the math work to a small fraction of people who enjoy it. Those people will never be great in number, and might not get paid any more, but will be valuable enough to have stable careers. The math doesn't even need to be particularly hard.

At least, so I hope. ;-)


Category theory is fine but won't give you "anything new". I wasted a lot of time and money: plenty of big names were praising it (e.g. John Baez), so I followed them to a few conferences/talks. In the end, it's a nice theoretical framework to think about how to structure your code, but that's it. If you do want to get involved (and, like me, are coming from a computational background), the book from Bartosz (i.e. the guy from the post) tells you enough to get a good overview of the thing; it's also available for free online.

A second recommendation, a bit odd maybe: do not waste your time trying to engage with that community, as most people there are incredibly smug and toxic (but if you don't believe me, feel free to try it, ymmv). They are now pushing this idea that they're 'opening' the field to everybody and would like as many people as possible to jump on their bandwagon, but they are just trying to get some traction to keep leeching off grants, etc... they couldn't care less about you; I experienced this first-hand. You can learn most of these things on your own anyway, as is usual with most mathy things.

I don't disagree with the premise of the post: when the time comes, math will save your ass, for sure. Now, what works? As you mentioned, linear algebra and some intermediate statistics would greatly improve your chances. I would throw in some geometry as well; geometry has an inherent 'visual/spatial' feeling to it, but try to learn it in such a way that you become familiar with dealing with it in a completely abstract way. Regarding this, I have noticed a small surge of interest in Clifford algebras, geometric algebras, etc... if you have the time, pay attention to it; there are some great talks about it on YouTube, and this one has an amazing community that I would definitely recommend engaging with.

Also interesting to read about: some other algebras that are defined mathematically and are at the foundation of CS; things like the lambda calculus come to mind.

You don't have to become an expert in these fields to start seeing the benefits. Just build up the habit of reading about one or two topics per week, what they do, how they do it, and before you know it a year has gone by and you will feel much more confident in your skills. Anything that requires more attention will naturally drag you to it.

Finally, this guy Robert Ghrist is amazing (really!) https://www.math.upenn.edu/~ghrist/, check him out as well, his talks and other content. He also wrote a book on applied topology (free online as well) which is amazing. Even if you don't understand anything about it, give it a quick skim to get an idea of all the incredible things math can do for you that you could not have imagined existed!


'Category theory is fine but won't give you "anything new"'.

This is far from the truth and disregards a significant body of work in programming language theory (not to mention algebraic geometry and topology).

Perhaps most famously, monads, which have been widely adopted in the Haskell and Scala communities, originated in the category theory literature. Eugenio Moggi famously described how monads could be used to structure the denotational semantics of a programming language.

Monads are just the most obvious example of 'new' concepts which originate in the category theory literature and are practically realised by the study of categorical semantics of programming languages. A brief investigation of the literature will reveal a vast array of concepts such as initial algebras/final coalgebras (used to implement safe recursion schemes), adjunctions (giving rise to free constructions such as the infamous free monad), profunctor optics (bidirectional data accessors). This is just to give a few examples in PLT and does not even touch upon lesser-known constructions in type theory.
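
As one concrete instance (a standard construction, sketched from memory, not taken from the article): the least fixed point of a functor together with its fold is the skeleton of those recursion schemes.

    {-# LANGUAGE DeriveFunctor #-}

    -- The least fixed point of a functor f, and its fold (a catamorphism):
    -- recursion is packaged once, here, instead of being re-done by hand.
    newtype Fix f = Fix (f (Fix f))

    cata :: Functor f => (f a -> a) -> Fix f -> a
    cata alg (Fix x) = alg (fmap (cata alg) x)

    -- Example: a tiny expression language as the fixed point of ExprF.
    data ExprF r = Lit Int | Add r r deriving Functor

    eval :: Fix ExprF -> Int
    eval = cata step
      where
        step (Lit n)   = n
        step (Add a b) = a + b

    -- eval (Fix (Add (Fix (Lit 1)) (Fix (Lit 2))))  ==>  3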

‘In the end, it's a nice theoretical framework to think about how to structure your code but that's it.’ This comes across as overly dismissive of the problem of ‘how to structure code’, which is in no way trivial. Formalising desirable properties of programs, and designing languages/constructs which restrict to programs exhibiting these desirable properties, is a general focus of programming language theory. ’A nice theoretical framework’ in which such problems can be formalised is very significant, and the comments that programming language theorists working on categorical semantics are “incredibly smug and toxic” and “are just trying to get some traction to keep leeching off grants” are especially insulting to many wonderful people working in this domain.


Seconded. Robert Ghrist’s Elementary Applied Topology is one of my favorite books. A real eye opener if you haven’t seen some of it before.


Data science seems like a gateway drug to doing more math.

I’ve been working through Joel Grus’ Data Science from Scratch,

https://www.amazon.com/Data-Science-Scratch-Principles-Pytho...

rewriting the Python examples in Swift:

https://github.com/melling/data-science-from-scratch-swift


If the author is right, how long will it take to get there? I've been working in the field for 25 years. Frankly what I do today isn't that much different than what I did 25 years ago and we are busy trying to hire more people who do the same types of things. I'll probably retire in the next 15-20 years. Will this wholesale change proposed by the author take place within that time frame? Based on past experience I doubt it.


To add to this: In my career I've personally witnessed so many instances of people saying "Well, this can all be automated..." and then starting enormous projects to automate a process that is currently the sole focus of literally hundreds of engineers. As if the reason there are hundreds of engineers doing this job is that all those guys are just idiots. The result is these massively ambitious projects that promise senior management the world, drag on forever, often justifying themselves with hand-optimized toy examples to show progress. In the end the actual problem is very obviously intractable and years of progress are wasted.

It might be true that eventually we can solve "implementing cross-platform UIs in the general case" but we could also be literally 100 years away from achieving that, and in the mean time the fact that it's theoretically possible is worthless.


I think it is a question of degree. AGI could be 100 years or 1000 years away; maybe it's something beyond our capabilities entirely. But we will probably find ways to augment our own programming tasks to the extent that what humans need to work on is a moving target.


Interesting. If languages like C and C++ are really first to be vulnerable, why are they the ones used for building AI (computer vision) instead of category theoretic languages?


I suppose if someone successfully created an ML-powered optimizer, powerful enough to level the playing field with C++ as the author suggests, in a higher-level language then they could run it on itself and there would be no need for C++ in the loop. Training the first unoptimized version of the model might be expensive, but if Google can throw $1.4 million of capacity (https://twitter.com/eturner303/status/1223976313544773634) at training a chatbot...


The author is saying that the main reason to use something like C++ instead of Python or even Java is for speed. He assumes that optimizing for speed is a problem that can fit neatly under the machine learning domain.

If a machine can optimize performance better than humans, then it would not make sense to use C++, in the context of performance.


We have JIT compilers that can optimize "better" than static ones. People are still using the latter because there are other benefits that nobody from the ivory tower can beat.


When someone asks me, "Why don't you have X insurance?"

My reply is, "Because I can do math."

Was surprised to find the article wasn't about that.


I think this article gives AI more credit than it has demonstrated at present, and it oversimplifies the examples.

It's useful, I expect, for quick fixes/guidance like the loop example.

For example, on improving performance: these days that often needs a holistic architectural rethink, surely a creative process? The idea of optimization centred on a loop seems very distant from the heterogeneous, asynchronous behavior of modern hardware.

If AI really does start to solve things in the more 'general' way, not just a bit of object recognition here and there, won't software developers incorporate it as part of their process instead, enabling even more sophisticated software to be written?

I think that is the key part of longevity to software development as a career. The compiler didn't remove the assembly programmer it simply enabled a whole new level of software complexity to be feasible.


This article purports to be talking about math but then goes off down some insular functional programming rabbit hole.

For what it's worth, I can easily accept that Haskell programmers' career prospects will not be altered one whit by improvements in optimisation and automation...

P.S. Haskell is not math: https://dl.acm.org/doi/10.1017/S0956796808007004 https://www.cs.hmc.edu/~oneill/papers/Sieve-JFP.pdf


The paper you linked shows that one Haskell implementation does not exactly correspond to a specific algorithm, then gives an alternative definition which does correspond. What does that have to do with the statement 'Haskell is not math'?


It's a salient example of functional programming pretending to demonstrate an elegant expression of pure mathematics in code, while actually being nothing of the sort.


> I can’t help but see parallels with Ancient Greece. The Ancient Greeks made tremendous breakthroughs in philosophy and mathematics–just think about Plato, Socrates, Euclid, or Pythagoras–but they had no technology to speak of.

I can’t help but remark how completely untrue this sounds once you have read the magnificent and forgotten book by Lucio Russo, see [1]. The Greeks of the third and second centuries BC had plenty of tech, and they were much more modern than the new scientists of the sixteenth and seventeenth centuries.

[1] https://news.ycombinator.com/item?id=22409445


In case anyone is interested, the author also wrote the excellent "Category Theory for Programmers", available in print or online:

https://bartoszmilewski.com/2014/10/28/category-theory-for-p...

If you're interested in what Category Theory is about, it's a great place to start for people with a background in programming but not necessarily mathematics.


I agree with the OP that junk programming will likely die. But so will junk math. I don't think programming is going to be automated anytime soon, but I'd imagine that the inputs of an optimizer capable of doing so would look more like vagueish business goals and policy constraints that businessmen / politicians like to write, rather than some functional monad, which I guess is why they still continue to be the master character classes of humanity.


I'm skeptical of these sorts of claims for two reasons.

1. Creativity. What objective function do you optimize to write the first Super Mario Bros? Can you then get from there to RocketLeague or Braid? (I think not).

2. Imagine that you somehow obtain a magical technology that takes in a natural language spec and emits highly-optimized code, just like a human. Who's writing that spec?

For interactions with actual humans, there's usually a professional drafter (lawyer, contracting officer, etc) carefully specifying what the other parties should and should not do. We'd presumably need the same thing even with some fancy AGI. This is perhaps a bit different from worrying about whether foo() returns a Tuple or List, but it's not totally dissimilar from programming.


I agree with your first point; such an objective function would have to optimise something extremely abstract, like player enjoyment.

As for the second, the point is not to have a specification on the human side. Most of the time when communicating with others we don't need lawyers, and spoken contracts are valid and binding by law. Even with lawyers, contracts can be disputed in court, as there is not a formal definition of every exact scenario.

Assuming we had the power to do (1), all we'd need in (2) is something that doesn't provide unexpected outcomes.


If you don't have some kind of specification, how are you going to control what you get?

It's true that you can turn to a teammate and say "hey, can you write me a pipeline to import data from JSON files?" and you'll usually get something usable. However, you and your teammates have shared goals and background information about your particular project and the world at large, etc.

Projects go off the rails all the time because the generally intelligent (allegedly, anyway) humans don't share these things. Right now, the front page has an article called "Offshoring roulette" about this. If you don't want to click through, here's my story about a contract programmer working in the lab. I asked him to look into a problem: the software running our experiments crashed after a certain number of events occurred. It turned out that the events were being written into a fixed-size buffer, which was overflowing. This could have been fixed in many ways (flush it to disk more often, record events in groups, resize the buffer). However, he chose to make the entire saving function into a no-op. This quickly "fixed" the problem--but imagine my delight when the next few runs contained no data whatsoever. In retrospect, although this guy had a PhD, he wasn't particularly interested in the broader context, namely that it crashed while collecting data that I wanted.

An optimizer is going to take all kinds of crazy shortcuts like that unless it's somehow constrained by the spec. You could certainly imagine building lots of "do-what-I-mean" constraints into this optimizer but that requires even more magic.


As with some of the other commenters, I don't fully agree. Programming is messy, and most end users don't care about the beautiful abstractions you used. Junk programming will always exist, and just good enough programming will always be the mainstream.

When I'm purchasing a chair, I don't normally care about the detailed specifics of wood types, craftsmanship, and the specifics of the joins, nor does it matter if a master craftsman doesn't approve.


I've been hearing this for 25 years, since I was just starting college.

We are marginally closer to it now technologically, and probably actually significantly (as in, "more than marginally") farther away from it overall, because the explosion in programming use cases has greatly exceeded our improvement in the ability to automate.

Even the given example is beyond our reach right now! We already can't automate that simple specification of a sorting algorithm into an efficient one. How are we supposed to automate the creation of a graphics card driver with AI? We'd have much better leverage just applying better software engineering to that task, and I say that without particularly criticizing the people doing that work or anything... I just guarantee that they are not so well greased up with engineering that there's no improvement they can make bigger than "let's try to throw AI at it".

There are still places where category theory is a good idea. A lot of our distributed systems would be better off if someone was thinking in terms of CRDTs or lattices or something. But we're farther than ever from automating our computing tasks.


Self driving cars are not coming. Jobs are not being eliminated en masse. This is hyperventilation and is not borne out by history, facts, or reality. Don't fall for the scary AI nonsense.

https://medium.com/@seibelj/the-artificial-intelligence-scam...


I am not convinced. Most mathematicians embrace denotational semantics, but most engineers intuitively default to operational semantics, because ultimately, we need to make choices/trade-offs that have consequences which are too complex for any optimiser which lacks a holistic view of the system our system is part of.

Operational semantics are actually a higher order logic than Category theory when expressed in a Geometry of Interaction (GoI) grammar.

https://ncatlab.org/nlab/show/Geometry+of+Interaction

I don't have the fancy nomenclature to utter the Mathematical phrase "endomorphism on the object A⊸B.", but I intuitively understand what operational semantics are and why they are useful to me. When a programming language/grammar comes along which implements most of the design-patterns I need/use to turn my intuitions into behavioural specifications - I am still going to be more productive than a Mathematician because I will not have to pay the (upfront cost) of learning a denotational vocabulary. The compiler will do it for me, right?

In the words of the late Ed Nelson: The dwelling place of meaning is syntax; semantics is the home of illusion.

Languages are human interfaces. Computer languages are better interfaces than Mathematics because we design (and evolve them) to be usable by the average human, not Mathematicians. Good interfaces lower the barrier to entry by allowing plebs like myself to stand on the shoulder of giants. Mathematics expects everybody to be a giant.

And in so far as dealing with ambiguity goes, Programming languages are way less ambiguous than Mathematical notation!

Ink on paper has no source code - no context.


> As long as the dumb AI is unable to guess our wishes, there will be a need to specify them using a precise language. We already have such language, it’s called math. The advantage of math is that it was invented for humans, not for machines. It solves the basic problem of formalizing our thought process, so it can be reliably transmitted and verified

The same is true of most programming languages - they were all made for humans. Math has the advantage of being able to prove formally that it solves the problem, but that isn’t a requirement for most software.

> If we tried to express the same ideas in C++, we would very quickly get completely lost.

Same is true for many things that you can express in C++, but not in Math.

Math used this way is essentially just another programming language - with massive advantages in some circumstances, and massive disadvantages in others - I can’t find any argument in this article as to why it is a better bet than any other language.


It’s a little weird for this article to focus on qsort, as surely the key point is that the algorithm doesn’t have to be stated at all, just the requirements -- that the output is sorted.

This doesn’t necessarily invalidate the argument, but it means all the concrete examples seem rather beside the point.
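
Something like the following is the entire "what" (a toy rendering; stating "permutation" as multiset equality is just one convenient choice):

    import qualified Data.Map.Strict as M

    -- The whole requirement: the output is ordered, and it has exactly the
    -- same elements (with multiplicity) as the input.
    isSorted :: Ord a => [a] -> Bool
    isSorted xs = and (zipWith (<=) xs (drop 1 xs))

    sameElements :: Ord a => [a] -> [a] -> Bool
    sameElements xs ys = count xs == count ys
      where count = M.fromListWith (+) . map (\x -> (x, 1 :: Int))

    isValidSortOf :: Ord a => [a] -> [a] -> Bool
    isValidSortOf output input = isSorted output && sameElements output input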


Lines like this:

> The AI will eventually be able to implement any reasonable program, as long as it gets a precise enough specification

make me less worried about the job apocalypse.

It feels to me like getting good, consistent specifications for software is incredibly hard, particularly if some unbound set of humans is meant to interact with it.

Delivering domain specifications for even simplistic "things" is incredibly hard:

https://www.youtube.com/watch?v=vNOTDn3D_RI&t=2782s


"So any time you get bored with your work, take note: you are probably doing something that a computer could do better."

So can I send the computer to the meetings for me while I carry on coding?


> One such task is the implementation of user interfaces. All this code that’s behind various buttons, input fields, sliders, etc., is pretty much standard. Granted, you have to put a lot of effort to make the code portable to a myriad of platforms: various desktops, web browsers, phones, watches, fridges, etc. But that’s exactly the kind of expertise that is easily codified.

Good luck. There will always be a need for bespoke UI. Just as there's always a need for bespoke anything that can otherwise be mass produced.



I don't quite understand this conflation of mathematics with category theory. This obsession of some programmers with mathematics, actually with one tiny part of it, category theory, looks very strange to me. By chance, there is this recent comment by Scott Aaronson, a strong mathematician who is a rising star in quantum computing, which contains what is probably the most balanced view.

I quote from the source [0] the relevant part: "With some things I don’t understand well (nuclear physics, representation theory), there are nevertheless short “certificates / NP witnesses of importance” that prove to me that the effort to understand them would be amply repaid. [...] And then, alas, there are bodies of thought for which I’ve found neither certificates or anti-certificates—like category theory, or programming language semantics [...] For those I simply wish the theorizers well, and wait around for someone who will show me why I can’t not study what they found."

[0] https://www.scottaaronson.com/blog/?p=4616#comment-1830447


We already do automate jobs by improving our development environments. A better language can eliminate important classes of bugs, and once a reliable library is available, you don't have to write it yourself. The tools do get better, gradually.

This doesn't seem to be happening particularly quickly, though, and it's not clear that it's accelerating. Setting standards is a social process and it often takes years for new language changes to get widely deployed.

Another thing that slows things down is that every so often some of us decide that what we have is terrible and start over with a new language, so all the development tools and libraries have to be rebuilt anew, and hopefully better.

I expect machine learning will result in nicer tools too, but existing standards are entrenched and not that easy to replace, even when they are far from state-of-the-art.


> You might think that programmers are expensive–the salaries of programmers are quite respectable in comparison to other industries. But if this were true, a lot more effort would go into improving programmers’ productivity, in particular in creating better tools.

...

> I am immensely impressed with the progress companies like Google or IBM made in playing go, chess, and Jeopardy, but I keep asking myself, why don’t they invest all this effort in programming technology?

It seems like Google does invest in programming technology, but a lot of that tech is internal. Google spends an order of magnitude more money on employee productivity than any other place I've worked, but that's probably because at previous jobs we spent <<1% of salary on tools and didn't have economies of scale.


Programming productivity gets absolutely massive investment -- secret, proprietary, and open source alike.

The reason it seems otherwise is that software has an infinite appetite for increased productivity, since it lacks the friction and energy costs found in almost every other endeavor. There are essentially two throttles on the exponential improvement in computing: (1) the speed of electromagnetic objects, and (2) the speed at which humans can learn newly invented things and use them to invent newer ones.


It's interesting that he thinks UI will be 'the first to be automated,' but in my experience the variation and creativity, the constraints and rules, not to mention the animations, that emerge from a UI design are so varied and complex that, although the claim seems intuitively correct, history has told us it's anything but.

Arguably the best way to describe an interface is through a declarative programming language, and unless we're saying that 'creativity in UI design is superfluous to our future requirements' it seems like this will remain the case for the foreseeable future.


+10000

Mathematics is a user interface. Black ink on white paper is the medium we've used to communicate complex concepts and ideas for thousands of years.

In 2020 there are better communication mediums. Interactive mediums.


"One such task is the implementation of user interfaces." Clearly author has not used image to code tools before. The amount of junk code that is created through these tools is unusable in production.


For now. But the very fact that the code compiles and does its basic tasks is already an achievement.


> So the programmers of the future will stop telling the computer how to perform a given task; rather they will specify what to do. In other words, declarative programming will overtake imperative programming.

This has been happening for a very long time through abstraction. Layers upon layers of computing "just work" without the programmer having to think about how or why. Things like GRPC/OpenAPI etc. make it conceivable that one day a product manager will just need to write the schema and methods and hit "Deploy to AWS".
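
As a tiny, language-level illustration of that "what vs. how" shift (my own toy example, nothing to do with the article or with GRPC), compare two ways of summing the squares of the even numbers in Haskell:

  -- "How": spell out the traversal and the accumulator by hand.
  sumSquaresHow :: [Int] -> Int
  sumSquaresHow = go 0
    where
      go acc []                 = acc
      go acc (x:xs) | even x    = go (acc + x * x) xs
                    | otherwise = go acc xs

  -- "What": state which values the result is built from and let the
  -- language worry about the traversal.
  sumSquaresWhat :: [Int] -> Int
  sumSquaresWhat xs = sum [x * x | x <- xs, even x]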


> Experience tells us that it’s the boring menial jobs that get automated first.

I doubt that the economic drivers of automation consider if the job is boring or menial for the worker. I think this “experience” needs a source.

Furthermore, imagine working 40 years in a field you didn’t enjoy in order to have an “insurance policy”. (Just do what you like.)


> I doubt that the economic drivers of automation consider if the job is boring or menial for the worker. I think this “experience” needs a source

You are right, no one says "let's automate the boring jobs". What happens is that these types of jobs naturally select themselves: because they are uncomplicated, they are prime targets for automation.


The one-liner quicksort implementation in Haskell is only really possible because a good chunk of the hard work is handled by a partition library function... I'm not sure how different that is from just calling quicksort from a library in any other high-level language.
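
For reference, one common rendering of that snippet (slightly expanded from the one-liner form for readability; treat it as an illustration, not a quote of the article) leans on Data.List.partition:

  import Data.List (partition)

  quicksort :: Ord a => [a] -> [a]
  quicksort []     = []
  quicksort (x:xs) = quicksort smaller ++ [x] ++ quicksort larger
    where (smaller, larger) = partition (< x) xs

Essentially all of the interesting work, splitting the list around the pivot, lives in partition.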


Haskell quicksort isn't quicksort, it's an illustration of quicksort. It's not an in-place constant-memory implementation.

See also the Haskell Sieve of Eratosthenes, which isn't a Sieve of Eratosthenes, and is in fact even slower than naive trial division: https://www.cs.hmc.edu/~oneill/papers/Sieve-JFP.pdf
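
For anyone who hasn't seen it, the snippet that usually gets passed around (and that the paper analyzes) looks roughly like this:

  primes = sieve [2..]
    where sieve (p:xs) = p : sieve [x | x <- xs, x `mod` p > 0]

Elegant, but it filters every remaining candidate through a divisibility test for each prime found so far instead of crossing off multiples, which is why it behaves more like trial division than a sieve.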


  partition p xs = foldr (select p) ([],[]) xs
  select p x ~(ts,fs) | p x       = (x:ts,fs)
                      | otherwise = (ts, x:fs)
That is the entirety of `partition` in the standard library.


It's odd that functional programming enthusiasts always love to put the mantle of "math" on their favorite functional tidbits, while real mathematicians write imperative, stateful (and often sloppy) Python.


You may well be talking about different kinds of math here.

Most of what functional programmers / theorists tend to care about comes from subfields like logic, abstract algebra, and the like.

The "real mathematicians" that you mention are most likely working in other fields like linear algebra, statistics etc. While it is certainly possible that logicians and algebraists work with sloppy python (in as much as they write any code at all), but I don't feel that it had be a great fit.


Plenty of serious researchers in top universities are studying the intersection of mathematics and functional programming, and I don't think that they're somehow not 'real mathematicians'.

Here's a paper by the author of this post, formalising a popular Haskell pattern in category theory: https://arxiv.org/abs/2001.07488

EDIT: switched to non-PDF link.


I agree that a lot of software will be done declaratively rather than imperatively. But we will create DSLs for that. I don't think we will use math for that. And somebody needs to create the DSLs.


So why hasn't Excel already eaten our lunches?

It's the most popular and successful programming language ever, but it hasn't taken over the whole market for programming. Why not?


I got interested in programming in 2000 and graduated with a CS degree in 2009. Throughout the 2000s (remember, this was post-dot-com) I was told software was a bad field to get into, for two reasons:

1) The advent of advanced tooling would make software engineers unnecessary. Tools like Visual Basic and UML diagrams were the tip of the iceberg of "visual coding" where a business person could just specify requirements and the software would be automatically created.

2) Jobs were all going to go overseas. There is no reason to pay a programmer in California 10x what you can pay someone in Mumbai. It's better to study something like finance where the secret domain knowledge is held within the chambers of ibankers in Manhattan. The future of tech startups is a few product managers in NYC outsourcing the coding work to India.

There's also a 3rd argument that gets floated around: that the field will be oversaturated, "everyone" is learning to code, etc. In 2015 I asked a startup for 125k and they told me that, while that did seem to be the market rate at the time, they thought salaries had peaked and were heading in the opposite direction. In 2020 you probably can't hire a bootcamp grad for 125k.

Since then the field has exploded and wages have gone off the charts, but I still hear the same type of arguments over and over.

In 2020, you hear stuff like:

1) AI is the future, it's going to automate away all of the menial programming jobs.

2) The Bay Area is overcrowded and all the jobs are going remote. The future of the tech startup is a few marketing execs in SF outsourcing the menial tech work to flyover states.

Personally, I didn't believe the hype then and don't believe the hype now. I find it amusing that the author questions the wisdom of Google for not using AI to automate development, as if that's never occurred to Google.

Of course the author is right that tech is a treadmill: new skills move into the spotlight while old ones become outdated. Even then, the mass of legacy code means consultants will have lucrative jobs taking care of it for a long time.

In my experience, new tooling always creates more software jobs, not fewer. Software is not like high-frequency trading: the more people compete to make software, the more people we seem to need to make software.

Sure, the Bay Area is getting insanely expensive, but Google still tries to fit as many people as it can into Mountain View, and Facebook still crams as many as it can into Menlo Park. Every three months some VC has the bright idea: what if we just pay people to do the lowly engineering work somewhere else and keep only the execs in the Bay Area? And 99% of those startups go nowhere, while Google is worth a trillion dollars.

There is a very intuitive line of reasoning that goes: software can be done by machines, and software can be done anywhere. There is a thread of truth to both narratives, but it leads people to the very incorrect conclusions that there won't be as many software jobs in the future and that they will be done in cheap places.

Despite all that intuition, and all the logical narratives about costs and automation, groups of people dedicated to technology in the same physical room have defied it. Virtually every extremely important software company grew on the West Coast of the USA, in some of the most expensive places in the world, and the more software tools have improved, the more headcount these companies have had. So take all this intuition about the end of days for software engineers with a huge grain of salt.


This is historicism. Take, for example, the idea that because compilers have eliminated most hand-optimization, the process will inexorably continue, moving further up the chain of abstraction until "trivial programming" has been eliminated. The author thinks he's derived some law of historical progress. Along these lines, many smart people have predicted the "end of labor" since at least Marx.

I think that predicting the future is hard and most people are better off "optimizing local minima".



