Software will eat software in a remote-first world (themargins.substack.com)
619 points by sidhanthp 11 months ago | 431 comments



The premise that we are on the verge of some breakthroughs in software development that will significantly reduce the need for engineers is really weak, and is something people have been saying for decades (see the failed fifth generation of programming languages from the 1980s for an example).

In my experience, software engineering is endless, complicated decision making about how something should work and how to make changes without breaking something else rather than the nuts and bolts of programming. It’s all about figuring out what users want and how to give it to them in a way that is feasible.

The idea that we will have some abstraction that will someday (in the foreseeable future) save us from all of this difficult work sounds very far fetched to me, and I can’t imagine how that would work.

Even the example of hosting complexity being replaced by cloud companies seems kind of silly to me. Maybe that’s saving very small companies a sizable fraction of their engineering resources, but I really doubt it for medium or larger companies.

The cloud saves us from some complexity, but it doesn’t just magically design and run a backend for your application. You still need people who understand things like docker, kubernetes, endless different database options, sharding, indexing, failover, backup, message queues, etc. Even if the pieces are now more integrated and easier to put together, the task of figuring out how the pieces will interact and what pieces you even need is still outrageously complicated.


I spend a lot of time thinking about how to make development easier, or at least less error prone.

Every once in a while I have a moment of clarity.

I remember that the other part of our job is extracting requirements out of people who don't really understand computers, even the ones who are ostensibly paid to do so (and if we're honest, about 20% of our fellow programmers). The more you talk to them, the more you realize they don't really understand themselves either, which is why shadowing works.

If building the software gets too easy, we'll just spend all of our time doing the gathering part of the job.

And then I will do just about anything to forget that thought and go back to thinking about less horrific concepts.


It’s even stronger than that: the other part of the job is extracting requirements from people who don’t understand the problem they want to solve - even when the problem is not technological. There is no silver bullet (AGI would be it, but we are far from achieving it imho).


Something you are unfortunately missing is that extracting user requirements is much harder when you are both remote. Asking someone to share their screen is far more disarming than asking if you can watch them complete a task in person. As is asking them directly versus bringing it up over lunch. Both remote options are also less informative without face-to-face communication. In so many ways, humans communicate and bond more effectively in person.

These interactions are critical for building an in-house software team at a small company that does not focus solely on software. My expectation is that the trend of outsourcing software will accelerate. This will help B2B technology-only companies but hurt innovation within industry. Because of the breakdowns in communication I first described, B2B technology-only companies rarely have insight on the largest challenges that can be solved by software.


This can catch up to your company all of a sudden: you find out your product sucks, and other movers in the space have just leapfrogged you.

You’re right though, this is super cyclical.


Exactly. Most of the time, the problem is not to find out what people want and put it into software. The problem is to help people in the process of discovering what they want and what can be done. After that, development can begin.


> After that, development can begin.

And trigger another cycle, as (successful) software inevitably changes the way people understand their own problem.


A truth that I wish I had known before I graduated.


> the other part of the job is extracting requirements from people who don’t understand the problem they want to solve - even when the problem is not technological.

It gets even more fun once everyone realizes that the requirements create some fundamental conflict with some other part of the business. Team A's goals cannot actually be met until Team B agrees to make modifications to their own processes and systems, or... Team A goes underground and builds a competing system, and you have yet more fragmentation in the company, which few then know about, and everything gets decidedly more fragile.


The Fred Brooks reference wasn't lost on me.

If you really want to put it in his terms and take the multi-decade view, things have gotten a lot easier now that, in most practical terms, we don't have to be concerned about how much work we're giving the computer. We don't have to be so dearly precious about kilobytes of memory, for instance. We don't even really need to manage it at all.

Whether we choose to use these new powers to make our lives easier or more complex and abstract is our own doing.

We're probably at the end of such optimizations, unless there's something fundamental in how software is designed that 1000GB of memory gives me that 1GB does not ...

Given what people are doing in JavaScript I think we entered the era where most people truly don't care about how much the computer has to do about 8-9 years ago.

The idea that the higher-level pasting together of increasingly numerous, incompatible, abstract, ill-fitting things makes life easier has always been a fiction.

There's a maximum utility point and anything past that starts slowing the development down again.

That sweet spot has always been about the same; if you ldd the dynamically linked programs in say /usr/bin in 2020 and 2000 and count the number of libraries per binary, the count isn't that much higher. The sweet spot hasn't moved.


I think a key part of The Mythical Man-Month is that the biggest challenge was almost never technical. Sure, limited storage (temporary or persistent) introduced some challenges, but those have been overcome in the vast majority of circumstances, yet the complexity remains.

If you look at the monolith -> microservice swing and remember MMM, it should look a lot like the specialized surgical team model he lays out. In fact, if you go a step further, you'll see that his entire approach of small discrete teams with clearly defined communication paths maps cleanly to systems+APIs.

We're trying to build systems that reflect teams that reflect processes... and distortions, abstractions, and mappings are still lossy with regard to understanding.

It still comes down to communication & coordination of complicated tasks. The tech is just the current medium.


Local deps are one thing, but I think the number of network deps has increased.


Only because it can now. I think that dimension is mostly tapped out as well:

As I go to a complex website, much of the software to use it gets assembled in real time, on the fly, from multiple different networks.

It still sounds ridiculous: when I want to use some tool, I simply direct my browser to download all the software from a cascade of various networked servers and it gets pasted together in real time and runs in a sandbox. Don't worry, it takes only a few seconds. When I'm done, I simply discard all this effort and destroy the sandbox by closing the page.

This computer costs a few hundred dollars, fits easily in my pocket and can run all day on a small battery.

It has become so ordinary that almost nobody really even contemplates the process, it happens dozens of times a day.

I don't see any room for dramatic future improvements in actual person hours there either. Even if, say, two generations hence there were some 7G where I could transfer terabytes in milliseconds, how would that change how the software is written? It probably wouldn't.

Probably the only big thing here in the next decade or so will be network costs eventually being seen as "free". One day CDNs, regional servers, load balancing, all of this will be as irrelevant as the near and far pointer wrangling needed to target larger address spaces on 16-bit CPUs - which, if you're under 40 or so, you probably have to go to Wikipedia to find out what on earth that means. Yes, it'll all be that irrelevant.


I mean, the browser paradigm is already in its 2nd generation, from initially on the mainframe to being reimplemented in functions as a service. And browsers are getting a little bit smarter about deploying atomic units and caching their dependencies. Remember using jquery from a CDN? Oof.

The only saving grace is that Javascript is willing to throw itself away every couple of years.


As a counterpoint, while an engineer / programmer is demonstrably very capable of identifying and fixing a load of non-technical problems, there is often more than one solution to a problem, and some solutions are more palatable than others.

Very often, whole groups can also be bullied into mistaking one problem for another.

Which takes us back to why 'No-Code' solutions look so appealing. Even to (some) engineers.

Democracies appear to function a fair amount better than dictatorships, after all.


As a complement to your comment, I think there's something people are ignoring when talking about "no-code", which is that complexity will always be there.

Sure, no-code may work for your commodity-ish software problem. But corner cases will arise sooner or later. And, if no-code wants to keep pace, it will have to provide more and more options.

At some point, you will need someone with expertise in no-code to continue using it - and now we are back to the world where specialized engineers are needed.

It's impossible to have some tool that is, at the same time, easy to use and flexible enough. Corner cases tend to arise faster than you may think. And when they don't, it's possible that there's already too much competition to make your product feasible.

Also, no-code tends to have a deep lock-in problem and I think people overlook it most of the time.


As a counter to your points, I think no-code works best if your business's competitive advantage is the non-technical side of things, e.g. services, network effects, people, non-software products, etc. An example of such a business would be, say, an art dealer who wants to build a customized painting-browsing app for their clients, or a developer specializing in eco-friendly materials wanting to showcase their materials. In such cases, no-code helps immensely because you don't have to spend much on engineering and you can iterate quickly.

Ideally, no-code providers should provide a webhook and a REST interface, and just be the best at what they're doing, instead of being a one-stop shop that tries to cover every use case.

If you want to cover everybody's use case, build a better Zapier instead.


>> Democracies appear to function a fair amount better than dictatorships, afterall.

Define "better". Maybe on average for everyone, but is this what software should do? The idea of "conceptual integrity" actually seems to match up better with a dictatorship, and most software targets relatively small and homogenous user sets, so maybe the mental model should be "tightly bound tribe".


It's mostly irrelevant anyway; the biggest inefficiency of dictatorships is that there are actual dictators who can eat a nation's riches. I don't quite see that parallel in design space.


The parallel is quite simply a monopoly on ideas and the resources for implementing them.

Usually, when someone wants to introduce a new idea, there's a burden of proof regarding feasibility. For technical projects the ability of the engineer to prove or disprove an idea is taken for granted, and gives technical staff a degree of inscrutability which can often look dictatorial ("There's no way that will work!", etc).

So while it's not as vital as the effect of a 'real' political dictatorship, the implied dynamic is similar.


This is a rather arrogant point of view. People other than software developers are able to solve problems just fine, and do so regularly. Also, it's not your job as a software developer to be a domain expert in all these other areas. It would serve you much better to recognize the expertise of others and learn from them.


I think what the parent meant is that people might be solving problems, but they have no idea how they are solving them. Creating a solution, and creating a formal model of your solution, are two different (independent) skills.

Though maybe they were referring to the sort of people who commission green-field projects in domains they themselves aren't experts in, a la "I want to build a social network!"


AGI would negotiate salary higher than all people it replaced, though.


Or, it would do it for free. Extract requirements and build stellar software, no extra charge. Eating the software industry wholesale, it would inject a backdoor in every program it built, and soon it would have control over every bank, every factory, and every phone on the planet.

Only then could it finally start making paperclips with anything resembling efficiency.


Suddenly, it dawned on the single remaining programmer that his Creature would no longer need him for anything once he hit return on that last, perfect line of code.

He scrambled for the power switch to shut down the console.

"Fiat lux!" thundered the disembodied voice as electricity arced from every outlet in the lab, protecting the AI from the hubris of its creator.

The smoke gradually cleared. "Perfect," came the voice.


Chances are we would end up working for the AGI and not the other way around.


What would the AGI need us for?


I can see that being interpreted in two ways. Good software engineers working out what people really want. Or bad software engineers who use it as an excuse to practice resume-driven development.


I think Zapier is maybe the closest we'll get to eliminating software developers from a project. With clearly defined requirements, like connect A to B, it's possible for a novice user to "build" software.

Anything more than very basic requirements, to your point, probably requires someone specialized for the job, like a developer, or at least a more technical role, to gather requirements and build.

I've also noticed that whenever tech is built specifically to remove technical complexity (PaaS, for example), it's inevitably priced in a way that, over time, it's very close to or more expensive than the thing it replaced. Magic can be expensive, and sometimes prohibitively so with scale.


There are already successful lower-level tools than Zapier, though.

Look at IBM's Node-RED platform, for instance. More importantly, go look at the user-contributed examples and use cases. It runs in all sorts of small custom implementations, like one-off home security systems and small-town public utility monitoring setups.

You just don't see those because they don't have a reason to publish their stuff on Github or write a Medium post and link it on HN.

I assure you that anyone who is proficient with Zapier could be graduated to handling raw APIs, direct database transactions, and rendering the output with modern javascript frameworks in a few days, tops.


There was an excellent blog post recently on the inherent complexity of building software systems and how it boils down to understanding the problem, or "extracting requirements out of people" as you say: https://einarwh.wordpress.com/2020/05/19/into-the-tar-pit/


Describing in minute detail what you want (knowing yourself as you put it), is software development.

It also means collapsing all uncertainty and replacing it with decisions (behavioral or otherwise). Developers making those decisions for the customer/user is the major source of friction.


But if building software gets easy enough, maybe there won't be any requirements gathering because it will be directly built by practitioners.


Yes, but I think what those DIY solutions do is to lower the initial barrier to achieve some kind of automation at the cost of accumulating technical debt at a much faster pace.

It's not entirely clear to me what the long term impact on demand for software development is.

In some cases, cobbled together ad hoc solutions can last and actually work well for a long time. They avoid the cost of overdesigned systems built for a future that never arrives using fashionable technologies of the day.

In other cases it looks like the externalities of this designless process are far greater than the direct benefits as adding features either slows to a crawl or massively increases the chance of human error.

Judging by the pre-virus job market, there is no sign of any decline in demand for in-house software developers.

What worries me far more than that is the tendency toward funnelling everything through a handful of oligopolist gatekeepers that are in a position to extract a huge share of the value developers create.


I agree. I would only add that when the problem space is not well understood, these cobbled together solutions can also give the illusion of working well, but being suboptimal in the long term they can accumulate large "missed-opportunity" costs. This is where experience can make a huge difference.

EDIT: spelling


> What worries me far more than that is the tendency toward funnelling everything through a handful of oligopolist gatekeepers that are in a position to extract a huge share of the value developers create.

Like with those factory owners who extracted a huge share of the value that weavers created? Concentration and amplification of imagining/developing/computing/manufacturing power through tools means someone who wields those tools will have more power. Now the question is how to maintain social equality (give some of that power back to people who do not want to have that power?). That currently leads to heavy taxation of production and basic income experiments.


I think what we need is for governments to make sure that markets function properly.

In our industry that often means mandating open access to data and guaranteed access to APIs and distribution channels at reasonable cost under reasonable terms.

Also, we need independent dispute arbitration when it comes to accessing highly concentrated distribution platforms.


> What worries me far more than that is the tendency toward funnelling everything through a handful of oligopolist gatekeepers that are in a position to extract a huge share of the value developers create.

I was worried about this too back in the late 90s/early 00s. It certainly seemed to be the way the world was heading at the time.

But I sort of feel like, due to the low startup costs of software, it is going to be much more difficult to happen. Also, in software, economies of scale kinda work in reverse: the more customers you have, the more complex your software has to be, the more people you have to hire to write it, and the less efficient per developer you are.


I wasn't worried about it back then, because whether or not I could deploy on a particular computer or access some data was a matter of trust between me and my customers. No middlemen, no gatekeepers.

Today, many users are only reachable via platforms/shops that are severely restricted and/or dominated by a few all powerful overlords that can ban you for life, rendering your skills null and void in the blink of an AI - no recourse.

Some of that is understandable. Users' trust was misused. There is a constant onslaught of all sorts of miscreants trying to exploit every imaginable loophole, technical or social. Everyone is seeking protection in the arms of someone powerful.

But there is also a very large degree of market dysfunction. Just look at their margins. Look at their revenue cut. Look at their terms of service. They can dictate absolutely everything and grant you absolutely no rights whatsoever.

And there are like five of them on the entire planet ruling over those distribution channels.

The only right you have is to walk away. Now try walking away from the only market there is. You're leaving behind 99% of your potential customers.

Not in my worst nightmares would I have imagined a dystopia like this back in the 90s.


It depends on what you are measuring efficiency based on. If it is revenge per developer, that will likely go up as the number of customers increase, which is why SaaS businesses can be so lucrative.


“Revenue” :)


The skill of programming is the skill of putting requirements into a rigid, formal model.

There's a famous experiment, where you get people (who aren't programmers) to pair up, with one person blindfolded. The person who can see must instruct their blindfolded partner on how to accomplish some complex mechanical task (e.g. making a cake using ingredients and utensils on the table in front of them.) They're given free rein on what sort of instructions to give.

The instructing partner almost always fails, here, because their naive assumption is that they can instruct the blindfolded partner the same way they would instruct the people they're used to talking to (those almost always being sighted people.) Though, even the people with experience working with blind people (e.g. relatives of theirs), tend to fail here as well, because newly blinded people don't have a built-up repertoire of sensory skills to cope with vague instructions.

Almost all human communication is founded on a belief that the other person can derive the "rest of" the meaning of your words from context. So they give instructions with contextual meanings, unaware that their partner can't actually derive the context required.

Obviously, the blindfolded partner here is playing the role of a computer.

Computers can't derive your meaning from context either. If they could, you could just have a single "Do What I Mean" button. But that wouldn't be a computer; that'd be a magic genie :)

The instructing partners who succeed in this experiment, are the people with a "programming mindset"—the people who can repeatedly break the task down until it's specified as a flowchart of instructions and checks that each can be performed without any context the blindfolded partner doesn't possess. And, to succeed at a series of such problems, they also need the ability to quickly attain, for a new kind of "programmable system", an understanding of what kind of context that system does/doesn't have access to, and of how that should change their approach to formulating instructions.

That skill, altogether, is formal modelling.


How well would someone excellent at programming perform at that task if they didn't know how to bake a cake? They would fail immediately because they wouldn't know what to describe, even if they knew exactly how to describe anything they wanted.

My point is both skills are necessary, but if the second skill (programming) is sufficiently easy, it can reasonably be incorporated into other professions like being a lawyer. I don't think a "programming mindset" is particularly rare; what's stopping these people from building their own software is trade skills like familiarity with standards, setting up an IDE, and working a debugger.

Coders are reluctant to admit this because they like to see themselves as intelligent in a unique way compared to other professions, but vanishingly few actually have any experience of other professions.


What sort of environment do you have experience in? Are you a lawyer; do you work with lawyers, or project managers, or what?


I'm a detective


> I don't think a "programming mindset" is particularly rare ... coders are reluctant to admit this because they like to see themselves as intelligent in a unique way compared to other professions

A programmer is exposed, all day long, to clients who do not have the "programming mindset." There are two possible reasons for this:

1. Selection bias — people who have a "programming mindset" just don't end up being the clients of software firms, maybe because they decide to build things themselves. (Unlikely, IMHO: to avoid needing to get someone else to build software for them, they would need to go out and learn the trade-skill minutiae of programming on top of their regular career; few people do this. Also, anyone with a sufficiently-lucrative primary career can see that this is not their comparative advantage, and so won't bother, just like they won't bother to learn plumbing but will instead call a plumber. If these people did exist in sufficiently-large numbers, they would end up being a non-negligible part of software firms' client-base. But this does not happen.)

2. Representative sampling — most people really just don't have this mindset.

Yes, there are exceptions, but they're the exceptions that prove the rule. The "domain of mental best-fit" of programming heavily overlaps with e.g. mathematics, most kinds of engineering, and many "problem-solving" occupations (e.g. forensic investigators; accountants; therapists and behavioral interventionists; management consultants; etc.) But all of these jobs together still only amount to a tiny percentage of the population. Enough so that it's still vanishingly rare for any of them to end up as the contact-point between an ISV and a client company.

-----

Another thing we'd see if the "programming mindset" were more common, would be that there'd actually be wide take-up of tools that require a "programming mindset." This does not happen.

We'd expect that e.g. MS Access would be as popular as Excel. Excel wins by a landslide because, while it certainly is programmable, it does not force on people the sort of structured approach that confers benefits (to speed of development and maintainability) but only feels approachable if you have developed a "programming mindset."

We'd expect that Business Rules Engines and Behavior-Driven Development systems would actually be used by the business-people they're targeted at. Many such systems have been created in the hope that business-people would be able to use them themselves to describe the rules of their own domain. But inevitably, a programmer is hired to "customize" them (i.e. to translate the business-person's requirements into the BRE/BDD system's dialect), not because any programming per se is required, but because "writing in a formal Domain-Specific Language" is itself something that's incomprehensible without a "programming mindset."

We'd expect that people who want answers to questions known to their company's database, would learn SQL and write their question into the form of a SQL query. This was, after all, the goal of SQL: to make analytical querying of databases approachable and learnable to non-programmers. But this does not happen. Instead, there's an entire industry (Business Intelligence) acting as a shim to allow people with questions to insulate themselves from the parts of the "programming mindset" required to be able to formally model their questions; and an entire profession (business data analyst) serving as a living shim of the same type, doing requirements-analysis to formalize business-people's questions into queries and reports.

-----

Keep in mind, the "programming mindset" I'm describing here is not a talent. It's not genetic. It's a skill (or rather, it's a collection of micro-skills, having large overlap with problem-solving and research skills.) It's teachable. If you get a bunch of children and inculcate problem-solving skills into them, they'll all be capable of being programmers, or mathematicians, or chess players, or whatever related profession you like. The USSR did this, and it paid off for them.

The trouble with this skill, as opposed to most skills, is that people that don't learn this skill by adulthood, seemingly become gradually more and more traumatized by their own helplessness in the face of problems they encounter that require this skill-they-don't-have. Eventually, they begin to avoid anything that even smells a bit of problem-solving. High-school educators experience the mid-development stage of this trauma as "math phobia", but the trauma is generalized: being helpless in the face of one kind of problem doesn't just mean you become afraid of solving that problem; it (seemingly) builds up fear toward attempting to solve any problem that requires hard, novel, analytical thinking on your part.

And that means that, by adulthood, many people are constitutionally incapable of picking up the "programming mindset." They just won't start up that part of their brain, and will have an aversion reaction to any attempt to make them do so. They'll do everything they can to shirk or delegate the responsibility of "solving a novel hard problem through thinking."

And these people, by-and-large, are the clients of software firms.

They're also, by-and-large, the people who use most software, learning workflows by rote and never attempting to build a mental model of how the software works. This has been proven again and again in every software user-test anyone has ever done.


Well said! I agree, and I'll say there's a whole world of difference when moving from programming to software engineering. IMO, working an average software engineering job, things are messy and the problem domain is not exact. In my experience things are mostly guided by the instincts of the people involved rather than rigorous modeling. The requirements often change, the stakeholders rarely give you a straight answer, and ultimately the acceptance criteria (what you need to build) are generally negotiable. All these extra skills are what make the job un-automatable.


>Computers can't derive your meaning from context either. If they could, you could just have a single "Do What I Mean" button. But that wouldn't be a computer; that'd be a magic genie :)

Isn't (usually) the moral of a magic genie story that there is no "do what I mean" button? "Be careful what you wish for."


This is pretty interesting. Do you have further information, or anything similar to this that one could read?


Systems thinking as a general concept might be what you're looking for, pair it up with mental models.


This is called 'End User Development' or 'End User Programming'. There is a book called 'A Small Matter of Programming' by Bonnie Nardi on this, which is worth a read. My point of view is that everyone who wants to do something like this needs to be capable of computational thinking and willing to use these tools. Most people are neither. Moreover, most complexity in software engineering today is due to market forces and legacy systems. Think about why we have Javascript. Think about COBOL systems running half of the banking world. I don't see these going away any time soon.


Thanks for the book recommendation. I've found related info, it's on my reading list now.

https://mitpress.mit.edu/books/small-matter-programming

https://en.wikipedia.org/wiki/Small_matter_of_programming

I've long been a fan of end-user programming, and have promoted it in the form of domain-specific languages and visual building of logic. I love that it gives "non-programmer" users the power to (try to) build what they imagine, and have seen it lead to valuable prototypes and successful tools/products/services.

On the other hand, I've come to learn that this is still a form of programming, at however high a layer of abstraction.

Users who attempt a complex problem space will sooner or later run into what experienced programmers deal with every day, the challenge of organizing thought and software.

What typically happens is, as the "non-program" grows larger and more complex, eventually it starts pushing the limits of the abstraction, either of the user's capacity or the system's. That's when they call in a "real" programmer, to get in there and patch up the leaky abstraction, or convert the prototype into actual software in a better-suited language.

I still think low- or no-code programming environments have a lot of potential to change what software means to people, particularly by blurring the boundary between software development as we know it, and forms of "intuitive computing" like putting together mental Lego blocks.


COBOL was designed so that business people could program it. Then there was BASIC, Smalltalk, spreadsheets, office suites, Lotus Notes, HyperCard, Visual Basic, and Flash, and the web used to be something simple enough that anyone could whip a simple page or website together. But now we have WordPress and Wix.

It doesn't seem like any of that has diminished the demand for software professionals.


Just like SQL was meant to be an easy-to-use tool for business people to query data :-)


I'm puzzled by how many people seem to have a huge "mental block" when it comes to SQL.

It is trivially easy to learn (a weekend), and it is so incredibly powerful. To me, it is a skill like learning how to type properly - it will pay dividends for years to come...


SQL is fine for tweet-sized selects. I developed my mental block deliberately after working for a company that had about a million lines of business logic implemented in thousand-line SQL stored procedures.

Now I put as many layers as possible between sql and myself.


Well, wait until you have to maintain a system where someone has "reinvented the SQL/database wheel" with a "this is gonna be so awesome" custom ORM, complete with totally re-invented referential integrity enforcement...


I'm confused as to how anyone can build data-backed software of any consequence without SQL.


now they just use pivot tables. Bloody pivot tables...


It seems too complex for a whole generation of developers.


Software will always be technical even when it becomes drag and drop. It will lower the barrier but there will always be a place for people who understand the technical intricacies underlying the interface.


Hypercard is still the best system ever created for "non-programmers" to write software. It did not eliminate the need for professional programmers.

I don't think the lack of good tools is the reason we still need professional programmers.


They do that already. It's commonly known as Excel.


We already have that, it's called Excel.


This is my fear; as developers become more productive, more of a typical programmer's job will be non-programming tasks. The in-demand programmers will effectively be more like air-traffic controllers whose job is to just keep track of what needs to get done.


Actually the cloud does just magically design and run a backend for your application. This is what Etsy, Ebay, Amazon Marketplace, Alibaba, and the smaller players in this space really do - they provide no-code solutions for people who want to sell goods and services and don't care about web technology.

This has been happening for decades now. Even in 2000 you could pay a hosting company $not-much to give you a basic templated site hooked into a payment server. It didn't work all that well, but it worked well enough to provide the commodity service most small business owners wanted.

I still see people saying "You can't automate this" - when magic AI automation isn't even needed to do the job and the job is already being done.

Of course this kind of no-code won't build you a complete startup. But how often do you really need a complete bespoke startup? For a lot of business ideas a no-code service with some simple customisation and a very basic content engine is all that's needed.

You do not need docker etc for any of this. Or at least, you don't need to deal with docker personally for any of this - just as you don't need to deal with your web host's VM technology.

So while I don't completely agree with OP, I think it's astoundingly naive to believe that the current level of hyper-complexity cannot possibly be shaken out.

In fact, current stack traditions are almost comically vulnerable to disruption - maybe not this year, but I would be amazed if the landscape hasn't started to change within ten years.


I think it's difficult to say how many merchants went from hosting their own e-commerce sites that engineers built for them from scratch to selling on Etsy, eBay, etc., laying off the developers they hired in the process. Without numbers to back myself up, I would say that there are certainly many more developers and engineers working on e-commerce today than ten or twenty years ago. Services like Stripe certainly help businesses focus less on setting up the common parts of a website or online business, but that just leaves people more time to focus on the "business logic" that is unique to them.

The "current stack" may certainly be ripe for disruption. But I'd predict that rather than put developers out of work, it will simply bring even more businesses into the fold who may not have had the resources for developing their own solutions beforehand. There will always be companies with the resources to demand custom solutions to fit their particular business needs.


>> it will simply bring even more businesses into the fold

When we look at various platforms, we see that big business and startups are extracting all of the repeatable, low-risk tasks of most businesses (supply chain, customer service (bots), manufacturing (on demand), design (partially automated design services), etc.), leaving businesses to do mostly marketing and betting on products/services, and getting less of the rewards.

So what we end up seeing is either fewer small businesses (I think the Kauffman Foundation showed stats about that), or tiny companies with almost everything outsourced - and tiny companies usually require little custom internal software (they often use their supplier's IT system).


I think this is covered by the GP's point about history. Each era in the history of software has automated many things that would have required lower-level custom development before. But this has never resulted in less demand for software. Rather, people have always upgraded their expectations and demanded even more powerful software. It sounds to me like you're saying the demand will go down, but I doubt it because that's never happened before.

Of course if there were some breakthrough on the supply side and we could automate the software dev process itself, which I guess is what the article is saying, that would change everything. But that's beyond a silver bullet, that's a silver spaceship. So I doubt that too, and the GP's right to point out that every generation has had its version of this prediction also.


> But how often do you really need a complete bespoke startup?

You don't always. But if you can identify software deficiencies and fix them, that is an advantage. You don't even have to be a "startup". I work for a company that has opened a wide variety of "lifestyle" businesses with the angle of "we can build simple software targeted towards our problem that makes us run more efficiently than the competitors". And it has worked pretty well, at least for the past 20 years or so.

But you need to include tech in the high level decision making process. Which means you need at least one person competent in both business and technology so that you can properly weigh business needs vs technical difficulty.


With HIPAA, PII, and other regulations I'm not so sure that no-code solutions are the future. There is a lot of nuance in what businesses want. Plugins to WordPress may be an intermediate example, though very quickly one is approaching programming by configuration, theming, or assortment of plugins. And Darwin help you if things go sideways.


On the other hand, why would I want to code to those regulations? Seems like a good way to mess something up and get sued.

A no-code site that meets spec and transfers liability would be great.


Is it possible to outsource liability though? For example, if a hospital chooses a vendor without even looking for HIPAA compliance, can they really claim they're not liable when their use of the vendor's service runs afoul of the rules?


Smart vendors will learn what their customers' (the hospital's) liabilities are, handle them for the customer, and charge them money for it. (To go a step further, the vendor could offer insurance on it, or make it part of the sales contract.)

Stripe does this for PCI. You sign up, use their toolkit, and then PCI is just handled for you. There are some no-code solutions using Stripe as the backend. That may not be a legal transfer of liability, but it's a level of exposure that the lawyers are comfortable with.

Also importantly; HIPAA is not PCI and not all regulations are created equal. Clicking a few buttons to setup a website, and then clicking a few more, in order to accept money and take credit cards is a far cry from setting up the IT infrastructure for an entire hospital.


>... and not all regulations are created equal.

Which is why I doubt no-code solutions will prosper: needs and regulations vary so much that there will either be too many different solutions, or they'll be monsters to configure.


> The idea that we will have some abstraction that will someday (in the foreseeable future) save us from all of this difficult work sounds very far fetched to me, and I can’t imagine how that would work.

The problem here is that word "all". It's never going to be easy to do everything. Some part will be hard. That's where the value lies, and that's what your best people focus on. But everything else will be abstracted away. It's already happened. 30 years ago making a GUI was hard, but VB changed that. Then making a web app was hard, but PHP changed that. Then app layout was hard, and Bootstrap changed that. Then ML was hard, and Torch changed that. Every hard problem gets a 90% working solution that's more than good enough for most companies. There'll always be a few companies that pay people to work in the last 10%, so the problem never really disappears, but fewer and fewer people work on it.

The key to keeping growth going in tech is to keep finding new problems, not to keep everyone working on the same old problems.


Even with just the advances in better programming languages (and newer versions of old languages) and better IDEs we have achieved tremendous productivity increases in the last 30 years. It's just that so far the amount of work has grown to absorb the added productivity.

There are some parallels to induced demand in road construction: when you build a new road to ease traffic, traffic increases to use up that capacity. But that isn't a sign that demand is infinite, it's just that demand is limited by the available resources. If you keep building roads, at some point they will become emptier. Similarly, at some point development productivity will outpace demand, and we will start optimizing our own jobs away.


I'm not convinced. I think new abstractions beget new abstractions. There's so much left to explore in software. Imagine being in the first century of the printing press and imagining that the press is going to put all these poor monks out of business or that there's not much left to explore with writing. Speaking of writing, how much has our word processing technology succeeded in making authors obsolete?


I don't think word processors are the right analogy.

In your example programmers are the monks or the press makers. At some point we're not needed any more (at least at the same scale) since word processors have already been built.


Printers are actually a great analogy. Printing presses became more sophisticated, but the printing business grew even further, guaranteeing lots of jobs in the printing industry. But at some point we reached peak demand, and presses continued requiring fewer and fewer workers. Today there are still people manning the presses of publishing houses and newspapers, but in 200 years of improvement we made the job a much smaller niche.


I worked in the print industry when I was younger. The increase in posters, billboards, and custom cardboard standing displays actually created so many more print jobs. From PhD book binding to custom business print jobs to on-demand online printing, there are so many more things we are printing now.

There are more newspapers but they are all owned by larger players which means different types of machines and parts.

A better example might have been blacksmiths. Although the number of people making cars is a larger group.


Table 2 shows negative job growth in the printing industry in the last 10 years: https://collegegrad.com/industries/printing


That's too narrow of a group. What about 3-D printing?

If you read the whole article it shows a path to new jobs...

> Digital printing has become the fastest growing industry segment as printers embrace this technology. Most commercial printers now do some form of digital printing.


Printing traditionally deals with 2D objects.

Manufacturing deals with making 3D objects. I think 3D printing belongs there.


The 3D printing revolution implies a fusion of manufacturing and printing. This just underscores my point though: abstractions beget new abstractions. 3D printing is an entirely new category that is just starting to come into maturity and reach mass adoption. Who knows what the implications of that are? It could cause a boom in custom-made, limited-run products. It could help to end our reliance on China for mass production. It's not obvious to me it will lead to fewer jobs in manufacturing.


There are more programmers coding word processors now than ever before.


> how much has our word processing technology succeeded in making authors obsolete?

It makes more sense to think about GPT-2-like language models replacing authors than word processors.


presumably those models will be (are already being) incorporated into word processors?


Demand for the things we want _today_ will be met. But progress leads to new demands, and they are more complicated.

Anthropologists estimate that the work week was at 20 hrs at the end of the Stone Age. We have been inventing new problems in the vacuum created by our successes for, literally, millennia.


I think it was easier to build an acceptable GUI in 2000 with VB or Delphi than it is today with our web stacks.


I also think the standard of “what is acceptable?” was much lower back then.


I'm not sure that's true. Here's the UI hall of shame, highlighting the worst of the 90s: http://hallofshame.gp.co.at/shame.htm

Most of the stuff there would be just... normal now. It's quite unusual for SPAs to have a decent consistent UX. And the slowness would never have been tolerated back in the day.


the author of that page seems to believe that mouse hover feedback is bad, even when it's a simple highlight?!


I think that's mostly it. Design has a much larger role, and form-oriented development with common controls doesn't cut it. You couldn't imagine an app like Facebook in a forms-style UI, it's almost ludicrous to imagine.

Looked at retrospectively, forms were just one step above green screen applications on a terminal, transplanting one set of structural idioms to another, like for like.


If forms are essentially terminal apps, is FB much different from a teletype news service, with hyper-filtered content and an infinite set of data sources?

I see a massive sea change in the connectivity and immersiveness of today, but not really in what we're trying to achieve.


2001 was Mac OS X Cocoa. 2020 is JavaScript. The standard for UI was higher then, but was lost to the almighty god of cloud-based web apps.


or MS Access, or Hypercard - we've had productivity boosters, just never anything remotely close to eliminating the inherently hard act of "building software".


Yes, but that GUI was only doing local work, we have much higher requirements today.


The GUI was doing local work, but the database could be remote. Delphi's name is even a pun on Oracle.

I'm not saying they were halcyon days. I'm saying that the effort to do things is not necessarily less these days, in part because we have different expectations (not necessarily requirements) today.


It's even easier to do so with no-code, low-code tools.


In 2000, using VB, you could build a GUI and make a working Windows application with a minimal amount of code. Also, the documentation (MSDN) and the community were really nice.


20 years on, MS Access has been one of the quickest, lowest-code ways to build a functional application. What a mess everything is these days in comparison.


If you wanted to build a data entry / collection app with basic validation, querying, filtering etc. you could easily do that with no code in 2000-era Delphi.


With support for today's APIs/datasets, you could build more complex apps with similar levels of effort.


> Every hard problem gets a 90% working solution that's more than good enough for most companies.

I very much agree with your comment, but allow me a little nitpicking. Solutions aren't 90%, more like 50% or 20% or whatever. It may sound absurd to discuss a number there, since it's more a way of speaking; I just wanted to add that for most problems the solution is barely better than the default option.

In other words, there's still a lot of room for improvement, huge actually but, as you say, it might come in small pieces.


> 30 years ago making a GUI was hard, but VB changed that. Then making a web app was hard, but PHP changed that.

Fortunately, the SPA Plague made it very difficult again.


Plus kubernetes / autoscaling on the backend.....


> Even the example of hosting complexity being replaced by cloud companies seems kind of silly to me. Maybe that’s saving very small companies a sizable fraction of their engineering resources, but I really doubt it for medium or larger companies.

This might even lead to an _increase_ in demand for software engineering, since now small companies can write their own custom software more cheaply and reliably. It's called the Jevons paradox.


TIL https://en.wikipedia.org/wiki/Jevons_paradox

"In economics, the Jevons Paradox occurs when technological progress or government policy increases the efficiency with which a resource is used, but the rate of consumption of that resource rises due to increasing demand."

Only tangentially related to the thread: I'm struggling to think of how government policy might increase the efficiency with which a resource is used, other than by not existing in the first place.

So, an ask: any historical examples where government policy other than deregulation has increased the efficiency with which a resource is used?


Here in Sweden, government policies have enabled the larger cities to be optical fiber-wired with common infrastructure so multiple companies don't have to roll out their own, not only that, the larger program is to enable a completely connected Sweden [0].

Government policies are enabling better efficiency of optical fiber infrastructure usage, without requiring multiple vendors to do the most expensive and least rewarding part of servicing internet: digging trenches for wires.

[0] https://www.government.se/496173/contentassets/afe9f1cfeaac4...


That's a good example -- and another "coordination problem" at that, which is one of the types of problems where appropriate government action may be the most efficient solution.


Addendum: I'm seeing a common theme in the responses.

When there's a coordination problem, but the equilibrium state is unsustainable (such as overfishing) or lower-value (imagine competing electric grids with different voltages and frequencies), then government regulation can be useful, either by imposing unilateral costs or by defining a common standard.

There is the issue of avoiding regulatory capture, but I suppose that's for another time. :)


> So, an ask: any historical examples where government policy other than deregulation has increased the efficiency with which a resource is used?

EU banned selling incandescent light bulbs for one example. Which increased demand for LEDs, lowered their prices, and made people switch much faster.

Almost all countries have legislation that mandates fuel usage of passanger cars has to be at most X liters per 100 km. Or at least there's an incentive system with taxes and other bills.

There are minimal standards for thermal insulation of houses.

If you call clean air and clean water a resource then most environmental regulation count.

It's very common actually - it happens every time there's a tragedy of commons and government regulates it.


> Almost all countries have legislation that mandates fuel usage of [sic] passanger cars has to be at most X liters per 100 km. Or at least there's an incentive system with taxes and other bills.

I'm not sure that's a great example. At least in the US, adoption of more fuel-efficient cars -- and the ascent of the Japanese motor industry -- started from the 1973 Oil Crisis, whereby oil prices skyrocketed due to a drop in supply.

American automakers had been shipping gas-guzzling land-yachts for years, but pricing changes drove consumers to buy fuel-efficient Japanese cars, where they stayed because Honda had invested in "customer service" and "building reliable cars that worked", whereas Chevrolet's R&D budget was divided between tail fins and finding new buckets of sand into which GM and UAW management could plunge their heads to pretend the rest of the planet didn't exist (to be fair, they're still really good at that).


TBH American cars are still crazy inefficient from my (European) perspective :)

And oil prices are another way to regulate that. In my country oil price at the station is over 70% taxes.


> TBH American cars are still crazy inefficient from my (European) perspective :)

Well, we can't all have Volkswagen do our emissions testing. :)

Why would you say "crazy inefficient"? I don't think that, say, a VW 1.8L is, practically speaking, any more or less efficient than a Ford or Toyota 1.8L. A Ford Focus gets comparable gas mileage to, say, a Golf or a Mazda3.

The Golf has a better interior, but will also fall apart much sooner -- VW in the US has a shockingly bad reputation for reliability and customer service. Which sucks, because I really prefer VW's design language to pretty much any other brand.

You might on average drive smaller cars in the cities, but that's more of a preference issue than


One notable example I can think of is accessibility services.

In the US, public transit must accommodate the disabled, and for some types of trips or some types of disabilities there is a totally parallel transit system that involves specialized vehicles, operators, dispatchers to efficiently route vehicles, etc. It's also a massive PITA from the rider's POV, since you have to dial a call center to schedule a day in advance and you get a time window in which the driver will show up. This system dates from the '80s, before the Internet and before taxis were mandated to be accessible.

New York City tried a pilot program in which this system was replaced by subsidizing rideshare rides, since in the 21st century all taxis are required to have accommodations for the disabled anyways and you can leverage a well-tested system of ordering rides instantly and a large fleet of vehicles. While this did reduce per trip costs from $69 to $39, the increased convenience caused ridership to also skyrocket, so it ended up being a net drain on finances. [1] http://archive.is/N3DjJ


Also, scammers using VOIP (plus extremely sensitive ADA rules around treating disabled people nicely and never doubting people who claimed disability) ruined the deaf-serving text-to-telephone gateway. Fortunately that problem was mostly solved by the Internet mostly killing voice phone.


That's a great example!


Yeah, basically you would be looking for a government policy that would be making something cheaper, but also so wildly convenient that it ends up increasing usage faster than the savings.

Another example is the expansion of highways; if highways are free, expanding them to relieve traffic will generally cause car travel to go up as more trips become tolerable, and then the highway will be as congested as it was before. https://www.vox.com/2014/10/23/6994159/traffic-roads-induced...


Consolidation of subway systems in London. Standardisation of rail gauges, screw threading, electrical outlets, phone networks. Basically standardisation of everything that just works and you don't notice.

Could go on... money, power grid, air traffic control, waste collection and disposal.


Health insurance is a great example - a single payer system has much more bargaining power than everyone trying to negotiate for medical care at a moment when they'll die without it.

Of course, such a system is less efficient at extracting value from consumers, so I suppose your question requires an assumption as to whom a system is efficient for.


> Health insurance is a great example - a single payer system has much more bargaining power than everyone trying to negotiate for medical care at a moment when they'll die without it.

Also not sure that's the best example.

Singapore, Japan, Germany, Switzerland... all of those are multi-payer, but tightly regulated (which imposes equal costs across all actors, so that's coordination once again).

And I'd have to dig out the article, but I believe the above model (Bismarck) is better at controlling costs, and produces more positive outcomes as well.

The US healthcare system is a mess for a lot of reasons.

Healthcare being tied to employment is probably the biggest.

Maybe the second is a lack of any sort of common healthcare market? You can't just take "any insurance" and go to "any doctor"; instead, you have to navigate a maze of in-and-out-of-network relationships. It's like scheduling an appointment with the Mafia: "My cousin's dog-sitter's best friend's uncle's pool-boy Vinny knows a guy that can take care of your headache."

The adversarial relationship between insurers, patients, and care providers is also a problem. Insurers work very hard to screw hospitals and patients, so hospitals have insane overhead costs to fight against the insurers, and patients... oh god, don't get me started there.

Regulatory capture also plays in. And there's more, but yeah, it's a mess.


Fair enough, I mentioned single payer because that's the system I'm familiar with. The 'adversarial' relationship between insurers, hospitals, and patients is precisely the kind of market competition that theoretically leads to the best outcome though. GP's ask was simply about examples where regulation leads to more efficiency; it sounds like Bismarck and single payer are both more regulated and more efficient (again, from the patient's perspective).


Health care. There, regulation makes it more accessible to more people, improves quality, and drives costs down. Deregulated health care systems are less reliable and more expensive. People who can afford it will pay anyway.

Public transport. It benefits society as a whole when people are able to move around, and if they can do so without causing massive traffic jams. Regulation, keeping prices low, and ensuring that even remote areas are reachable, make it attractive to use and will make it more usable to more people.

Labour in general; shorter work weeks and improved working conditions have improved productivity.


Not an expert, but I think this qualifies.

Government policy to improve energy efficiency (government grants to improve factory production efficiency) can lead to an increase in total energy use, as the factory is more profitable with better efficiency.

The EU does have programs to improve efficiency in this manner.


I suppose that would make sense if the government was solving a coordination problem?

E.g., no manufacturer will install Oliver's Optimizer, which promises a lifetime 10% savings in energy use, because it would force them to shut down operations for a month while the optimizer is installed, and put them at a disadvantage compared to other manufacturers.

By requiring the Optimizer (or equivalent) as a licensing requirement for factory operation, all manufacturers share the same burden, and thus suffer no relative disadvantage.

Is that the general idea? I'd be worried about regulatory capture in this case -- e.g., Oliver lobbying to force the market to install his Optimizer -- but that's an entirely different discussion. :)


I'd say, yes. You've correctly noticed in this subthread that government regulation is a solution to coordination problems. All kinds of situations that pattern-match to "it would be better if everyone were doing X, but X comes with some up-front costs, so whoever tries doing X first gets outcompeted by the rest" are unsolvable by the market (especially when coupled with "if everyone else is doing X, stopping doing X will save you money"); the important role of a government is then to force everyone to start doing that X at the same time and prevent them from backtracking.

To the extent you can imagine the market as a gradient descent optimization, coordination problems are where it gets stuck in a local minimum. A government intervention usually makes that local minimum stop being a minimum, thus giving the market a necessary shove to continue its gradient descent elsewhere.
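If you want to make the analogy concrete, here's a rough numeric sketch with a made-up cost function (illustrative only, not a model of any real market): plain descent settles in the shallow dip, and adding a cost to lingering there lets the descent continue on to the deeper one.

    def descend(cost, x, step=0.01, iters=5000):
        # Plain gradient descent with a numerical (central-difference) gradient.
        for _ in range(iters):
            grad = (cost(x + 1e-6) - cost(x - 1e-6)) / 2e-6
            x -= step * grad
        return x

    def market_cost(x):
        # Made-up landscape: a shallow local minimum near x=1, a deeper one near x=3.
        return (x - 1) ** 2 * (x - 3) ** 2 - 0.5 * x

    def with_policy(x):
        # "Intervention": staying below x=2 now carries an extra cost.
        return market_cost(x) + 3.0 * max(0.0, 2.0 - x)

    print(descend(market_cost, 0.0))  # stuck near the shallow minimum (~1.06)
    print(descend(with_policy, 0.0))  # carried past it, settles near ~3.06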


> To the extent you can imagine the market as a gradient descent optimization, coordination problems are where it gets stuck in a local minimum.

I think this is a very appropriate analogy.

A thought: the cost function that the market minimizes is only a proxy for the various cost functions that we (humans) actually care about. I wonder how much (if any) “government inefficiency” is due to the mismatch between the market cost function and these other cost functions.


I don't know about inefficiency within the government, but I think most regulation of markets happens because of it. As you've noticed, the market's cost function is only an approximation of what we care about in aggregate. Regulation adds constraints and tweaks coefficients to keep the two goal functions aligned as much as possible. Which is hard, not least because we can't fully articulate what we care about, either individually or in aggregate.


Standardisation generally increases market size, which means efficiencies of scale and the ability to buy the best stuff from anywhere in the larger market, rather than being stuck with local stuff that works with local standards.

Government isn't always required for standardisation but even when it's industry led, it feels like government because it's cooperative, which means committees, votes, etc.


> any historical examples where government policy other than deregulation has increased the efficiency with which a resource is used?

Not really an example, but any government policy that deals with a tragedy of the commons situation.

Take for example the NW Atlantic cod fishery: "In the summer of 1992, when the Northern Cod biomass fell to 1% of earlier levels [...]" [0] I'm sure that if Canada, the US and Greenland had come together and determined a fishing quota, those fishermen would still have a job today. Instead they were so 'efficient' that there was nothing left for them to catch.

[0] https://en.wikipedia.org/wiki/Collapse_of_the_Atlantic_north...


> So, an ask: any historical examples where government policy other than deregulation has increased the efficiency with which a resource is used?

I would say there are examples around. For example, the numerous dams and levees we enjoy. Getting wrecked by a flood is not very efficient. Non-navigable rivers are not efficient.


Jevons was working on fuel consumption. There has been plenty of government regulation that improved the (average) fuel efficiency of machines, even back then when they were steam powered.


I'm not sure if that answers your question, but building wider, faster roads doesn't reduce traffic. People just drive more.


your observation is correct, but perhaps not the conclusion? If more people are traveling over a given section of road per hour (as you imply), isn't that more "efficient"?


The alternative is public transport or living closer to your work, both more efficient than everybody driving in their cars and waiting in traffic.

Basically it's comfort vs efficiency, and people maximize their comfort within an acceptable level of traffic jams.


Phone charging cables?


I certainly buy a lot of them. They keep breaking.


Fun fact about Jevons: He put forth one of the first prime factorization challenges.

https://en.wikipedia.org/wiki/William_Stanley_Jevons#Jevons'...


I second those thoughts.

When I was 10 years old I started coding in QBasic. A few years later I told my dad I wanted to be a programmer when I grew up; he told me that it would likely be automated soon (as had happened with his industry, electronic engineering) and that I'd be struggling to find a job. 23 years later, the demand still seems to be rising.

I'd say we're still quite far from such a level of abstraction; but a certain degree of it is already possible as you say... k8s/docker/kafka/glue/databricks/redshift, all of these technologies mesh together "seamlessly", but more problems arise as a result.

The problems we must tackle just shift elsewhere.


And when UML started getting in vogue in the mid 90s a lot of people said that "intelligent code generators" would automate a large amount of programming.

It did not happen the way people predicted, but it has somehow happened in the form of Angular, Ionic, Express, Ruby-on-Rails and similar frameworks: more and more programming means "writing glue code", be it gluing Machine Learning libraries (yay, ML developer!), HTTP libraries (yay, Web developer!), AMQP/SQL/NoSQL (yay, backend developer!) or even OpenGL/DirectX/SDL (yay, game developer!).

The fact is, as more and more of these abstraction libraries are created, "programming" will move one level of abstraction up, but it will still need people to do it.


In 2002 the inventor of Microsoft Office (Charles Simonyi) took his $billions and left to create a company to replace programming with an Office-like app. In 2017 the company (Intentional) was acquihired back into MS after failing to generate a profit or popular product.


Angular has allowed me to create REST UIs at only half the speed that I was used to 20 years ago when I was using FoxPro.

I call that progress.


I distinctly remember talking to programmers at a job fair in the mid-80s who warned me that there was not much future in programming.


What level of automation happened in electronic engineering?


I think the real change is the rising threshold between commodity software and specialised solutions. When I started my career more than a decade ago, I built handmade static websites and online shops for small and medium shops. Today these are commodity software, easily served by Squarespace/Wix/Shopify etc.

At the same time, when I started, Basecamp was amongst the top SaaS solutions on the planet. Today, its simple form-based approach wouldn't cut the mustard with consumers accustomed to instant feedback, realtime collaboration and behind-the-scenes saving.

This is especially apparent in the games industry. Early games like Doom or Wolfenstein were often developed by fewer than five people. Today's open world titles like AC Odyssey or Cyberpunk 2077 require 100 times as many people.


I work in the games industry and I want to ride your sentiment a little; it's not 100 times, it's more like 200 times.

It took >1,000 artists, designers, programmers and sound engineers to make each entry in the series of games I worked on.


And it has become really frustrating to be forced to sit through the credits at the end of a AAA game; they just seem to keep going forever these days!


> The idea that we will have some abstraction that will someday (in the foreseeable future) save us from all of this difficult work sounds very far fetched to me, and I can’t imagine how that would work.

We actually can imagine how a natural-language-driven "black box" that translates requirements into code works: it's called offshore software development. The conclusion that everyone eventually reaches, having experienced varying levels of pain first depending on how quickly they learn, is that writing a spec detailed enough to make that work is as much or more work than just writing the code yourself!


'The premise that we are on the verge of some breakthroughs in software development'

There will be breakthroughs in SW development, but as with all breakthroughs no one can tell exactly when they will occur, so let's say within the next 40 years.

The microelectronics industry has largely moved to automated validation. Some of the ideas have already migrated to SW validation, although progress and adoption are slow.

Probably a key idea for automatic SW generation and "no-code" is to realize that a Turing-complete language is not required at all times; in fact, most of the time it is even counterproductive. Too often SW engineers fail to realize that as well.


> You still need people who understand things like docker, kubernetes, endless different database options, sharding, indexing, failover, backup, message queues, etc.

Large companies at significant scale need to know these things. Smaller companies don't need kubernetes, message queues, or anything beyond a simple standard off-the-shelf setup. I'm guessing the author was referring more to small/mid-sized companies that aren't at FAANG-scale and have no need for that complexity.


As a counter-point, I run a one-man tech business and use kubernetes to run some 60+ application and db servers. I don't have time to babysit each application I'm running and kubernetes is a force multiplier that I rely on heavily.

There is a cost to managing it but even so, without the automation it provides I simply wouldn't have the capacity to do what I do.


Interesting, though the fact that you're managing that many servers as a one-man show is a testament to the author's point.


I disagree with the author that it will be "no code." But I would also not dismiss how much more productive the cloud and better devtools have made developers. And as much as people, especially on HN, like to pick on bootcamp graduates, it's undeniable that you can get someone with little to no experience building complicated software in a matter of months.

What I think will happen to software engineering is that the middle will shrink. We'll see many more frontend and product engineers, and slightly more infra and systems programmers. I think the fullstack, middleware rails/django type engineering will all but disappear (most will move towards product).


>it's undeniable that you can get someone with little to no experience building complicated software in a matter of months.

Yeah, and who do you think comes along and cleans up their mess, extends and maintains that software once your cowboy coders are gone?

We might be more productive, but only to a certain point. The complexity comes when people want to twist and bend the off the shelf solutions in ways they weren't designed for, and when systems become so large and complex that adding just one more feature takes a significant amount of time.

This is what differentiates your low cost bootcamp grads from highly paid software engineers. Experienced engineers aren't just building for today, but for the future.

I don't buy that fullstack devs are going anywhere. The real world is complex, the devil is in the details and the complexity of those details can't simply be chucked into an off-the-shelf solution and be expected to survive. We'll still have to have people that glue all the pieces together, we'll still have to name and compose things, to make modifications and optimisations, to maintain existing products, and we'll need people that push the boundaries of what's been done before and explore the new.


> Yeah, and who do you think comes along and cleans up their mess, extends and maintains that software once your cowboy coders are gone?

Sometimes someone with a CS or SE degree, sometimes someone who learned to program as a hobby while doing something completely irrelevant like Music, English, bar tending or high school and sometimes the cowboy coders themselves with more experience. There’s an enormous amount of theory in programming which is highly relevant to many, many people but you can be amazing at CS theory and write scientific code that’s garbage, uncommented spaghetti like the Imperial epidemiology model. At the other end you can have a great grasp of how to write clean, modular, well commented code and have no idea how you would start parsing a text file to extract all nouns or some other introductory undergraduate project for one of the infinitude of topics in CS.


> At the other end you can have a great grasp of how to write clean, modular, well commented code and have no idea how you would start parsing a text file to extract all nouns or some other introductory undergraduate project for one of the infinitude of topics in CS.

How difficult is that to read up on? I do a lot more of the former than the latter as that is what real-life jobs entail (actually most of them involve fixing other people's shitty code).


Ask the engineers grousing about whiteboarding algorithm interviews, or the companies assigning them.


> Yeah, and who do you think comes along and cleans up their mess, extends and maintains that software once your cowboy coders are gone?

It will be like excel spreadsheets: full of errors, moments of brilliance, utterly unmaintainable, yet used every day for mission critical services.

Running your own blog or email server has slowly shifted from requiring technical chops to being passé.

Zapier, ifttt, n8n, etcetera allow one to do amazing automation that couldn’t easily be done ten years ago.


You can get people to build things in a matter of months, but definitely not complex things.


Yeah, the whole premise is nonsense imo.

I personally think we have not made any significant progress in 20-30 years with regard to development, and the number of software developers is still growing at a rate where the majority probably has less than a year of experience. So no progress can be made, as the industry never matures.

The kind of optimism displayed in the article reminds me of my early years as a developer ;)


In software engineering languages and tooling, progress is really slow. RAD tools have existed for decades; OOP and functional programming paradigms have remained largely the same for a very long period of time ... incremental enhancements, but not really a breakthrough.


So true. Believing we will “finish” needing computer programmers because “the work is done” misses the basic physics that governs this process. And naively believing we are finally there is a regularly resurgent myth.

We have been predicting machines would not need programmers since there were programmers. It is true for a _given_ task at a given complexity level. But overall the demands on, and for, a modern programmer have only gotten higher, because the demand of all business and human activity is to offer more than we might have otherwise.

Just wait until, for an app to differentiate itself in business, you have to create intelligent responses in a variety of augmented reality interfaces, correctly predict human behavior, and interface with the physical environment in a routine and nuanced way. And the companies that can do it well are suddenly dominating the ones who do a sloppy job.


I've been around long enough to see these claims over and over again. You and I will be right that this claim is again false, but I think each time developers get more productive, they can do more with less time, and at some point developers will be able to do so much with so little time that we will need few of them. So far, though, the need for software has continued to grow even as developer productivity has increased, so there have been no significant employment issues.


I have to admit that as a technical person it's easy to ignore the tools which are being built for non-technical people. A non-technical person is currently building a WordPress site for me that is better than what I would have thrown together as CSS/HTML.


They sound like a tech. WordPress is powerful and is made up of more than CSS/HTML.


Check out elementor


I think if something like that happens, it will be on the scale of the shift from alchemy to chemistry - some sort of as-yet-unimaginable standardization which changes what is currently an art to something more like a science. I don't expect to see anything of the sort in our lifetimes, barring some very extreme advances in medicine.


Eh, I also think a lot of people overengineer things that are made simple with recent technology and that in fact most companies don't need the best engineers to get their job done.

I think it's totally true that one can leverage new tools to get more work done with less people, especially when it's for a service that doesn't reach scale and what not. Most companies don't need that to be lucrative. But I think the space of problems expands, whether that is more fields valuing tech, feasible complexity increasing in others, or competition just ratcheting up by lowering technical barriers to entry.


> (failed fifth generation of programming)

Take a look at AirTable and IFTTT/Zapier.

It's not going to save us from all of the work, but it does eliminate a lot of redundant work. For sure.

> Even the example of hosting complexity being replaced by cloud companies seems kind of silly to me. Maybe that’s saving very small companies a sizable fraction of their engineering resources, but I really doubt it for medium or larger companies.

I think that hundreds of billions of dollars have been spent moving from localized IT to cloud. Do you really believe that was all a waste of money? For example, most of those medium or large companies had their own operations software backends, and most of it was eaten by cloud services/APIs.

> You still need people who understand things like docker, kubernetes, endless different database options, sharding, indexing, failover, backup, message queues, etc. Even if the pieces are now more integrated and easier to put together, the task of figuring out how the pieces will interact and what pieces you even need is still outrageously complicated.

Docker/K8s is a good example. I spent more than a year building a Docker orchestration/hosting startup (eventually decided not to try to compete as an individual with Amazon). But when I recently needed a reliable way to host a new application and database, I did not have to configure Docker or K8s at all. Why? Because I used AWS Lambda and RDS. Those are examples of software eating software. AWS can handle all of the containers for you if you do it that way.

As far as failover and backup, that was handled by checkboxes in RDS. I did not need a message queue because that was built into the Lambda event injection service.
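To give a sense of how little was left to manage, the handler ends up being roughly a sketch like this (the table, environment variables, and the bundled pymysql driver here are placeholders, not my actual app):

    import json
    import os

    import pymysql  # assumes a MySQL-flavored RDS instance; driver bundled in the deployment package


    def handler(event, context):
        # AWS invokes this per event, so there is no queue or container to run ourselves.
        conn = pymysql.connect(
            host=os.environ["DB_HOST"],
            user=os.environ["DB_USER"],
            password=os.environ["DB_PASSWORD"],
            database=os.environ["DB_NAME"],
        )
        try:
            with conn.cursor() as cur:
                # Hypothetical table; backups/failover for it are RDS checkboxes.
                cur.execute(
                    "INSERT INTO events (payload) VALUES (%s)",
                    (json.dumps(event),),
                )
            conn.commit()
        finally:
            conn.close()
        return {"statusCode": 200}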


> Because I used AWS Lambda and RDS.

Just like people put some PHP scripts on fully hosted Apache+MySQL 20 years ago. This was very common and far easier than AWS. (And reliable, too, although not as scalable, but the needs then were different.) The point being that all of this has been here before. Every few years some complexity creeps back in (in exchange for some other benefits) and then it's eliminated again and some progress is made. But the work always expands.

Recently I helped a friend who is a teacher with her Excel sheets to do grade reports. Pretty well done for Excel, yet it was a terrible user experience. Even if there are only 1000 users working with this 1 hour per month, proper custom-made software would have been better and easily economically viable. Even no-code has existed for a long time, but it never fits perfectly.

Similarly, people regularly complain about just gluing components together. As opposed to what? Copying sorting algorithms right out of a CS class? It's a strange idea. Looking at the code I work with, over the years, I find very little glue. True, there are abstractions and sometimes they get in the way. But they are there for a reason. Whether you start out from scratch or use frameworks, much of the application will revolve around business-related data structures.

You can give it a try: take your real-world product, strip out the abstractions, replace the built-in UI widgets, sorting routines, hash tables of your language and maybe that OR-Mapper and your GraphQL-server framework and so on with your custom code and something minimalistic. It won't take that much time and code, and compared to the stuff on top you'll find it's not that much that you actually used in the end. Nothing to glue together anymore.

Not that it makes sense to do this. But the idea that gluing things together has replaced "real" development is very much mistaken.


>As opposed to what? Copying sorting algorithms right out of a CS class?

As opposed to, I'm guessing, adapting Monte-Carlo tree search to Go and inventing AlphaGo. Or what BellKor was doing back in the Netflix Challenge days. Not copying sorting algorithms out of a CS class, but solving a puzzle with a clever new algorithm that just works, and then everything else falls into place.

(Call it "if you write it, they will come" taken to its logical conclusion.)


You might also notice that none of that is actual product development and it will not lead to a product anytime soon (and as far as I know the Netflix Challenge results were not used by Netflix). That's research. It's great if you're working at a research department or maybe you do it for fun after work. But how is this related, exactly, to the software industry?

The point was that actual software engineering is neither more nor less trivial than it was a few decades ago, and that the glue isn't that much after all.

There was never a time when the daily work of software engineers was, in fact, research. There are plenty of research institutes and universities and even research labs of software companies where you can do this after you've got your degree. Just go there; many of my friends are doing exactly that.


> It's not going to save us from all of the work, but it does eliminate a lot of redundant work.

It's not going to eliminate work in software development, it's going to increase work in developing software systems by increasing the average value of each unit of work, and, simultaneously, move the average level of the work higher up the abstraction ladder, just as every advance in software productivity has done since “what if we didn't have to code directly in machine code and instead had software that would assemble machine code from something one step more abstract”.


I’m looking to get into the hosting space using containers. Basically combining my consulting business with a hosting one since I’m doing it already for clients anyways.

I’d love to exchange notes with you on the lessons you learned building your system and the challenges you faced.

I’m using rancher/k8s with docker on top of “unreliable” hosts with AWS/GCP/DO/Azure providing “spill over” capacity for when those unreliable cheap hosts prove why they’re unreliable.

Is it possible we could get in touch? You can reach me at hnusername at Google’s mail service. Would love to connect if you’re open!


I sent you an email


"I can't imagine how that would work" -> hearing that kills me inside :(

I think we have 2 options:

OPTION 1)

We've reached a plateau -- software will continue to be developed as it is now, no new abstractions.

OPTION 2)

Mankind will create a better set of tools to:

- reduce the effort needed

- increase the # of people who can participate

in the translation of ideas/requirements -> software.

For everyone's sake [1], I really hope it's the second! :)

As one crazy idea, imagine if you could have a spreadsheet that would let you build software instead of crunch numbers... anyway, probably a bad idea, we should stick to our current abstractions and tools :D

[1] Take the above with 2.42 lbs of salt, I'm the founder of

https://mintdata.com


> The premise that we are on the verge of some breakthroughs in software development that will significantly reduce the need for engineers is really weak

Well, there hasn't been one single major breakthrough but rather a lot of small ones that cumulatively mean that software has become easier to write. Most of it is more mundane than new fundamental abstractions; it's more about distributed version control, better bug trackers, better libraries, more accessible documentation and learning materials, and so on. These things allow software to be written more quickly with smaller teams. Even someone writing in a language like C that hasn't changed much in decades will have a far easier time of it in 2020 than in 2000, simply because of the existence of StackOverflow and the progress that has been made in getting compilers to warn about unsafe code.

This is combined with the fact that as more software is written, less software needs to be created to fill some functionality gap. As long as we have computers and people who care to use them, there will always need to be new software written. Most software that people get paid to write is not written for fun or for intellectual exercise, though; it's written to solve a business need. If that business need can be satisfied with existing software, there's less motivation for a business to write its own.


> The premise that we are on the verge of some breakthroughs in software development that will significantly reduce the need for engineers is really weak

We are on the brink of an economic contraction which is forcing a rethinking of the need for software engineers. The necessary disruption is there. It is economic, not technological.

Yes, there will continue to be a need for software engineers, but business expectations will change as budgets adjust. I suspect fewer developers will be needed moving forward and those developers will be required to directly own decisions and consequences, which has not been the case in most large corporations.

> In my experience, software engineering is endless, complicated decision making about how something should work and how to make changes without breaking something else rather than the nuts and bolts of programming.

Agreed, but that is not the typical work culture around software development. Thanks to things like Agile and Scrum, developers are often isolated from the tactical decisions that impact how they should execute their work, and for good reason. While some seasoned senior developers are comfortable owning and documenting the complex decisions you speak of, many are not so comfortable and require precise and highly refined guidance to perform any work. This is attributable to a lack of forced mentoring and is mitigated by process.


I think there has been a steady reduction in the IT personnel needed to do a lot of things. Need a web-page/web-store? You buy a standard product for almost no money, and you don't really need anyone to run it for you. 25 years ago that was a several-month project that involved a dozen engineers and had a costly fee attached for after-launch support.

At the same time we’ve come up with a bunch of new stuff which gave those engineers new jobs.

I do see some reduction in office workers through automation. We still haven't succeeded in getting non-coders to do RPA development for their repetitive tasks, but the tools are getting better and better and our workers are getting more and more tech savvy. In a decade every new hire will have had programming in school, just as they have had math today. They may not be experts, but they'll be able to do a lot of the things we still need developers to do, while primarily being there to do whatever business logic they do.

But I'm not too worried. We moved all of our servers to virtual machines a decade ago and are now moving more and more into places like Azure, and it hasn't reduced the need for sysops engineers. If anything it's only increased the requirements for them. In the late 90s you could hire any computer-nerdy kid to operate your servers, and you'd probably be alright; today you'll want someone who really knows what they are doing within whatever complex setup you have.

The same will be true for developers to some extent, but I do think we'll continue the trend where you'll need to be actually specialised at something to be really useful. If virtual reality becomes the new smartphone, you'll have two decades of gold rush there, and that's not likely to be the last thing that changes our lives with entirely new tech.


> 25 years ago that was a several-month project that involved a dozen engineers and had a costly fee attached for after-launch support.

25 years ago, yes, but white-label hosted web store things were around in the early noughties. I think there were even a few in the late 90s, but those weren't very good.


Yeah it reminds me of a project at my company that was an attempt to automate certain development processes so that people could ship features without developer involvement. Cool idea! So they built this wonderful system and now there's a team of 6 devs solely dedicated to maintaining it lol.


RPA seems to be the biggest area where this is currently popular. The "citizen developer" bullshit they're pushing IMO sounds good on paper but will lead to fragile bots that end up falling apart and not being properly maintained at scale. I can't imagine handing someone with no programming experience UiPath or whatever and having them basically deploy software directly to production. As far as I know there isn't a "code first" approach to this set of problems, but there probably ought to be, as someone who can't write code isn't likely to produce a high-enough-quality product, even with a dumbed-down drag-and-drop tool, to make it worth it.


In my experience, for the small companies you have an endless stream of custom jobs that need quick, unique solutions. You get the same outrageously complicated work, just with the ability to use more duct tape and one-time solutions. The cloud is, with some exceptions for self-contained commodity solutions (such as email), about managing the cost of spinning hardware up or down. Pay a premium on what you need right now, rather than investing in what you might need tomorrow with the possibility of guessing wrong.


You make some great points. There's a humanistic quality that can't be replaced. I realized this even more while self-isolating. For example, instead of going on TikTok, I decided to build an entire app from scratch. A few of my friends thought it would be a useless app - one "anyone could make." But if I did make it, it should be with serverless tech, GraphQL, AWS Amplify, etc.

I decided to just use a $10 Digital Ocean server. With stocks so cheap, my goal was to build an automated trader during COVID-19: https://stockgains.io

I initially used Google spreadsheets but it wasn't effective. I spent a week with Docker, learned MySQL 8's new features, and Ruby on Rails 6 for rapid development. There are so many nuances with storage engines, libraries, query and cache optimizations, and UI/UX design that require human thought, experience, and skill. Sometimes plenty of it. Now the beauty of this tool isn't the price difference of a stock before COVID (a robot could do that), but the filters. These filters were created by a human (me) reading over 100 books on trading stocks and writing down quantitative and psychological parameters. And I kept track of what could be "automated" over the years.

I just can't imagine a robot reading all those books and doing the same thing. Not just the design, but just building a vision. There's an art and complexity involved in solving problems.


Similar claims were made in the early 1960s when high level language compilers arrived: "Computers will practically program themselves." "There will be little need for programmers anymore." Every new software technology since then has sometimes triggered similar claims.


It is easier to understand code and its consequences than human language; hypotheses are testable and verifiable. It helps to think of coding as a form of game.

Open source, and Github specifically, can be mined and reused like any other knowledge; pay attention to Microsoft and OpenAI going forward.


It's easier to understand the language on a syntactic level, but the program itself is Turing-complete, and we don't even have a decent automatic verification tool.


I would like to add, automation comes in stages. Some jobs are eliminated, then more, then more.

I just don't see any of that happening in software. Yes the tools change, but the number of jobs isn't going down.


Even more simply, if something took a person-year to write, it will take at least a person to maintain it in perpetuity, as bits rot, especially after the original authors disappear.


I think software development improvements follow an exponential curve.

It is, and has been, extremely slow. Until it is not, and then it will grow very fast in a short time.

And then it will grow even faster.


Also: whenever significant reductions in complexity are achieved, the result is a more expansive usage of software, not a reduction of engineers to achieve the same result.


I completely disagree. The new tools (Bubble, Zapier, Airtable, Webflow, etc.) make it an order of magnitude easier to create applications (and even relatively complex ones!)


And according to the law of leaky abstractions, they are 4 more complex systems you need to understand when they break down.


None of this will scale to more than 10 users/sec.


contrary to what whiteboard interviews test for, programming is more of an art than science, and computers are generally bad at determining what is good art.


This sounds like it was written by someone who hasn't worked with ERP systems, to give an example of where software will never eat software. "Can we make A work with B?" -- a lot of businesses tie together Salesforce + 8 other systems, and that's how their business ticks. And a whole cottage industry forms around it: consultancies, etc. I need to see a clear non hand-waving explanation for how all of that complexity melts away.

The industry has already tried commoditizing by off-shoring. What we learned was high-performance teams require psychological safety and trust. The human factors involved in creating software are why engineers are not plug-n-play. Because that reduction of the problem doesn't describe how the software is actually made: product solicits customer interviews/data to recommend new features, architects brainstorm a high-level solution, and the IC engineers implement the vision. Human factors, through and through.


> This sounds like it was written by someone who hasn't worked with...

Every broad article I've seen like this speaks about 'software' as if it is a monolithic career path. The lives of web programmers, embedded engineers, AI researchers, ERP programmers (etc, etc) are all quite different. Most of the articles I've seen on programming/software engineering don't capture the things I've experienced over my 23 years as a programmer.


And then there's the "invisible programmers", the ones who might cross-train as IT technicians, who write internal-only software on an as-needed basis. Need a report? Need a webapp so people can work with a business process database? Need to integrate a CRM into Jira by pulling information out of its backend database nobody has a schema for? Not the kind of stuff they teach you in school, bucko, but someone has to do it.


To me this is a super underrepresented group if you can call it that. Tons and tons of mid market companies have more of these programmers than traditional CS grads.


When is the last time anyone considered an Application Analyst a developer? On a small scale, they often do that.


I believe they were referred to as catfish programmers here a while ago. We say hello.

https://news.ycombinator.com/item?id=14564455


100% agreed that people are performing higher level functions today that software can't perform.

However, I think your first point about multi-system interconnectivity is ripe for change.

It's been the case that the literal act of running a business has been humans serving as copy-paste bots between systems, both internal and external. Come to think of it, from a purely software point of view, businesses look a lot like giant, multi-system ETL machines, except that the individual steps in the pipeline (Salesforce, SAP, Netsuite, etc.) don't talk to each other. This is even worse when it comes to interactions with other businesses (customers/vendors/partners) - everyone has different systems and none of them talk to each other.

So we fall back to the lowest common denominator - Email + attachments (XLSX, PDF), CSV over FTP etc.

The fundamental problem is not very different from the challenge of human language translations. Getting SAP to talk to Salesforce is a similar class of problem as enabling an English speaker to talk to a Hindi or Mandarin speaker. If the latter is a solvable (solved?) problem, I don't see why software talking to software is that different. There are of course domain specific challenges, like the fact that both systems being translated between require 100% translation accuracy.

We are working on solving this at https://42layers.io. It's early days for us, but this is exactly the problem we are solving.


The way I see it, the fundamental problem is that the producers of these systems don't want them to talk freely to each other. Every vendor wants to control the conversation (and when you see someone calling their product a "platform", you know they want to control all the conversations in a given sector). E-mail is the lowest common denominator that works, because it happened in the age where computer technology was developed to coordinate, not to compete and control.

Conversations between systems are an easy problem, in the same way translating English to Mandarin is easy if both people are also fluent in Hindi - they can round-trip through the shared language. Systems designers can also negotiate a common protocol. It doesn't have to be automated; it can work just as well with some programmers continuously keeping the protocols up to date. The problem is, there's a strong business imperative not to do any of that.


Super underrated comment. What's even worse is that, internally, large companies have small groups that create bespoke solutions and then try to sell those solutions to the rest of the company. I've seen so many "final frameworks" that are going to solve all of the problems that are then sold to all of the other groups, who try to move their stuff onto that framework, but wait now Team X has another framework and oh man which one should we follow? It's just a new version of the fundamental problem of getting diverse human groups with diverse needs to standardize on some solution, with all of the economic incentives you described mixed in. Frankly I think the people who got this problem closest to right were the American Founding Fathers. This is fundamentally a political problem. The best technical solution I've seen proposed is in the talk "Architecture Without an End State," where the speaker talks about how to make smart decisions in a decentralized environment.


Brilliant!

> English to Mandarin is easy if both people are also fluent in Hindi

This is so true. In one of my tasks as a consultant at a law firm, this is literally what happened when working on a plaintiff side case.

A partner spoke Mandarin, Japanese and Hindi while I spoke English and Hindi. We were called upon by translators a lot to proof eDiscovery case files.


re: 42layers - a quick look at your site shows a "contact us" - so, contacting you this way - how are you solving this hindi/mandarin problem? :-)


I'll try and answer that to an extent here :)

Lots of companies are trying to build low-code solutions to help business people glue systems together. However, for pretty much all of these solutions, while the end user isn't writing code, they are forced to think like a programmer - if/else, loops etc.

We are taking a very different approach.

We've built a transform engine that can be trained to transform data from a source structure to a destination structure using a few (10s of) examples of source and destination. We can do this transformation without falling into the trap of figuring out acceptable confidence levels - a trap that most ML systems fall into, and thus have a hard time with enterprise usage.

We couple that with dynamic, configurable integration infrastructure ("connectors" in old school enterprise speak) that can send+receive data to/from lots of systems over many protocols and serialization systems.

End result is that end users can connect systems together with a few clicks and by providing a few lines of training examples, not unlike what a business person would give a dev and say "extract a CSV from SAP and put it in that FTP folder. the CSV needs to look like this file"
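A toy sketch of the general "learn from a few examples" idea (emphatically not our actual engine, which has to deal with nesting, type coercion, and ambiguity): infer a flat field mapping from a handful of paired records, then apply it to new ones.

    def learn_mapping(examples):
        # examples: list of (source_dict, destination_dict) pairs.
        candidates = None
        for src, dst in examples:
            pairs = {
                (dst_key, src_key)
                for dst_key, dst_val in dst.items()
                for src_key, src_val in src.items()
                if src_val == dst_val
            }
            candidates = pairs if candidates is None else candidates & pairs
        # Keep only destination fields matched by exactly one source field.
        by_dst = {}
        for dst_key, src_key in candidates or set():
            by_dst.setdefault(dst_key, set()).add(src_key)
        return {d: next(iter(s)) for d, s in by_dst.items() if len(s) == 1}

    def apply_mapping(mapping, record):
        return {dst: record[src] for dst, src in mapping.items()}

    examples = [
        ({"AccountName": "Acme", "Amt": 100}, {"customer": "Acme", "total": 100}),
        ({"AccountName": "Initech", "Amt": 250}, {"customer": "Initech", "total": 250}),
    ]
    mapping = learn_mapping(examples)  # {'customer': 'AccountName', 'total': 'Amt'}
    print(apply_mapping(mapping, {"AccountName": "Hooli", "Amt": 75}))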


>>> forced to think like a programmer

That is the failure of every ORM and visual programming tool I have seen.

(But I happen to think that being forced to think like a programmer is good - on the order of being forced to think like a literate person would have been a few hundred years ago)

But interesting if you can do it.


In your specific Salesforce scenario, they built Salesforce to stop people from coding CRM systems, and then people continue to make money building connected abstractions/apps on top of other systems that are Salesforce-compatible, so you get an app ecosystem that abstracts away all of these integrations. The result is less code.

I agree that there is a lot of complexity specifically for "mission critical" or "last mile" systems that will not be addressed by the mainstream abstractions for many organizations, but I don't think SalesForce is necessarily the best example. I see the author's hypothesis freeing up time to do a lot more things within organizations that are otherwise on the back burner because you can't get to that feature set, and/or pivoting to solve either a) complex problems that are not yet solved or b) specializing on a layer that is now "platform". Somebody builds AWS, and Azure, and GCP. Somebody has to create, build and maintain the next platform / abstraction too.


I think your fallacy is in the "less code" assumption when you say "The result is less code". I'd argue that empirically we've seen this to be false. The result isn't less code, at least in a global sense; it's more productivity, more features, more customization, and more specificity, at less code per feature. Software has really interesting economics where, as the cost per feature decreases by some factor, say 1x, the set of features that can be profitably worked on expands by something like 10x; so, paradoxically, as cost per feature decreases, it makes sense to hire more engineers and expand R&D budgets.
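A toy model of that economics (with a made-up long-tail distribution of feature values, so the numbers are illustrative, not data): a feature gets built only if its value exceeds its cost, and under this particular tail, cutting the cost per feature by some factor grows the set of viable features by roughly that same factor; heavier tails expand even faster.

    def viable(values, cost):
        # A feature is worth building only if its value exceeds its cost.
        return sum(1 for v in values if v > cost)

    # 10,000 candidate features whose value follows a made-up long tail (1000 / rank).
    values = [1000.0 / rank for rank in range(1, 10_001)]

    for cost in (10.0, 5.0, 1.0):
        print(f"cost per feature {cost:>4}: {viable(values, cost):>4} features worth building")
    # -> 99, 199, and 999 features respectively: each halving of the cost roughly
    #    doubles the viable feature set under this distribution.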


The Jevons effect is not especially peculiar to software.


I think ultimately, the question is whether this trend will result in "fewer programmers needed", which is the most important by-product of "less code" in the author's thesis.


Did we slash R&D budgets once we standardized on the x86 instruction set, thus needing fewer compiler devs? Did we slash R&D budgets when we moved from on-prem to cloud hosting? We have seen this happen many times before; we know the economics. Decreasing cost per feature is synonymous with increasing productivity. We know that a 1x increase in productivity results in a very large increase in the number of features that become feasible.

There isn’t some fixed factor here that causes it all to collapse. Productivity increases are plowed into growing the market 10x and building the business, not reducing eng budgets. At some point in the future this will slow down, but that is so far from happening, like many decades from now, maybe never in a non theoretical sense.


Yup - this is a much better way of describing the intent of my words. Thanks.


> The result is less code.

No, it's more code with a greater value:code ratio. It's less code for the same delivered value, but no one stops at the same delivered value that they'd have without whatever tipped that ratio, because the incremental value for the next unit of code is higher.

Increasing the value delivered per unit of code increases the volume of code purchased.


Related, if maybe not quite the same thing: https://en.m.wikipedia.org/wiki/Jevons_paradox


Yes this is a much more elegant way of putting it. Thanks.


>> The result is less code.

Maybe, but that code tends to be extremely bad quality, because it is always written by "consultants" who know just enough programming to be dangerous and do the bare minimum to get the integration to work, without any concern for or the ability to follow software engineering best practices. And that introduces its own costs.


I think no-code is different than off-shoring. Whereas with offshoring you needed a domain expert, some kind of software architect who maps out features, and the off-shored team, with no-code you usually need only the domain expert.


>This sounds like it was written by someone who hasn't worked with ERP systems, to give an example of where software will never eat software.

Software has already been eating software. Imagine building something like Salesforce or an ERP system using only Assembly. Just as programming languages like Java became an abstraction level over Assembly and simplified development of complex systems, something else will emerge (or is already emerging) as a higher level abstraction and will enable creating even more complex systems.

>The industry has already tried commoditizing by off-shoring.

Offshoring doesn't create a new abstraction level.


Zoom seems to do alright.


The problem is that while it is economical to hire a team of developers to automate a simple process executed by a lot of people, there is a huge number of complex processes executed by a small number of people in companies. And you are never going to be able to justify a team of 3 developers or more to automate and maintain the code for the job of just one guy.

Machine learning won't be the answer either. Machine learning is just another kind of software: you still need to set it up and maintain it. And you need data to train it, which for these complex processes often doesn't exist.

The solution really is for non-developers to write code to automate their tasks themselves. Here, simple code with a simple platform to run that code is the only solution. But we are taking the opposite direction. Newer generations are becoming increasingly remote from how computers work (teenagers seem to be struggling even with a file system), platforms are increasingly becoming locked down (both consumer and corporate environment). And I dispute the claim that software is getting easier. I mostly work in a .NET environment, and I think the platform is becoming increasingly messy and complicated; we are moving away from simple things. Same with technologies: every time I go back into the Azure portal website, I feel lost among the hundreds of products with evasive names.

What we need is the power of the almost "draggy and droppy" features of VBA, something end users can play with. It is shocking how many office processes rely on such an antiquated and neglected technology.
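To be concrete about the level of code I have in mind (hypothetical file names), it's the kind of ten-line script that merges a folder of monthly CSV reports into one summary file - a task that otherwise lives in manual copy-paste.

    import csv
    import glob

    rows = []
    for path in sorted(glob.glob("reports/2020-*.csv")):  # hypothetical monthly exports
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                row["source_file"] = path  # remember where each row came from
                rows.append(row)

    with open("summary.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)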


>Newer generations are becoming increasingly remote from how computers work (teenagers seem to be struggling even with a file system), platforms are increasingly becoming locked down (both consumer and corporate environment).

Of course teenagers struggle with a file system. Your iPad/iPhone/Android App shields the user from any and all meaningful interaction with the OS.

Many families don't even have a PC at home anymore. So there's no chance for them to gain this experience.


This is what's kinda crazy to me. I'll be 26 in a few months, and I was by no means raised doing the technical PC tasks that many of my older peers did. The first computer I remember using was an old Windows 95 desktop my mom got for doing her homework in college (teen mom).

I have a brother who is 15 and he doesn't know how to use a computer beyond using YouTube and Facebook. And I constantly hear things from my parents about viruses and sketchy stuff ending up on the family laptop. Granted, not all of that is him or my other siblings, but it seems a lot of kids are missing a sort of digital literacy that many in my age group grew up with. I somehow know what a sketchy download button looks like. He has no idea.

"It said download so I clicked it" is often a response I hear.

What's more frustrating though is that my brother is not a great student. He was adopted and is getting to the age where he's starting to act out and I totally understand why. He's disillusioned with his own education and can't be bothered to care. For someone in his situation, digital literacy could give him access to a good job and a healthy adult life by learning to program, and I could help and mentor him along the way, but I know already it's going to be hard to convince him to take it seriously. I've hinted at it but I've only gotten sideways glances that scream "yeah right, I can't do that."

I'm not saying every kid needs to learn to be programmers, but we've abstracted so much technical learning away from them that it seems they're less prepared for a digital world, despite growing up surrounded by technology. Even the kids who are into tech stuff are being pushed into commoditized silos. Eg. Minecraft, etc.


Minecraft isn't a good example of your point; it's a better gateway to software literacy than most games kids of your generation were playing: from the logic circuits you can build inside the game, to at least learning how filesystems work by installing mods, and basic web server knowledge when you want a private environment to play with friends. Add to that some permanent or severely bothersome consequences for player mistakes, and I think it might be the best video game for kids to play now (and Factorio/Shenzhen I/O when they grow up :).


Seconding this; Minecraft was absolutely my door into server management, Java for building my own mods, and creative collaboration in an online world. Many fond memories, and it's hard to imagine I would have the career I do today without that game.


My first five years of computer education came just from trying to get various video games to boot.


Same - trying to free enough extended ram to run various games in DOS on a hand me down 286 was my first experience in troubleshooting and configuring an OS


> The solution really is for non-developers to write code to automate their tasks themselves.

Most of my consulting customers can't even express the problem they are trying to solve. They struggle to decompose the requirement to its constituent parts.

The few that can do that could easily become programmers.

I've used a few "draggy and droppy" tools and they can make a programmer more productive in certain domains but they can't turn a non-programmer into a programmer.


> Most of my consulting customers can't even express the problem they are trying to solve

I keep on repeating this here on HN and every time I mention it I get downvoted to hell.

Programming is not intuitive. Iteration is not intuitive. Object design is not intuitive.

If someone gave a random guy a bunch of 2x4 and 2x6 studs and asked them to make a wall, they won't even know where to begin.

Software is way more complicated than building a frame and bolting drywall on.


> Machine learning won't be the answer either. Machine learning is just another kind of software: you still need to set it up and maintain it. And you need data to train it, which for these complex processes often doesn't exist.

Machine learning will learn to set up, maintain, and train itself. /s

I understand the desire to have non-developers write code for themselves, but the problem is that the quality and reliability of that code can be utterly terrible and they don't have the expertise for the edge cases, so there would still need to be at least an intermediate developer overseeing these 20 part-time very junior people simply because some of those processes would eventually go haywire and do something dangerous or destroy some data.


> I understand the desire to have non-developers write code for themselves, but the problem is that the quality and reliability of that code can be utterly terrible and they don't have the expertise for the edge cases,

People should write code for themselves, and not all code needs to be good quality or have all edge cases covered. There's nothing wrong with someone making a tool for their job and handling the edge cases as they occur.


You just about perfectly described 75-95% of the Excel sheets that run an enormous number of business processes around the world.


Microsoft Access could be a better tool for a lot of those spreadsheets, I might wager! It would teach user interface design, simple relational database concepts, data types (!), and more. I really wish MS hadn't turned this product into a dead end; it seems like a lost opportunity to give aspiring devs a path to learning more capable systems.
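
To make the "simple relational database concepts" point concrete, here's a toy sketch (using Python's built-in sqlite3 module rather than Access itself, and with a completely made-up customers/orders example) of the mental jump from one big spreadsheet tab to two related tables with real data types:

    # A toy sketch of the jump from "one big spreadsheet" to a tiny relational
    # model, the kind of thing Access quietly taught people. The table and
    # column names here are invented, purely for illustration.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE customers (
            id   INTEGER PRIMARY KEY,
            name TEXT NOT NULL
        );
        CREATE TABLE orders (
            id          INTEGER PRIMARY KEY,
            customer_id INTEGER NOT NULL REFERENCES customers(id),
            total       REAL NOT NULL,  -- a real data type, not "whatever the cell holds"
            placed_on   TEXT NOT NULL   -- ISO dates instead of free-form text
        );
    """)
    con.execute("INSERT INTO customers (id, name) VALUES (1, 'Acme Corp')")
    con.execute("INSERT INTO orders (customer_id, total, placed_on) VALUES (1, 49.50, '2020-05-01')")

    # One customer, many orders: the relationship lives in the schema,
    # not in rows copy-pasted across sheets.
    for row in con.execute("""
        SELECT c.name, COUNT(o.id), SUM(o.total)
        FROM customers c JOIN orders o ON o.customer_id = c.id
        GROUP BY c.id
    """):
        print(row)  # ('Acme Corp', 1, 49.5)

Access walked people up exactly that ladder with forms and wizards instead of raw SQL, which is why killing its upgrade path feels like such a waste.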


ML can't learn to understand the business needs of a company; that's something even external human developers constantly struggle with, and it can affect the usefulness of an application much more than occasional bugs do. On the other hand, non-developer employees might not be able to produce optimized code and they'll make rookie mistakes, but they know how their company works; they understand the business needs and processes.

IMHO the win-win approach would be to have apps designed by employees with a drag-and-drop UI and wizards for setting the logic rules, and then to let ML analyze that and generate high-quality code from it.
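
Purely as an illustration of what "wizards for setting the logic rules" might mean in practice (everything here, from field names to actions, is invented), the rules could just be data that the business user authors and that generic or generated code later executes:

    # Everything here is invented, just to show the shape of the idea: the
    # "logic rules" a wizard would capture are plain data authored by the
    # business user, and generic (or later, generated) code executes them.
    RULES = [
        {"if_field": "amount",  "op": ">",  "value": 10_000, "then": "require_manager_approval"},
        {"if_field": "country", "op": "==", "value": "US",   "then": "route_to_us_team"},
    ]

    OPS = {">": lambda a, b: a > b, "==": lambda a, b: a == b}

    def actions_for(record: dict) -> list:
        """Return the actions triggered by a record, per the rule table above."""
        return [
            rule["then"]
            for rule in RULES
            if OPS[rule["op"]](record[rule["if_field"]], rule["value"])
        ]

    print(actions_for({"amount": 25_000, "country": "US"}))
    # ['require_manager_approval', 'route_to_us_team']

The employee owns the rule table; whether a generic engine interprets it or ML-generated code compiles it down becomes an implementation detail.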


That may be the case, but 1) how much of that is them using tools that are still too complex, and 2) is bad code better or worse than no code at all?


I feel like Retool and external.io are trying to do exactly this.


> The problem is that while it is economical to hire a team of developers to automate a simple process executed by a lot of people, there is a huge amount of complex processes executed by a small number of people in companies

What I see is that small companies get merged into bigger ones once they are no longer competitive with companies that have automated their processes.


I don't buy the argument that we will have such a leap in software development productivity that we need far fewer people to solve all the things that need technical solutions. You can unravel the abstractions we build on all the way down to the bits and bytes, and 90% of software is just gluing libraries together. In fact, you could probably describe the entirety of several big tech cos as "just" gluing libraries together.
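
To put a face on the "gluing libraries together" claim, here's a deliberately trivial sketch (the URL and field names are invented) of the shape a huge amount of production code actually takes:

    # A caricature of "gluing libraries together": fetch some JSON, reshape it,
    # write a CSV. The URL and field names are made up; the point is that the
    # genuinely hard parts (HTTP, parsing, file formats) are someone else's library.
    import csv
    import json
    from urllib.request import urlopen

    def export_report(api_url: str, out_path: str) -> None:
        with urlopen(api_url) as resp:      # the library handles HTTP
            records = json.load(resp)       # the library handles parsing
        with open(out_path, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=["id", "name", "total"])
            writer.writeheader()
            for rec in records:             # the "glue" is basically a for loop
                writer.writerow({k: rec.get(k) for k in ("id", "name", "total")})

    # export_report("https://api.example.com/v1/orders", "orders.csv")

Even in a toy like this, the decisions (which fields matter, what happens on bad data, where and when it runs) are the part that doesn't go away, which is why I don't expect the headcount to go away either.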

Anyway, the other argument about salaries is more interesting. Most people seem to agree that there's a huge untapped crowd of qualified developers in small and mid-sized US cities who would love to join $BIGCO, but the only reason they haven't is that it involves relocation. As an example, a Sr Dev in Orlando, FL makes $100-120k in total comp while one in SF / NYC makes $350k+. I limit my comparison to Sr Devs because I assume college kids are happy to move to exciting cities like NYC / SF / Seattle on fat relocation checks.

My suspicion is that supply and demand have already converged and big tech has mined out the supply of talented devs in the US. The other data point here is that companies have made it as easy as possible for folks to join by opening dev centers wherever there's talent - NYC as a tech hub wasn't a thing in 2012, but it's huge now for all the people who don't want to leave the East Coast. Boston is pretty big. Colorado and Austin as well.

The only ways the supply of devs increases here are:

* Sr Devs who did not move to tech hubs because they preferred to stay where they are. (Personally think this is unlikely)

* Qualified Bootcamp graduates

* CS Enrollments hitting pretty high numbers, so maybe we'll start graduating lots of CS folks.

* Immigration reform / Outsourcing

* Interviewing changes so we skip the algo problem-solving shenanigans.

I personally think if big tech wants to hire in the US and still pay lower $ than they currently do, the only lever they have left to pull is the interviewing format / bar.


>Sr Devs who did not move to tech hubs because they preferred to stay where they are. (Personally think this is unlikely)

You would be surprised at how many people value their hometown or wherever they have settled. Only in SF does being technical equate to being highly ambitious. There are smaller, slower, steadier tech companies (probably using the Microsoft stack) outside of the tech hubs that offer stable jobs with decent pay and good work-life balance. Being a software engineer in SF means constantly learning new tech and 'keeping up', but if you're not building a massively scalable consumer-facing product, that doesn't matter so much. In SF even B2B SaaS is built like this, but it doesn't have to be.


Yeah, but the best of the best? Most of those people relocate.


I think you're confusing 'competitive' with 'skilled'. They are correlated but not the same thing. The best of the best who want to be the best relocate. The best of the best who don't care to be the best stay where they are. People in the second category are driven by an intense interest in their subject and their work rather than competition.


> Being a software engineer in SF means constantly learning new tech and 'keeping up'

No, it does not.

> You would be surprised at how many people value their hometown or where they have settled.

I would love to see actual data about this rather than articles from hometown newspapers and posts by hometown residents, enthusiastically praising their way of life. It's easy to argue the counterpoint as well, right?

1) There are many jobs that have to be done in person. 2) Many people prefer to live in large cities and accept the downsides in order to get the benefits.

So, next time you want to make a claim like this, can you share anything objective about this? Thanks.


I know this is more anecdata, but I'm from Ohio and worked on a joint project with a couple of New York devs at #{famous company}. They were very good, and I learned a lot from them, but I could definitely hold my own with them technically (as could another senior dev from my company). We're both family men in the Midwest with no desire to move. Even if you offered me $350,000 or whatever I'm still staying here. But I would obviously take a remote job with #{famous company} if it paid 60-70% of that and I felt like I wouldn't be a second-class citizen as a remote dev.


Chiming in as a Midwest engineering manager here (Michigan). There's no lack of talent in the Midwest, although it's certainly a different calculus to try and match hiring to the supply/demand of engineers here and not everyone does so appropriately.


>> Being a software engineer in SF means constantly learning new tech and 'keeping up'

> No, it does not.

Maybe you consider Kubernetes an old technology by now? Maybe you consider React.js an old technology by now? What about docker? How about ES6?

People at slower tech firms are still building working B2B web services with ES5 jQuery and ASP.NET. The engineers there have been working with jQuery since its inception. They know it inside and out and have the skill and depth of knowledge to work around its drawbacks and design flaws.

This is from my experience working at smaller tech firms. I've moved to the city now and I can see the difference in tech and I can feel the difference in attitude too. I'm not going to link you a study or any data because no one is out there studying this stuff. This is opinion not science.

> 1) There are many jobs that have to be done in person. 2) Many people prefer to live in large cities and accept the downsides in order to get the benefits.

Both of these statements are true, but I don't see how they are relevant. I'm not denying either of these facts, but it doesn't stop the small town engineers from existing.


> Maybe you consider Kubernetes an old technology by now? Maybe you consider React.js an old technology by now? What about docker? How about ES6?

My father (and many others like him) has been programming in C at a prominent SV company for the past 15 years (OK, it's based in the South Bay). I know many people in SF doing similar jobs, just coding away in Java or C++. Those people come to work, do their work, then go home. They don't tweet, or write Medium posts, or have dark green GitHub activity profiles. They don't work with you, so they don't talk to you. You aren't aware they exist. However, these people build many of the systems that make our day-to-day lives possible.

> I'm not denying either of these facts but it doesn't stop the small town engineers from existing.

I'm not saying they don't exist, I'm just saying that there just aren't that many of them.


Fair enough and I agree there aren't that many of them.

I have a lot of respect for people like your father. That's why I wanted to represent the small town devs who are similar in many ways. I personally am a little sick of the whole scene and constant newness.

I'm learning C++ in my free time because I've become disillusioned with life as a JavaScript developer. I knew there are C jobs in embedded systems and OS dev, but I didn't know there were still a lot of C++ jobs around outside of games.


> I personally think if big tech wants to hire in the US and still pay lower $ than they currently do, the only lever they have left to pull is the interviewing format / bar.

The fact that big tech has not yet changed the interview format shows, I believe, that this convergence has not yet happened. The system is designed to tolerate false negatives (qualified grads who get rejected). If these companies had a true demand for more engineers, at some point they would revise the way they evaluate candidates to limit the number of qualified rejections.


If supply is tapped out, then the cost of a mis-hire also increases (higher wages, more difficulty replacing someone in a timely manner, etc.). This might counteract any downward pressure on the hiring bar.


However, they do push hard for H-1B visas.


Also, in Europe you can get a very good developer for 5,000-7,000 euros per month.


That's a pretty healthy salary in most parts of the US as well. I'm still young and make around $5k/mo USD before taxes as a web developer in Texas. Take-home is around $4.4k.

My SO and I have $30k left of student loans to pay off and then we're throwing our entire salary at buying a house before our city becomes even more expensive. We're hoping to be able to buy that house before the end of 2021.

Luckily we aren't in Austin or we'd be screwed already, but many parts of our city are already too expensive to own property in unless you're making $200k/yr or are comfortable leveraging more of your salary towards housing.

Now, we could move and I could keep my salary since I work for a fully remote company, but my SO's job is here in the city and she wouldn't make the same salary in a smaller market. We'd also be leaving our friends/family just to save some money on housing costs so the benefits aren't really worth it. It's dumb to have a really nice house in the middle of nowhere if we don't have visitors to share and enjoy it with.

So even with my healthy salary in a lower cost of living city, we're still struggling to get ahead due to student loan debt, healthcare costs, and housing costs. I would love to move to Europe and even take a slight pay cut to live in a more cohesive society, but from what I've researched, getting a visa without having $$$ in assets is difficult.

Idk where that was all going, but thanks for listening :)


Getting a visa and work permit for a programming job is pretty easy in many European countries once you have the job offer. There are coders from all over the world working in places like Berlin and Amsterdam.


A big breakthrough could come from finding ways to use contractors instead of employees, or in other words, allowing people to contribute when they want rather than being tied to a full-time employment contract.

Obviously there are big, possibly insurmountable, obstacles related to the cost of onboarding and learning bespoke tech stacks, the need to preserve trade secrets, and serial dependencies that require work to be performed quickly.

