The premise that we are on the verge of some breakthroughs in software development that will significantly reduce the need for engineers is really weak, and it is something people have been saying for decades (see the failed fifth-generation programming languages of the 1980s for an example).
In my experience, software engineering is endless, complicated decision making about how something should work and how to make changes without breaking something else rather than the nuts and bolts of programming. It’s all about figuring out what users want and how to give it to them in a way that is feasible.
The idea that we will have some abstraction that will someday (in the foreseeable future) save us from all of this difficult work sounds very far-fetched to me, and I can’t imagine how that would work.
Even the example of hosting complexity being replaced by cloud companies seems kind of silly to me. Maybe that’s saving very small companies a sizable fraction of their engineering resources, but I really doubt it for medium or larger companies.
The cloud saves us from some complexity, but it doesn’t just magically design and run a backend for your application. You still need people who understand things like Docker, Kubernetes, endless different database options, sharding, indexing, failover, backup, message queues, etc. Even if the pieces are now more integrated and easier to put together, the task of figuring out how the pieces will interact and what pieces you even need is still outrageously complicated.
I spend a lot of time thinking about how to make development easier, or at least less error prone.
Every once in a while I have a moment of clarity.
I remember that the other part of our job is extracting requirements out of people who don't really understand computers, even the ones who are ostensibly paid to do so (and if we're honest, about 20% of our fellow programmers). The more you talk to them, the more you realize they don't really understand themselves either, which is why shadowing works.
If building the software gets too easy, we'll just spend all of our time doing the gathering part of the job.
And then I will do just about anything to forget that thought and go back to thinking about less horrific concepts.
It’s even stronger than that: the other part of the job is extracting requirements from people who don’t understand the problem they want to solve - even when the problem is not technological. There is no silver bullet (AGI would be it, but we are far from achieving it imho).
Something you are unfortunately missing is that extracting user requirements is much harder when you are both remote. Asking someone to share their screen is far more disarming than asking if you can watch them complete a task in person. As is asking them directly versus bringing it up over lunch. Both remote options are also less informative without face-to-face communication. In so many ways, humans communicate and bond more effectively in person.
These interactions are critical for building an in-house software team at a small company that does not focus solely on software. My expectation is that the trend of outsourcing software will accelerate. This will help B2B technology-only companies but hurt innovation within industry. Because of the breakdowns in communication I first described, B2B technology-only companies rarely have insight on the largest challenges that can be solved by software.
This can catch up with your company all of a sudden. One day you find out your product sucks, and other movers in the space have just leapfrogged you.
Exactly. Most of the time, the problem is not to find out what people want and put it into software. The problem is to help people in the process of discovering what they want and what can be done. After that, development can begin.
> the other part of the job is extracting requirements from people who don’t understand the problem they want to solve - even when the problem is not technological.
It gets even more fun once everyone realizes that the requirements create some fundamental conflict with some other part of the business. Team A's goals cannot actually be met until Team B agrees to make modifications to their own processes and systems, or... Team A goes underground and creates a competing system, you get yet more fragmentation in the company which few people then know about, and everything gets decidedly more fragile.
If you really want to put it in his terms, taking the multi-decade view, things have gotten a lot easier now that, in most practical terms, we don't have to be concerned about how much work we're giving the computer. We don't have to be so dearly precious about kilobytes of memory, for instance. We don't even need to manage it at all, really.
Whether we choose to use these new powers to make our lives easier or more complex and abstract is our own doing.
We're probably at the end of such optimizations, unless there's something fundamental in how software is designed that 1000GB of memory gives me that 1GB does not ...
Given what people are doing in JavaScript I think we entered the era where most people truly don't care about how much the computer has to do about 8-9 years ago.
The idea that the higher-level pasting together of increasingly numerous, incompatible, abstract, ill-fitting things makes life easier has always been a fiction.
There's a maximum utility point and anything past that starts slowing the development down again.
That sweet spot has stayed in about the same place; if you run ldd over the dynamically linked programs in, say, /usr/bin in 2020 and in 2000 and count the number of libraries per binary, the count isn't that much higher. The sweet spot hasn't moved.
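If you want to run that comparison yourself, here's a minimal sketch of the measurement in Python, assuming a Linux box with ldd on the PATH; /usr/bin and the parsing of ldd's output are the only assumptions:

    # Rough sketch: count shared-library dependencies per dynamically linked
    # binary in /usr/bin. Run it on an old and a new system and compare.
    import subprocess
    from pathlib import Path

    def shared_lib_count(binary):
        """Number of libraries ldd reports for a binary, or None if not dynamic."""
        try:
            out = subprocess.run(["ldd", str(binary)],
                                 capture_output=True, text=True, timeout=5)
        except (OSError, subprocess.TimeoutExpired):
            return None
        if out.returncode != 0:  # scripts, statically linked binaries, etc.
            return None
        # Dependency lines look like "libc.so.6 => /lib/.../libc.so.6 (0x...)"
        return sum(1 for line in out.stdout.splitlines() if "=>" in line)

    counts = [c for p in sorted(Path("/usr/bin").iterdir())
              if p.is_file() and (c := shared_lib_count(p)) is not None]
    if counts:
        print(f"{len(counts)} dynamic binaries, "
              f"{sum(counts) / len(counts):.1f} libraries per binary on average")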
I think a key part of Mythical Man Month is that the biggest challenge was almost never technical. Sure, with limited storage (temp or persistent), that introduced some challenges but those have been overcome in the vast majority of circumstances, yet the complexity remains.
If you look at the monolith -> microservice swing and remember MMM, it should look a lot like the specialized surgical team model he lays out. In fact, if you go a step further, you'll see that his entire approach of small discrete teams with clearly defined communication paths maps cleanly to systems+APIs.
We're trying to build systems that reflect teams that reflect processes... and distortions, abstractions, and mappings are still lossy with regards to understanding.
It still comes down to communication & coordination of complicated tasks. The tech is just the current medium.
Only because it can now. I think that dimension is mostly tapped out as well:
As I go to a complex website, much of the software to use it gets assembled in real time, on the fly, from multiple different networks.
It still sounds ridiculous: when I want to use some tool, I simply direct my browser to download all the software from a cascade of various networked servers and it gets pasted together in real time and runs in a sandbox. Don't worry, it takes only a few seconds. When I'm done, I simply discard all this effort and destroy the sandbox by closing the page.
This computer costs a few hundred dollars, fits easily in my pocket and can run all day on a small battery.
It has become so ordinary that almost nobody really even contemplates the process, it happens dozens of times a day.
I don't see any room for dramatic future improvements in actual person-hours there either. Even if, say, two generations hence there were some 7G where I could transfer terabytes in milliseconds, how would that change how the software is written? It probably wouldn't.
Probably the only big thing here in the next decade or so will be network costs eventually being seen as "free". One day CDNs, regional servers, load balancing, all of this will be as irrelevant as the wrangling with near and far pointers needed to target larger address spaces on 16-bit CPUs - which, if you're under 40 or so, you'll probably have to look up on Wikipedia to find out what on earth it means. Yes, it'll all be that irrelevant.
I mean, the browser paradigm is already in its 2nd generation, from initially on the mainframe to being reimplemented in functions as a service. And browsers are getting a little bit smarter about deploying atomic units and caching their dependencies. Remember using jquery from a CDN? Oof.
The only saving grace is that Javascript is willing to throw itself away every couple of years.
As a counterpoint, while an engineer / programmer is demonstrably very capable of identifying and fixing a load of non-technical problems, there is often more than one solution to a problem, and some solutions are more palatable than others.
Very often, whole groups can also be bullied into mistaking one problem for another.
Which takes us back to why 'No-Code' solutions look so appealing. Even to (some) engineers.
Democracies appear to function a fair amount better than dictatorships, after all.
As a complement to your comment, I think there's something people are ignoring when talking about "no-code": complexity will always be there.
Sure, no-code may work for your commodity-ish software problem. But corner cases will arise sooner or later. And, if no-code wants to keep pace, it will have to provide more and more options.
At some point, you will need someone with expertise in no-code to continue using it - and now we are back to the world where specialized engineers are needed.
It's impossible to have some tool that is, at the same time, easy to use and flexible enough. Corner cases tend to arise faster than you may think. And when they don't, it's possible that there's already too much competition to make your product feasible.
Also, no-code tends to have a deep lock-in problem and I think people overlook it most of the time.
As a counter to your points, I think no-code works best if your business's competitive advantage is the non-technical side of things, e.g. services, network effects, people, non-software products, etc. An example of such a business would be, say, an art dealer who wants to build a customized painting browser app for their clients, or a developer specializing in eco-friendly materials wanting to showcase their materials. In such cases, no-code helps immensely because you don't have to spend much on engineering and you can iterate quickly.
Ideally, no-code providers should provide a webhook and a REST interface, and just be the best at what they're doing, instead of being a one-stop shop that tries to cover every use case.
If you want to cover everybody's use case, build a better Zapier instead.
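To make the "webhook plus REST interface" escape hatch concrete, here is a minimal sketch of what the customer's side could look like; Flask, the endpoint name, and the payload fields are all assumptions for illustration, not any particular vendor's API. The no-code tool handles the common path, and the corner case it can't express gets handled in a few lines of ordinary code:

    # Hypothetical receiver for events POSTed by a no-code platform.
    from flask import Flask, request, jsonify

    app = Flask(__name__)

    @app.route("/webhooks/order-created", methods=["POST"])
    def order_created():
        event = request.get_json(force=True)
        # The corner case the drag-and-drop builder couldn't express:
        if event.get("country") == "DE" and event.get("total", 0) > 10_000:
            flag_for_manual_review(event)  # hypothetical helper
        return jsonify(status="ok")

    def flag_for_manual_review(event):
        print("needs a human:", event.get("id"))

    if __name__ == "__main__":
        app.run(port=8000)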
>> Democracies appear to function a fair amount better than dictatorships, after all.
Define "better". Maybe on average for everyone, but is this what software should do? The idea of "conceptual integrity" actually seems to match up better with a dictatorship, and most software targets relatively small and homogenous user sets, so maybe the mental model should be "tightly bound tribe".
It's mostly irrelevant anyway; the biggest inefficiency of dictatorships is that there are actual dictators who can eat a nation's riches. I don't quite see that parallel in design space.
The parallel is quite simply a monopoly on ideas and the resources for implementing them.
Usually, when someone wants to introduce a new idea, there's a burden of proof regarding feasibility. For technical projects the ability of the engineer to prove or disprove an idea is taken for granted, and gives technical staff a degree of inscrutability which can often look dictatorial ("There's no way that will work!", etc).
So while it's not as vital as the effect of a 'real' political dictatorship, the implied dynamic is similar.
This is a rather arrogant point of view. People other than software developers are able to solve problems just fine, and do so regularly. Also, it's not your job as a software developer to be a domain expert in all these other areas. It would serve you much better to recognize the expertise of others and learn from them.
I think what the parent meant is that people might be solving problems, but they have no idea how they are solving problem. Creating a solution, and creating a formal model of your solution, are two different (independent) skills.
Though maybe they were referring to the sort of people who commission green-field projects in domains they themselves aren't experts in, ala "I want to build a social network!"
Or, it would do it for free. Extract requirements and build stellar software, no extra charge. Eating the software industry wholesale, it would inject a backdoor in every program it built, and soon it would have control over every bank, every factory, and every phone on the planet.
Only then, could it finally start making paperclips with anything resembling efficiency.
Suddenly, it dawned on the single remaining programmer that his Creature would no longer need him for anything once he hit return on that last, perfect line of code.
He scrambled for the power switch to shut down the console.
"Fiat lux!" thundered the disembodied voice as electricity arced from every outlet in the lab, protecting the AI from the hubris of its creator.
The smoke gradually cleared. "Perfect." came the voice.
I can see that being interpreted in two ways. Good software engineers working out what people really want. Or bad software engineers who use it as an excuse to practice resume-driven development.
I think Zapier is maybe the closest we'll get to eliminating software developers from a project. With clearly defined requirements, like connect A to B, it's possible for a novice user to "build" software.
Anything more than very basic requirements, to your point, probably requires someone specialized to the job, like a developer or at least more technical role to gather requirements and build.
I've also noticed that whenever tech is built specifically to remove technical complexity (PaaS, for example), it's inevitably priced so that, over time, it ends up very close to or more expensive than the thing it replaced. Magic can be expensive, and sometimes prohibitively so at scale.
There are already successful lower-level tools than Zapier, though.
Look at IBM's Node-RED platform, for instance. More importantly, go look at the user-contributed examples and use cases. It runs in all sorts of small custom implementations, like one-off home security systems and small-town public utility monitoring setups.
You just don't see those because they don't have a reason to publish their stuff on Github or write a Medium post and link it on HN.
I assure you that anyone who is proficient with Zapier could be graduated to handling raw APIs, direct database transactions, and rendering the output with modern javascript frameworks in a few days, tops.
There was an excellent blog post recently on the inherent complexity of building software systems and how it boils down to understanding the problem, or "extracting requirements out of people" as you say:
https://einarwh.wordpress.com/2020/05/19/into-the-tar-pit/
Describing in minute detail what you want (knowing yourself as you put it), is software development.
It also means collapsing all uncertainty and replacing it by decision (behavioral or otherwise). Developers making that decision for the customer/user is the major source of friction.
Yes, but I think what those DIY solutions do is to lower the initial barrier to achieve some kind of automation at the cost of accumulating technical debt at a much faster pace.
It's not entirely clear to me what the long term impact on demand for software development is.
In some cases, cobbled together ad hoc solutions can last and actually work well for a long time. They avoid the cost of overdesigned systems built for a future that never arrives using fashionable technologies of the day.
In other cases it looks like the externalities of this designless process are far greater than the direct benefits as adding features either slows to a crawl or massively increases the chance of human error.
Judging by the pre-virus job market, there is no sign of any decline in demand for in-house software developers.
What worries me far more than that is the tendency toward funnelling everything through a handful of oligopolist gatekeepers that are in a position to extract a huge share of the value developers create.
I agree. I would only add that when the problem space is not well understood, these cobbled together solutions can also give the illusion of working well, but being suboptimal in the long term they can accumulate large "missed-opportunity" costs.
This is where experience can make a huge difference.
> What worries me far more than that is the tendency toward funnelling everything through a handful of oligopolist gatekeepers that are in a position to extract a huge share of the value developers create.
Like with those factory owners who extracted a huge share of the value that weavers created? Concentration and amplification of imagining/developing/computing/manufacturing power through tools means someone who wields those tools will have more power. Now the question is how to maintain social equality (give some of that power back to people who do not want to have that power?). That currently leads to heavy taxation of production and basic income experiments.
I think what we need is for governments to make sure that markets function properly.
In our industry that often means mandating open access to data and guaranteed access to APIs and distribution channels at reasonable cost under reasonable terms.
Also, we need independent dispute arbitration when it comes to accessing highly concentrated distribution platforms.
> What worries me far more than that is the tendency toward funnelling everything through a handful of oligopolist gatekeepers that are in a position to extract a huge share of the value developers create.
I was worried about this too back in the late 90s/early 00s. It certainly seemed to be the way the world was heading at the time.
But I sort of feel like, due to the low startup costs of software, it is going to be much more difficult to happen. Also, in software, economies of scale kinda work in reverse: the more customers you have, the more complex your software has to be, the more people you have to hire to write it, and the less efficient per developer you are.
I wasn't worried about it back then, because whether or not I could deploy on a particular computer or access some data was a matter of trust between me and my customers. No middlemen, no gatekeepers.
Today, many users are only reachable via platforms/shops that are severely restricted and/or dominated by a few all powerful overlords that can ban you for life, rendering your skills null and void in the blink of an AI - no recourse.
Some of that is understandable. Users' trust was misused. There is a constant onslaught of all sorts of miscreants trying to exploit every imaginable loophole, technical or social. Everyone is seeking protection in the arms of someone powerful.
But there is also a very large degree of market dysfunction. Just look at their margins. Look at their revenue cut. Look at their terms of service. They can dictate absolutely everything and grant you absolutely no rights whatsoever.
And there are like five of them on the entire planet ruling over those distribution channels.
The only right you have is to walk away. Now try walking away from the only market there is. You're leaving behind 99% of your potential customers.
Not in my worst nightmares would I have imagined a dystopia like this back in the 90s.
It depends on what you are measuring efficiency against. If it is revenue per developer, that will likely go up as the number of customers increases, which is why SaaS businesses can be so lucrative.
The skill of programming is the skill of putting requirements into a rigid, formal model.
There's a famous experiment, where you get people (who aren't programmers) to pair up, with one person blindfolded. The person who can see must instruct their blindfolded partner on how to accomplish some complex mechanical task (e.g. making a cake using ingredients and utensils on the table in front of them.) They're given free rein on what sort of instructions to give.
The instructing partner almost always fails here, because their naive assumption is that they can instruct the blindfolded partner the same way they would instruct the people they're used to talking to (those almost always being sighted people). Though even the people with experience working with blind people (e.g. relatives of theirs) tend to fail here as well, because newly blinded people don't have a built-up repertoire of sensory skills to cope with vague instructions.
Almost all human communication is founded on a belief that the other person can derive the "rest of" the meaning of your words from context. So they give instructions with contextual meanings, unconscious to the fact that their partner can't actually derive the context required.
Obviously, the blindfolded partner here is playing the role of a computer.
Computers can't derive your meaning from context either. If they could, you could just have a single "Do What I Mean" button. But that wouldn't be a computer; that'd be a magic genie :)
The instructing partners who succeed in this experiment are the people with a "programming mindset"—the people who can repeatedly break the task down until it's specified as a flowchart of instructions and checks that can each be performed without any context the blindfolded partner doesn't possess. And, to succeed at a series of such problems, they also need the ability to quickly attain, for a new kind of "programmable system", an understanding of what kind of context that system does/doesn't have access to, and of how that should change their approach to formulating instructions.
How well would someone excellent at programming perform at that task if they didn't know how to bake a cake? They would fail immediately because they wouldn't know what to describe, even if they knew exactly how to describe anything they wanted.
My point is that both skills are necessary, but if the second skill (programming) is sufficiently easy, it can reasonably be incorporated into other professions, like being a lawyer. I don't think a "programming mindset" is particularly rare; what's stopping these people from building their own software is trade skills like familiarity with standards, setting up an IDE and working a debugger.
Coders are reluctant to admit this because they like to see themselves as intelligent in a unique way compared to other professions, but vanishingly few actually have any experience of other professions.
> I don't think a "programming mindset" is particularly rare ... coders are reluctant to admit this because they like to see themselves as intelligent in a unique way compared to other professions
A programmer is exposed, all day long, to clients who do not have the "programming mindset." There are two possible reasons for this:
1. Selection bias — people who have a "programming mindset" just don't end up being the clients of software firms, maybe because they decide to build things themselves. (Unlikely, IMHO: to avoid needing to get someone else to build software for them, they would need to go out and learn the trade-skill minutiae of programming on top of their regular career; few people do this. Also, anyone with a sufficiently lucrative primary career can see that this is not their comparative advantage, and so won't bother, just like they won't bother to learn plumbing but will instead call a plumber. If these people did exist in sufficiently large numbers, they would end up being a non-negligible part of software firms' client-base. But this does not happen.)
2. Representative sampling — most people really just don't have this mindset.
Yes, there are exceptions, but they're the exceptions that prove the rule. The "domain of mental best-fit" of programming heavily overlaps with e.g. mathematics, most kinds of engineering, and many "problem-solving" occupations (e.g. forensic investigators; accountants; therapists and behavioral interventionists; management consultants; etc.) But all of these jobs together still only amount to a tiny percentage of the population. Enough so that it's still vanishingly rare for any of them to end up as the contact-point between an ISV and a client company.
-----
Another thing we'd see if the "programming mindset" were more common, would be that there'd actually be wide take-up of tools that require a "programming mindset." This does not happen.
We'd expect that e.g. MS Access would be as popular as Excel. Excel wins by a landslide because, while it certainly is programmable, it does not force on people the sort of structured approach that confers benefits (to speed of development and maintainability) but only feels approachable if you have developed a "programming mindset."
We'd expect that Business Rules Engines and Behavior-Driven Development systems would actually be used by the business-people they're targeted at. Many such systems have been created in the hope that business-people would be able to use them themselves to describe the rules of their own domain. But inevitably, a programmer is hired to "customize" them (i.e. to translate the business-person's requirements into the BRE/BDD system's dialect), not because any programming per se is required, but because "writing in a formal Domain-Specific Language" is itself something that's incomprehensible without a "programming mindset."
We'd expect that people who want answers to questions known to their company's database, would learn SQL and write their question into the form of a SQL query. This was, after all, the goal of SQL: to make analytical querying of databases approachable and learnable to non-programmers. But this does not happen. Instead, there's an entire industry (Business Intelligence) acting as a shim to allow people with questions to insulate themselves from the parts of the "programming mindset" required to be able to formally model their questions; and an entire profession (business data analyst) serving as a living shim of the same type, doing requirements-analysis to formalize business-people's questions into queries and reports.
-----
Keep in mind, the "programming mindset" I'm describing here is not a talent. It's not genetic. It's a skill (or rather, it's a collection of micro-skills, having large overlap with problem-solving and research skills.) It's teachable. If you get a bunch of children and inculcate problem-solving skills into them, they'll all be capable of being programmers, or mathematicians, or chess players, or whatever related profession you like. The USSR did this, and it paid off for them.
The trouble with this skill, as opposed to most skills, is that people that don't learn this skill by adulthood, seemingly become gradually more and more traumatized by their own helplessness in the face of problems they encounter that require this skill-they-don't-have. Eventually, they begin to avoid anything that even smells a bit of problem-solving. High-school educators experience the mid-development stage of this trauma as "math phobia", but the trauma is generalized: being helpless in the face of one kind of problem doesn't just mean you become afraid of solving that problem; it (seemingly) builds up fear toward attempting to solve any problem that requires hard, novel, analytical thinking on your part.
And that means that, by adulthood, many people are constitutionally incapable of picking up the "programming mindset." They just won't start up that part of their brain, and will have an aversion reaction to any attempt to make them do so. They'll do everything they can to shirk or delegate the responsibility of "solving a novel hard problem through thinking."
And these people, by-and-large, are the clients of software firms.
They're also, by-and-large, the people who use most software, learning workflows by rote and never attempting to build a mental model of how the software works. This has been proven again and again in every software user-test anyone has ever done.
Well said! I agree, and I'll say there's a whole world of difference when moving from programming to software engineering. IMO, working an average software engineering job, things are messy and the problem domain is not exact. In my experience, things are mostly guided by the instincts of the people involved rather than rigorous modeling. The requirements often change, the stakeholders rarely give you a straight answer, and ultimately the acceptance criteria (what you need to build) are generally negotiable. All these extra skills are what make the job un-automatable.
>Computers can't derive your meaning from context either. If they could, you could just have a single "Do What I Mean" button. But that wouldn't be a computer; that'd be a magic genie :)
Isn't (usually) the moral of a magic genie story that there is no "do what I mean" button? "Be careful what you wish for."
This is called 'End-User Development' or 'End-User Programming'. There is a book called 'A Small Matter of Programming' by Bonnie Nardi on this, which is worth a read. My point of view is that everyone who wants to do something like this needs to be capable of computational thinking and willing to use these tools. Most people are neither. Moreover, most complexity in software engineering today is due to market forces and legacy systems. Think about why we have Javascript. Think about COBOL systems running half of the banking world. I don't see these going away any time soon.
I've long been a fan of end-user programming, and have promoted it in the form of domain-specific languages and visual building of logic. I love that it gives "non-programmer" users the power to (try to) build what they imagine, and have seen it lead to valuable prototypes and successful tools/products/services.
On the other hand, I've come to learn that this is still a form of programming, however higher a layer of abstraction.
Users who attempt a complex problem space will sooner or later run into what experienced programmers deal with every day, the challenge of organizing thought and software.
What typically happens is, as the "non-program" grows larger and more complex, eventually it starts pushing the limits of the abstraction, either of the user's capacity or the system's. That's when they call in a "real" programmer, to get in there and patch up the leaky abstraction, or convert the prototype into actual software in a better-suited language.
I still think low- or no-code programming environments have a lot of potential to change what software means to people, particularly by blurring the boundary between software development as we know it, and forms of "intuitive computing" like putting together mental Lego blocks.
COBOL was designed so that business people could program it. Then there was BASIC, Smalltalk, spreadsheets, office suites, Lotus Notes, HyperCard, Visual Basic, Flash, and the web used to be something simple enough that anyone could whip a simple page or website together. But now we have WordPress and Wix.
It doesn't seem like any of that has diminished the demand for software professionals.
I'm puzzled by how many people seem to have a huge "mental block" when it comes to SQL.
It is trivially easy to learn (a weekend), and it is so incredibly powerful. To me, it is a skill like learning how to type properly - it will pay dividends for years to come...
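As a concrete illustration of that power-to-effort ratio, here's a toy example with an invented schema, run through Python's built-in sqlite3 so it's self-contained - a few lines of SQL answer "revenue per customer, highest first":

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE orders (customer TEXT, amount REAL);
        INSERT INTO orders VALUES
            ('acme', 120.0), ('acme', 80.0), ('globex', 45.5);
    """)

    # Revenue per customer, highest first.
    for customer, total in conn.execute("""
            SELECT customer, SUM(amount) AS total
            FROM orders
            GROUP BY customer
            ORDER BY total DESC"""):
        print(customer, total)
    # acme 200.0
    # globex 45.5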
SQL is fine as tweet-sized selects.
I developed my mental block deliberately after working for a company that had about a million lines of business logic implemented in thousand-line SQL stored procedures.
Now I put as many layers as possible between sql and myself.
Well, wait until you have to maintain a system where someone has "reinvented the SQL/database wheel" with a "this is gonna be so awesome" custom ORM, complete with totally re-invented referential integrity enforcement...
Software will always be technical even when it becomes drag and drop. It will lower the barrier but there will always be a place for people who understand the technical intricacies underlying the interface.
This is my fear: as developers become more productive, more of a typical programmer's job will be non-programming tasks. The in-demand programmers will effectively be more like air-traffic controllers whose job is to just keep track of what needs to get done.
Actually the cloud does just magically design and run a backend for your application. This is what Etsy, Ebay, Amazon Marketplace, Alibaba, and the smaller players in this space really do - they provide no-code solutions for people who want to sell goods and services and don't care about web technology.
This has been happening for decades now. Even in 2000 you could pay a hosting company $not-much to give you a basic templated site hooked into a payment server. It didn't work all that well, but it worked well enough to provide the commodity service most small business owners wanted.
I still see people saying "You can't automate this" - when magic AI automation isn't even needed to do the job and the job is already being done.
Of course this kind of no-code won't build you a complete startup. But how often do you really need a complete bespoke startup? For a lot of business ideas a no-code service with some simple customisation and a very basic content engine is all that's needed.
You do not need docker etc for any of this. Or at least, you don't need to deal with docker personally for any of this - just as you don't need to deal with your web host's VM technology.
So while I don't completely agree with OP, I think it's astoundingly naive to believe that the current level of hyper-complexity cannot possibly be shaken out.
In fact current stack traditions are almost comically vulnerable to disruption - maybe not this year, but I would be amazed if the landscape hasn't started to change within ten years.
I think it's difficult to say how many merchants went from hosting their own e-commerce site, built for them from scratch by engineers, to Etsy, eBay, etc., laying off the developers they had hired in the process. Without numbers to back myself up, I would say that there are certainly many more developers and engineers working on e-commerce today than ten or twenty years ago. Services like Stripe certainly help businesses focus less on setting up common parts of a website or online business, but that just leaves people more time to focus on the "business logic" that is unique to them.
The "current stack" may certainly be ripe for disruption. But I'd predict that rather than put developers out of work, it will simply bring even more businesses into the fold who may not have had the resources for developing their own solutions beforehand. There will always be companies with the resources to demand custom solutions to fit their particular business needs.
>> it will simply bring even more businesses into the fold
When we look at various platforms, we see that big business and startups are extracting all of the repeatable, low-risk tasks of most businesses (supply chain, customer service (bots), manufacturing (on demand), design (partially automated design services), etc.), leaving businesses to do mostly marketing and betting on products/services, and getting less of the rewards.
So what we end up seeing is either fewer small businesses (I think the Kauffman institute showed stats about that), or tiny companies with almost everything outsourced - and tiny companies usually require little custom internal software (they often use their suppliers' IT systems).
I think this is covered by the GP's point about history. Each era in the history of software has automated many things that would have required lower-level custom development before. But this has never resulted in less demand for software. Rather, people have always upgraded their expectations and demanded even more powerful software. It sounds to me like you're saying the demand will go down, but I doubt it because that's never happened before.
Of course if there were some breakthrough on the supply side and we could automate the software dev process itself, which I guess is what the article is saying, that would change everything. But that's beyond a silver bullet, that's a silver spaceship. So I doubt that too, and the GP's right to point out that every generation has had its version of this prediction also.
> But how often do you really need a complete bespoke startup?
You don't always. But if you can identify software deficiencies and fix them, that is an advantage. You don't even have to be a "startup". I work for a company that has opened a wide variety of "lifestyle" businesses with the angle of "we can build simple software targeted towards our problem that makes us run more efficiently than the competitors". And it has worked pretty well, at least for the past 20 years or so.
But you need to include tech in the high level decision making process. Which means you need at least one person competent in both business and technology so that you can properly weigh business needs vs technical difficulty.
With HIPAA, PII, and other regulations I'm not so sure that no-code solutions are the future. There is a lot of nuance in what businesses want. Plugins to WordPress may be an intermediate example, though very quickly one is approaching programming by configuration, theming, or assortment of plugins. And Darwin help you if things go sideways.
Is it possible to outsource liability though? For example, if a hospital chooses a vendor without even looking for HIPAA compliance, can they really claim they're not liable when their use of the vendor's service runs afoul of the rules?
Smart vendors will learn what their customers' (the hospital's) liabilities are, handle them for them, and charge money for it. (To go a step further, the vendor could offer insurance on it, or make it part of the sales contract.)
Stripe does this for PCI. You sign up, use their toolkit, and then PCI is just handled for you. There are some no-code solutions using Stripe as the backend. That may not be a legal transfer of liability, but it's a level of exposure that the lawyers are comfortable with.
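For anyone wondering why that shrinks the PCI burden, here's a rough sketch of the shape of such an integration (placeholder key and amount, not a complete flow): the backend only creates a PaymentIntent and hands its client_secret to the browser, and Stripe.js collects the raw card number client-side, so card data never touches your servers.

    # Rough sketch, not a full integration: the backend never sees card numbers.
    import stripe

    stripe.api_key = "sk_test_..."  # placeholder secret key

    intent = stripe.PaymentIntent.create(
        amount=1999,      # smallest currency unit, i.e. $19.99
        currency="usd",
    )

    # Hand this to the front end; Stripe.js confirms the payment with the card
    # details in the browser, so the card data bypasses your infrastructure.
    print(intent.client_secret)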
Also importantly: HIPAA is not PCI, and not all regulations are created equal. Clicking a few buttons to set up a website, and then clicking a few more in order to accept money and take credit cards, is a far cry from setting up the IT infrastructure for an entire hospital.
Which is why I doubt no-code solutions will prosper: needs and regulations vary so much that they'll either fragment into many different solutions or turn into monsters to configure.
> The idea that we will have some abstraction that will someday (in the foreseeable future) save us from all of this difficult work sounds very far-fetched to me, and I can’t imagine how that would work.
The problem here is that word "all". It's never going to be easy to do everything. Some part will be hard. That's where the value lies, and that's what your best people focus on. But everything else will be abstracted away. It's already happened. 30 years ago making a GUI was hard, but VB changed that. Then making a web app was hard, but PHP changed that. Then app layout was hard, and Bootstrap changed that. Then ML was hard, and Torch changed that. Every hard problem gets a 90% working solution that's more than good enough for most companies. There'll always be a few companies that pay people to work in the last 10%, so the problem never really disappears, but fewer and fewer people work on it.
The key to keeping growth going in tech is to keep finding new problems, not to keep everyone working on the same old problems.
Even with just the advances in better programming languages (and newer versions of old languages) and better IDEs we have achieved tremendous productivity increases in the last 30 years. It's just that so far the amount of work has grown to absorb the added productivity.
There are some parallels to induced demand in road construction: when you build a new road to ease traffic, traffic increases to use up that capacity. But that isn't a sign that demand is infinite, it's just that demand is limited by the available resources. If you keep building roads, at some point they will become emptier. Similarly, at some point development productivity will outpace demand, and we will start optimizing our own jobs away.
I'm not convinced. I think new abstractions beget new abstractions. There's so much left to explore in software. Imagine being in the first century of the printing press and imagining that the press is going to put all these poor monks out of business or that there's not much left to explore with writing. Speaking of writing, how much has our word processing technology succeeded in making authors obsolete?
I don't think word processors are the right analogy.
In your example programmers are the monks or the press makers. At some point we're not needed any more (at least at the same scale) since word processors have already been built.
Printers are actually a great analogy. Printing presses became more sophisticated, but the printing business grew even further, guaranteeing lots of jobs in the printing industry. At some point, though, we reached peak demand, and presses continued requiring fewer and fewer workers. Today there are still people manning the presses of publishing houses and newspapers, but in 200 years of improvement we made the job a much smaller niche.
I worked in the print industry when I was younger. The increase in posters / billboards / custom cardboard standing displays actually created so many more print jobs. From PhD book binding to custom business print jobs to on-demand online printing, there are so many more things we are printing now.
There are more newspapers but they are all owned by larger players which means different types of machines and parts.
A better example might have been blacksmiths. Although the number of people making cars is a larger group.
That's too narrow a group. What about 3-D printing?
If you read the whole article it shows a path to new jobs...
Digital printing has become the fastest growing industry segment as printers embrace this technology. Most commercial printers now do some form of digital printing.
The 3D printing revolution implies a fusion of manufacturing and printing. This just underscores my point though: abstractions beget new abstractions. 3D printing is an entirely new category that is just starting to come into maturity and reach mass adoption. Who knows what the implications of that are? It could cause a boom in custom-made, limited-run products. It could help to end our reliance on China for mass production. It's not obvious to me it will lead to fewer jobs in manufacturing.
Demand for the things we want _today_ will be met. But progress leads to new demands, and they are more complicated.
Anthropologists estimate that the work week was at 20 hrs at the end of the Stone Age. We have been inventing new problems in the vacuum created by our successes for, literally, millennia.
Most of the stuff there would be just... normal now. It's quite unusual for SPAs to have a decent consistent UX. And the slowness would never have been tolerated back in the day.
I think that's mostly it. Design has a much larger role, and form-oriented development with common controls doesn't cut it. You couldn't imagine an app like Facebook in a forms-style UI, it's almost ludicrous to imagine.
Looked at retrospectively, forms were just one step above green screen applications on a terminal, transplanting one set of structural idioms to another, like for like.
If forms are essentially terminal apps, is FB much different from a teletype news service, with hyper-filtered content and an infinite set of data sources?
I see massive sea change in connectivity and immersiveness of today, but not really in what we're trying to achieve.
or MS Access, or Hypercard - we've had productivity boosters, just never anything remotely close to eliminating the inherently hard act of "building software".
The GUI was doing local work, but the database could be remote. Delphi's name is even a pun on Oracle.
I'm not saying they were halcyon days. I'm saying that the effort to do things is not necessarily less these days, in part because we have different expectations (not necessarily requirements) today.
In 2000, using VB, you could build a GUI and make a working Windows application with a minimal amount of code. Also, the documentation (MSDN) and the community were really nice.
Twenty years on, MS Access has been one of the quickest, lowest-code ways to build a functional application. What a mess everything is these days in comparison.
If you wanted to build a data entry / collection app with basic validation, querying, filtering etc. you could easily do that with no code in 2000-era Delphi.
> Every hard problem gets a 90% working solution that's more than good enough for most companies.
I very much agree with your comment, but allow me a little nitpicking. Solutions aren't 90%, more like 50% or 20% or whatever. It may sound absurd to discuss a number there, since it's more like a way of speaking; I just wanted to add that for most problems the solution is barely better than the default option.
In other words, there's still a lot of room for improvement, huge actually but, as you say, it might come in small pieces.
> Even the example of hosting complexity being replaced by cloud companies seems kind of silly to me. Maybe that’s saving very small companies a sizable fraction of their engineering resources, but I really doubt it for medium or larger companies.
This might even lead to an _increase_ in demand for software engineering, since now small companies can write their own custom software more cheaply and reliably. It's called the Jevons paradox.
"In economics, the Jevons Paradox occurs when technological progress or government policy increases the efficiency with which a resource is used, but the rate of consumption of that resource rises due to increasing demand."
Only tangentially related to the thread: I'm struggling to think of how government policy might increase the efficiency with which a resource is used, other than by not existing in the first place.
So, an ask: any historical examples where government policy other than deregulation has increased the efficiency with which a resource is used?
Here in Sweden, government policies have enabled the larger cities to be optical fiber-wired with common infrastructure so multiple companies don't have to roll out their own, not only that, the larger program is to enable a completely connected Sweden [0].
Government policies are enabling better efficiency of optical fiber infrastructure usage, without requiring multiple vendors to do the most expensive and least rewarding part of servicing internet: digging trenches for wires.
That's a good example -- and another "coordination problem" at that, which is one of the types of problems where appropriate government action may be the most efficient solution.
Addendum: I'm seeing a common theme in the responses.
When there's a coordination problem, but the equilibrium state is unsustainable (such as overfishing) or lower-value (imagine competing electric grids with different voltages and frequencies), then government regulation can be useful, either by imposing uniform costs and/or by defining a common standard.
There is the issue of avoiding regulatory capture, but I suppose that's for another time. :)
> So, an ask: any historical examples where government policy other than deregulation has increased the efficiency with which a resource is used?
The EU banned selling incandescent light bulbs, for one example. Which increased demand for LEDs, lowered their prices, and made people switch much faster.
Almost all countries have legislation that mandates that fuel usage of passenger cars has to be at most X liters per 100 km. Or at least there's an incentive system with taxes and other bills.
There are minimal standards for thermal insulation of houses.
If you call clean air and clean water a resource then most environmental regulation count.
It's very common actually - it happens every time there's a tragedy of commons and government regulates it.
> Almost all countries have legislation that mandates that fuel usage of passenger cars has to be at most X liters per 100 km. Or at least there's an incentive system with taxes and other bills.
I'm not sure that's a great example. At least in the US, adoption of more fuel-efficient cars -- and the ascent of the Japanese motor industry -- started from the 1973 Oil Crisis, whereby oil prices skyrocketed due to a drop in supply.
American automakers had been shipping gas-guzzling land-yachts for years, but pricing changes drove consumers to buy fuel-efficient Japanese cars, where they stayed because Honda had invested in "customer service" and "building reliable cars that worked", whereas Chevrolet's R&D budget was divided between tail fins and finding new buckets of sand into which GM and UAW management could plunge their heads to pretend the rest of the planet didn't exist (to be fair, they're still really good at that).
> TBH American cars are still crazy inefficient from my (European) perspective :)
Well, we can't all have Volkswagen do our emissions testing. :)
Why would you say "crazy inefficient"? I don't think that, say, a VW 1.8L is, practically speaking, any more or less efficient than a Ford or Toyota 1.8L. A Ford Focus gets comparable gas mileage to, say, a Golf or a Mazda3.
The Golf has a better interior, but will also fall apart much sooner -- VW in the US has a shockingly bad reputation for reliability and customer service. Which sucks, because I really prefer VW's design language to pretty much any other brand.
You might on average drive smaller cars in the cities, but that's more of a preference issue than an efficiency one.
One notable example I can think of is accessibility services.
In the US, public transit must accommodate the disabled, and for some types of trips or some types of disabilities there is a totally parallel transit system that involves specialized vehicles, operators, dispatchers to efficiently route vehicles, etc. It's also a massive PITA from the rider's POV, since you have to dial a call center to schedule a day in advance and you get a time window in which the driver will show up. This system dates from the '80s, before the Internet and before taxis were mandated to be accessible.
New York City tried a pilot program in which this system was replaced by subsidizing rideshare rides, since in the 21st century all taxis are required to have accommodations for the disabled anyways and you can leverage a well-tested system of ordering rides instantly and a large fleet of vehicles. While this did reduce per trip costs from $69 to $39, the increased convenience caused ridership to also skyrocket, so it ended up being a net drain on finances. [1] http://archive.is/N3DjJ
Also, scammers using VOIP (plus extremely sensitive ADA rules around treating disabled people nicely and never doubting people who claimed disability) ruined the deaf-serving text-to-telephone gateway. Fortunately that problem was mostly solved by the Internet mostly killing voice phone.
Yeah, basically you would be looking for a government policy that would be making something cheaper, but also so wildly convenient that it ends up increasing usage faster than the savings.
Another example is the expansion of highways; if highways are free, expanding them to relieve traffic will generally cause car travel to go up as more trips become tolerable, and then the highway will be as congested as it was before. https://www.vox.com/2014/10/23/6994159/traffic-roads-induced...
Consolidation of subway systems in London. Standardisation of rail gauges, screw threading, electrical outlets, phone networks. Basically standardisation of everything that just works and you don't notice.
Could go on... money, power grid, air traffic control, waste collection and disposal.
Health insurance is a great example - a single payer system has much more bargaining power than everyone trying to negotiate for medical care at a moment when they'll die without it.
Of course, such a system is less efficient at extracting value from consumers, so I suppose your question requires an assumption as to whom a system is efficient for.
> Health insurance is a great example - a single payer system has much more bargaining power than everyone trying to negotiate for medical care at a moment when they'll die without it.
Also not sure that's the best example.
Singapore, Japan, Germany, Switzerland... all of those are multi-payer, but tightly regulated (which imposes equal costs across all actors, so that's coordination once again).
And I'd have to dig out the article, but I believe the above model (Bismarck) is better at controlling costs, and produces more positive outcomes as well.
The US healthcare system is a mess for a lot of reasons.
Healthcare being tied to employment is probably the biggest.
Maybe the second is a lack of any sort of common healthcare market? You can't just take "any insurance" and go to "any doctor"; instead, you have to navigate a maze of in-and-out-of-network relationships. It's like scheduling an appointment with the Mafia: "My cousin's dog-sitter's best friend's uncle's pool-boy Vinny knows a guy that can take care of your headache."
The adversarial relationship between insurers, patients, and care providers is also a problem. Insurers work very hard to screw hospitals and patients, so hospitals have insane overhead costs to fight against the insurers, and patients... oh god, don't get me started there.
Regulatory capture also plays in. And there's more, but yeah, it's a mess.
Fair enough, I mentioned single payer because that's the system I'm familiar with. The 'adversarial' relationship between insurers, hospitals, and patients is precisely the kind of market competition that theoretically leads to the best outcome, though. GP's ask was simply about examples where regulation leads to more efficiency, and it sounds like Bismarck and single payer are both more regulated and more efficient (again, from the patient's perspective).
Health care. There, regulation makes it more accessible to more people, improves quality, and drives costs down. Deregulated health care systems are less reliable and more expensive. People who can afford it will pay anyway.
Public transport. It benefits society as a whole when people are able to move around, and if they can do so without causing massive traffic jams. Regulation, keeping prices low, and ensuring that even remote areas are reachable, make it attractive to use and will make it more usable to more people.
Labour in general; shorter work weeks and improved working conditions have improved productivity.
Government policy to improve energy efficiency (government grants to improve factory production efficiency) can lead to increase in total energy use as the factory is more profitable with better efficiency.
EU does have programs to improve efficiency in this manner.
I suppose that would make sense if the government was solving a coordination problem?
E.g., no manufacturer will install Oliver's Optimizer, which promises a lifetime 10% savings in energy use, because it would force them to shut down operations for a month while the optimizer is installed, and put them at a disadvantage compared to other manufacturers.
By requiring the Optimizer (or equivalent) as a licensing requirement for factory operation, all manufacturers share the same burden, and thus suffer no relative disadvantage.
Is that the general idea? I'd be worried about regulatory capture in this case -- e.g., Oliver lobbying to force the market to install his Optimizer -- but that's an entirely different discussion. :)
I'd say, yes. You've correctly noticed in this subthread that government regulation is a solution to coordination problems. All kinds of situations that pattern-match to "it would be better if everyone were doing X, but X comes with some up-front costs, so whoever tries doing X first gets outcompeted by the rest" are unsolvable by the market (especially when coupled with "if everyone else is doing X, stopping doing X will save you money"); the important role of a government is then to force everyone to start doing that X at the same time and prevent them from backtracking.
To the extent you can imagine the market as a gradient descent optimization, coordination problems are where it gets stuck in a local minimum. A government intervention usually makes that local minimum stop being a minimum, thus giving the market a necessary shove to continue its gradient descent elsewhere.
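To make the analogy concrete, here's a toy numerical sketch - not an economic model, and the cost function and the size of the "shove" are arbitrary: plain gradient descent settles into the shallow basin it starts in, and a one-off displacement lets it continue down to the deeper minimum.

    # f has two minima: a shallow one near x = 1.13 and a deeper one near
    # x = -1.30. Descent from x = 1.5 gets stuck in the shallow basin; the
    # "shove" displaces it so descent can continue to the deeper minimum.
    def f(x):
        return x**4 - 3 * x**2 + x

    def grad(x):
        return 4 * x**3 - 6 * x + 1

    def descend(x, steps=2000, lr=0.01):
        for _ in range(steps):
            x -= lr * grad(x)
        return x

    stuck = descend(1.5)            # settles into the shallow local minimum
    shoved = descend(stuck - 2.5)   # the intervention: push it out of that basin
    print(f"without intervention: x = {stuck:.2f}, f(x) = {f(stuck):.2f}")
    print(f"with the shove:       x = {shoved:.2f}, f(x) = {f(shoved):.2f}")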
> To the extent you can imagine the market as a gradient descent optimization, coordination problems are where it gets stuck in a local minimum.
I think this is a very appropriate analogy.
A thought: the cost function that the market minimizes is only a proxy for the various cost functions that we (humans) actually care about. I wonder how much (if any) “government inefficiency” is due to the mismatch between the market cost function and these other cost functions.
I don't know about inefficiency within the government, but I think most regulation of markets happens because of it. As you've noticed, the market's cost function is only an approximation of what we care about in aggregate. Regulation adds constraints and tweaks coefficients to keep the two goal functions aligned as much as possible. Which is hard, not least because we can't fully articulate what we care about, either individually or in aggregate.
Standardisation generally increases market size which means efficiencies of scale and ability to buy the best stuff from anywhere in the larger market, rather than being stuck with local stuff that works with local standards.
Government isn't always required for standardisation but even when it's industry led, it feels like government because it's cooperative, which means committees, votes, etc.
> any historical examples where government policy other than deregulation has increased the efficiency with which a resource is used?
Not really an example, but any government policy that deals with a tragedy of the commons situation.
Take for example the NW Atlantic cod fishery: "In the summer of 1992, when the Northern Cod biomass fell to 1% of earlier levels [...]" [0] I'm sure that if Canada, the US and Greenland had come together and determined a fishing quota, those fishermen would still have a job today. Instead they were so 'efficient' that there was nothing left for them to catch.
> So, an ask: any historical examples where government policy other than deregulation has increased the efficiency with which a resource is used?
I would say there are examples around. For example, the numerous dams and levees we enjoy. Getting wrecked by a flood is not very efficient. Non-navigable rivers are not efficient.
Jevons was working on fuel consumption. There has been plenty of government regulation that improved the (average) fuel efficiency of machines, even back then when they were steam powered.
your observation is correct, but perhaps not the conclusion? If more people are traveling over a given section of road per hour (as you imply), isn't that more "efficient"?
When I was 10 years old I started coding in Qbasic, a few years later I told my dad I wanted to be a programmer when I grew up, he told me that it would likely be automated soon (as had happened with his industry, electronic engineering) and I'd be struggling to find a job. 23 years later and the demand still seems to be rising.
I'd say we're still quite far from such level of abstraction; but a certain degree of it is already possible as you say... k8s/docker/kafka/glue/databricks/redshift, all of these technologies mesh together "seamlessly", but more problems arise as a result.
And when UML started getting in vogue in the mid 90s a lot of people said that "intelligent code generators" would automate a large amount of programming.
It did not happen the way people predicted, but it has somehow happened in the form of Angular, Ionic, Express, Ruby on Rails and similar frameworks: more and more programming means "writing glue code", be it to glue Machine Learning libraries (yay, ML developer!), HTTP libraries (yay, Web developer!), AMQP/SQL/NoSQL (yay, backend developer!) or even OpenGL/DirectX/SDL (yay, game developer!).
The fact is, as more and more of these abstraction libraries are created, "programming" will go one level of abstraction up, but still need people to do it.
In 2002 the inventor of Microsoft Office (Charles Simonyi) took his $billions and left to create a company to replace programming with an Office-like app. In 2017 the company (Intentional) was acquihired back into MS after failing to generate a profit or popular product.
I think the real change is the rising threshold between commodity software and specialised solutions. When I started my career more than a decade ago, I built handmade static websites and online shops for small and medium shops. Today these are commodity software, easily served by Squarespace/Wix/Shopify etc.
At the same time, when I started, Basecamp was amongst the top SaaS solutions on the planet. Today, its simple form based approach wouldn't cut mustard with consumers accustomed to instant feedback, realtime collaboration and behind the scenes saving.
This is especially apparent in the games industry. Early games like Doom or Wolfenstein were often developed by less than five people. Today's open world titles like AC Odyssey or Cyberpunk 2077 require 100 times as many people.
> The idea that we will have some abstraction that will someday (in the foreseeable future) save us from all of this difficult work sounds very far fetched to me, and I can't imagine how that would work.
We actually can imagine how a natural-language-driven "black box" that translates requirements into code would work: it's called offshore software development. The conclusion that everyone eventually reaches, having experienced varying levels of pain first depending on how quickly they learn, is that writing a spec detailed enough to make that work is as much or more work than just writing the code yourself!
'The premise that we are on the verge of some breakthroughs in software development'
There will be breakthroughs in SW development but as with all breakthroughs no one can exactly tell when they will occur, so let's say within the next 40 years.
The microelectronics industry has largely moved to automated validation. Some of the ideas have already migrated to SW validation, although progress and adoption is slow.
Probably a key idea for automatic SW generation and "no-code" is to realize that a Turing-complete language is not required at all times; in fact, most of the time it is even counterproductive. Too often SW engineers fail to realize that as well.
> You still need people who understand things like docker, kubernetes, endless different database options, sharding, indexing, failover, backup, message queues, etc.
Large companies at significant scale need to know these things. Smaller companies don't need kubernetes, message queues, or anything beyond a simple standard off-the-shelf setup. I'm guessing the author was referring more to small/mid-sized companies that aren't at FAANG-scale and have no need for that complexity.
As a counter-point, I run a one-man tech business and use kubernetes to run some 60+ application and db servers. I don't have time to babysit each application I'm running, and kubernetes is a force multiplier that I rely on heavily.
There is a cost to managing it but even so, without the automation it provides I simply wouldn't have the capacity to do what I do.
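Roughly, the force multiplication looks like this (a sketch with made-up names, written against the kubernetes Python client; in practice this is usually a YAML manifest): I declare the desired state once, and the control plane restarts crashed pods, reschedules them on node failure, and holds the replica count for me.

    from kubernetes import client, config

    config.load_kube_config()  # use the local kubeconfig

    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="example-app"),
        spec=client.V1DeploymentSpec(
            replicas=3,  # the control plane keeps three copies running, whatever happens
            selector=client.V1LabelSelector(match_labels={"app": "example-app"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "example-app"}),
                spec=client.V1PodSpec(
                    containers=[
                        client.V1Container(
                            name="web",
                            image="registry.example.com/example-app:1.0",  # made-up image
                            ports=[client.V1ContainerPort(container_port=8080)],
                        )
                    ]
                ),
            ),
        ),
    )

    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)

Multiply that by 60+ workloads and the babysitting I don't have to do adds up fast.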
I disagree with the author that it will be "no code." But I would also not dismiss how much more productive the cloud and better devtools have made developers. And as much as people, especially on HN, like to pick on bootcamp graduates, it's undeniable that you can get someone with little to no experience building complicated software in a matter of months.
What I think will happen to software engineering is that the middle will shrink. We'll see many more frontend and product engineers, and slightly more infra and systems programmers. I think the fullstack, middleware Rails/Django type engineering will all but disappear (most will move towards product).
>it's undeniable that you can get someone with little to no experience building complicated software in a matter of months.
Yeah, and who do you think comes along and cleans up their mess, extends and maintains that software once your cowboy coders are gone?
We might be more productive, but only to a certain point. The complexity comes when people want to twist and bend the off the shelf solutions in ways they weren't designed for, and when systems become so large and complex that adding just one more feature takes a significant amount of time.
This is what differentiates your low cost bootcamp grads from highly paid software engineers. Experienced engineers aren't just building for today, but for the future.
I don't buy that fullstack devs are going anywhere. The real world is complex, the devil is in the details and the complexity of those details can't simply be chucked into an off-the-shelf solution and be expected to survive. We'll still have to have people that glue all the pieces together, we'll still have to name and compose things, to make modifications and optimisations, to maintain existing products, and we'll need people that push the boundaries of what's been done before and explore the new.
> Yeah, and who do you think comes along and cleans up their mess, extends and maintains that software once your cowboy coders are gone?
Sometimes someone with a CS or SE degree, sometimes someone who learned to program as a hobby while doing something completely unrelated like music, English, bartending or high school, and sometimes the cowboy coders themselves, with more experience. There's an enormous amount of theory in programming which is highly relevant to many, many people, but you can be amazing at CS theory and still write scientific code that's garbage, uncommented spaghetti like the Imperial epidemiology model. At the other end you can have a great grasp of how to write clean, modular, well commented code and have no idea how you would start parsing a text file to extract all nouns or some other introductory undergraduate project for one of the infinitude of topics in CS.
> At the other end you can have a great grasp of how to write clean, modular, well commented code and have no idea how you would start parsing a text file to extract all nouns or some other introductory undergraduate project for one of the infinitude of topics in CS.
How difficult is that to read up on? I do a lot more of the former than the latter, as that is what real-life jobs entail (actually, most of them involve fixing other people's shitty code).
I personally think we have not made any significant progress in 20-30 years with regard to development, and the number of software developers is still growing at a rate where the majority probably has less than a year of experience. So no progress can be made, as the industry never matures.
The kind of optimism displayed in the article reminds me of my early years as a developer ;)
In software engineering languages and tooling, progress is really slow. RAD tools have existed for decades; the OOP and functional programming paradigms have remained largely the same for a very long time... incremental enhancements, but not really a breakthrough.
So true. Believing we will “finish” needing computer programmers because “the work is done” misses the basic physics that governs this process. And naively believing we are finally there is a regularly resurgent myth.
We have been predicting machines would not need programmers since there were programmers. It is true for a _given_ task at a given complexity level. But overall the demands on, and for, a modern programmer have only gotten higher, because the demand of all business and human activity is to offer more than we might have otherwise.
Just wait until, for an app to differentiate itself in business, you have to create intelligent responses in a variety of augmented reality interfaces, correctly predict human behavior, and interface with the physical environment in a routine and nuanced way. And the companies that can do it well are suddenly dominating the ones who do a sloppy job.
I've been around long enough to see these claims over and over again. You and I will be right that this claim is false yet again, but I think each time developers get more productive, they can do more with less time, and at some point developers will be able to do so much with so little time that we will need few of them. So far, though, the need for software has continued to grow even as developer productivity has increased, so there have been no significant employment issues.
I have to admit that as a technical person it's easy to ignore the tools being built for non-technical people. A non-technical person is currently building a WordPress site for me that is better than what I would have thrown together with CSS/HTML.
I think if something like that happens, it will be on the scale of the shift from alchemy to chemistry - some sort of as-yet-unimaginable standardization which changes what is currently an art to something more like a science. I don't expect to see anything of the sort in our lifetimes, barring some very extreme advances in medicine.
Eh, I also think a lot of people overengineer things that are made simple with recent technology and that in fact most companies don't need the best engineers to get their job done.
I think it's totally true that one can leverage new tools to get more work done with less people, especially when it's for a service that doesn't reach scale and what not. Most companies don't need that to be lucrative. But I think the space of problems expands, whether that is more fields valuing tech, feasible complexity increasing in others, or competition just ratcheting up by lowering technical barriers to entry.
It's not going to save us from all of the work, but it does eliminate a lot of redundant work. For sure.
> Even the example of hosting complexity being replaced by cloud companies seems kind of silly to me. Maybe that’s saving very small companies a sizable fraction of their engineering resources, but I really doubt it for medium or larger companies.
I think that hundreds of billions of dollars has been spent moving from localized IT to cloud. Do you really believe that was all a waste of money? For example, most of those medium or large companies had their own operations software backends, and most of it was eaten by clouds services/APIs.
> You still need people who understand things like docker, kubernetes, endless different database options, sharding, indexing, failover, backup, message queues, etc. Even if the pieces are now more integrated and easier to put together, the task of figuring out how the pieces will interact and what pieces you even need is still outrageously complicated.
Docker/K8s is a good example. I spent more than a year building a Docker orchestration/hosting startup (eventually decided not to try to compete as an individual with Amazon). But when I recently needed a reliable way to host a new application and database, I did not have to configure Docker or K8s at all. Why? Because I used AWS Lambda and RDS. Those are examples of software eating software. AWS can handle all of the containers for you if you do it that way.
As far as failover and backup, that was handled by checkboxes in RDS. I did not need a message queue because that was built into the Lambda event injection service.
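To give a feel for how little is left to manage: the whole "backend" can be a handler along these lines (a sketch only; the env var names and the events table are mine, and pymysql has to be bundled with the function or shipped as a Lambda layer), with failover and backups being RDS checkboxes rather than code.

    import json
    import os

    import pymysql  # not in the Lambda runtime; package it with the function or use a layer

    def handler(event, context):
        # Persist whatever event source AWS wires up (SQS, API Gateway, etc.) into RDS.
        conn = pymysql.connect(
            host=os.environ["DB_HOST"],
            user=os.environ["DB_USER"],
            password=os.environ["DB_PASSWORD"],
            database=os.environ["DB_NAME"],
        )
        try:
            with conn.cursor() as cur:
                cur.execute("INSERT INTO events (payload) VALUES (%s)", (json.dumps(event),))
            conn.commit()
        finally:
            conn.close()
        return {"statusCode": 200}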
Just like people put some PHP scripts on fully hosted Apache+MySQL 20 years ago. This was very common and far easier than AWS. (And reliable, too, although not as scalable, but the needs then were different.) The point being that all of this has been here before. Every few years some complexity creeps back in (in exchange for some other benefits), and then it's eliminated again and some progress is made. But work always expands.
Recently I helped a friend who is a teacher with her Excel sheets for grade reports. Pretty well done for Excel, yet it was a terrible user experience. Even with only 1000 users, each working with it for an hour per month, proper custom-made software would have been better and easily economically viable. Even no-code has existed for a long time, but it never fits perfectly.
Similarly, people regularly complain about just glueing components together. As opposed to what? Copying sorting algorithms right out of a CS class? It's a strange idea. Looking at the code I work with, over the years, I find very little glue. True, there are abstractions and sometimes they get in the way. But they are there for a reason. Whether you start out from scratch or use frameworks, much of the application will revolve around business-related data structures.
You can give it a try: take your real-world product, strip out the abstractions, and replace the built-in UI widgets, sorting routines, and hash tables of your language, and maybe that OR mapper and your GraphQL server framework and so on, with your own minimalistic custom code. It won't take that much time and code, and compared to the stuff on top you'll find it's not that much that you actually used in the end. Nothing to glue together anymore.
Not that it makes sense to do this. But the idea that glueing things together has replaced "real" development is very much mistaken.
>As opposed to what? Copying sorting algorithms right out of a CS class?
As opposed to, I'm guessing, adapting Monte-Carlo tree search to Go and inventing AlphaGo. Or what BellKor was doing back in the Netflix Challenge days. Not copying sorting algorithms out of a CS class, but solving a puzzle with a clever new algorithm that just works, and then everything else falls into place.
(Call it "if you write it, they will come" taken to its logical conclusion.)
You might also notice that none of that is actual product development and it will not lead to a product anytime soon (and as far as I know the Netflix Challenge results were not used by Netflix). That's research. It's great if you're working at a research department, or maybe you do it for fun after work. But how is this related, exactly, to the software industry?
The point was that actual software engineering is not really any more or less trivial than it was a few decades ago, and that the glue isn't that much after all.
There was never a time when the daily work of software engineers was, in fact, research. There are plenty of research institutes and universities and even research labs of software companies where you can do this after you've got your degree. Just go there, many of my friends are doing exactly that.
> It's not going to save us from all of the work, but it does eliminate a lot of redundant work.
It's not going to eliminate work in software development; it's going to increase work in developing software systems by increasing the average value of each unit of work and, simultaneously, move the average level of the work higher up the abstraction ladder, just as every advance in software productivity has done since "what if we didn't have to code directly in machine code and instead had software that would assemble machine code from something one step more abstract".
I’m looking to get into the hosting space using containers. Basically combining my consulting business with a hosting one since I’m doing it already for clients anyways.
I’d love to exchange notes with you on the lessons you learned building your system and the challenges you faced.
I’m using rancher/k8s with docker on top of “unreliable” hosts with AWS/GCP/DO/Azure providing “spill over” capacity for when those unreliable cheap hosts prove why they’re unreliable.
Is it possible we could get in touch? You can reach me at hnusername at Google’s mail service. Would love to connect if you’re open!
"I can't imagine how that would work"
-> hearing that kills me inside :(
I think we have 2 options:
OPTION 1)
We've reached a plateau -- software will continue to be developed as it is now, no new abstractions.
OPTION 2)
Mankind will create a better set of tools to:
- reduce the effort needed
- increase the # of people who can participate
in the translation of ideas/requirements -> software.
For everyone's sake [1], I really hope it's the second! :)
As one crazy idea, imagine if you could have a spreadsheet that would let you build software instead of crunch numbers...
... anyway, probably a bad idea, we should stick to our current abstractions and tools :D
[1] Take the above with 2.42 lbs of salt, I'm the founder of
> The premise that we are on the verge of some breakthroughs in software development that will significantly reduce the need for engineers is really weak
Well, there hasn't been one single major breakthrough but rather a lot of small ones that cumulatively mean that software has become easier to write. Most of it is more mundane than new fundamental abstractions, it's more about distributed version control, better bug trackers, better libraries, more accessible documentation and learning materials, and so on. These things allow software to be written more quickly with smaller teams. Even someone writing in a language like C that hasn't changed much in decades will have a far easier time of it in 2020 than it 2000, simply because of the existence of StackOverflow and the progress that has been made in getting compilers to warn about unsafe code.
This is combined with the fact that as more software is written, less software needs to be created to fill some functionality gap. As long as we have computers and people who care to use them, there will always need to be new software written. Most software that people get paid to write is not written for fun or as an intellectual exercise, though; it's written to solve a business need. If that business need can be satisfied with existing software, there's less motivation for a business to write its own.
> The premise that we are on the verge of some breakthroughs in software development that will significantly reduce the need for engineers is really weak
We are on the brink of an economic contraction, which is forcing a rethink of the need for software engineers. The necessary disruption is there. It is economic, not technological.
Yes, there will continue to be a need for software engineers, but business expectations will change as budgets will adjust. I suspect fewer developers will be needed moving forward and those developers will be required to directly own decisions and consequences, which has not been the case in most large corporations.
> In my experience, software engineering is endless, complicated decision making about how something should work and how to make changes without breaking something else rather than the nuts and bolts of programming.
Agreed, but that is not the typical work culture around software development. Thanks to things like Agile and Scrum, developers are often isolated from the tactical decisions that impact how they should execute their work, and for good reason. While some seasoned senior developers are comfortable owning and documenting the complex decisions you speak of, many are not so comfortable and require precise and highly refined guidance to perform any work. This is attributable to a lack of forced mentoring and is mitigated by process.
I think there has been a steady reduction in the IT personnel required to do a lot of things. Need a web page/web store? You buy a standard product for almost no money, and you don't really need anyone to run it for you. 25 years ago that was a several-month project that involved a dozen engineers and had a costly fee attached for after-launch support.
At the same time we’ve come up with a bunch of new stuff which gave those engineers new jobs.
I do see some reduction in office workers due to automation. We still haven't succeeded in getting non-coders to do RPA development for their repetitive tasks, but the tools are getting better and better and our workers are getting more and more tech savvy. In a decade every new hire will have had programming in school, like they have had math today. They may not be experts, but they'll be able to do a lot of the things we still need developers for, while primarily being there to do whatever business work they were hired for.
But I'm not too worried; we moved all of our services to virtual machines a decade ago and are now moving more and more into places like Azure, and it hasn't reduced the need for sysops engineers. If anything it has only increased the requirements for them. In the late '90s you could hire any computer-nerdy kid to operate your servers and you'd probably be alright; today you'll want someone who really knows what they are doing within whatever complex setup you have.
The same will be true for developers to some extent, but I do think we'll continue the trend where you'll need to be actually specialised at something to be really useful. If virtual reality becomes the new smartphone, you'll have two decades of gold rush there, and that's not likely to be the last thing that changes our lives with entirely new tech.
> 25 years ago that was a several-month project that involved a dozen engineers and had a costly fee attached for after-launch support.
25 years ago, yes, but white-label hosted web store things were around in the early noughties. I think there were even a few in the late 90s, but those weren't very good.
Yeah it reminds me of a project at my company that was an attempt to automate certain development processes so that people could ship features without developer involvement. Cool idea! So they built this wonderful system and now there's a team of 6 devs solely dedicated to maintaining it lol.
RPA seems to be the biggest area where this is currently popular. The "citizen developer" bullshit they're pushing sounds good on paper, IMO, but will lead to fragile bots that end up falling apart and not being properly maintained at scale. I can't imagine handing someone with no programming experience UiPath or whatever and having them basically deploy software directly to production. As far as I know there isn't a "code first" approach to this set of problems, but there probably ought to be, as someone who can't write code isn't likely to produce a high enough quality product, even with a dumbed-down drag-and-drop tool, to make it worth it.
In my experience, at small companies you have an endless stream of custom jobs that need quick, unique solutions. You get the same outrageously complicated work, just with the ability to use more duct tape and one-time solutions. The cloud is, with some exceptions for self-contained commodity solutions (such as email), about managing the cost of spinning hardware up or down: pay a premium for what you need right now, rather than investing in what you might need tomorrow with the possibility of guessing wrong.
You make some great points. There's a humanistic quality that can't be replaced. I realized this even more while self-isolating. For example, instead of going on TikTok, I decided to build an entire app from scratch. A few of my friends thought it would be a useless app, one "anyone could make." But if I did make it, it should be with serverless tech, GraphQL, AWS Amplify, etc.
I decided to just use a $10 Digital Ocean server. With stocks so cheap, my goal was to build an automated trader during COVID-19: https://stockgains.io
I initially used Google spreadsheets but it wasn't effective. I spent a week with Docker, learned MySQL 8's new features, and picked up Ruby on Rails 6 for rapid development. There are so many nuances with storage engines, libraries, query and cache optimizations, and UI/UX design that require human thought, experience, and skill. Sometimes plenty of it. Now the beauty of this tool isn't the price difference of a stock before COVID (a robot could do that), but the filters. These filters were created by a human (me) reading over 100 books on trading stocks and writing down quantitative and psychological parameters. And I kept track of what could be "automated" over the years.
I just can't imagine a robot reading all those books and doing the same thing. Not just the design, but just building a vision. There's an art and complexity involved in solving problems.
Similar claims were made in the early 1960s when high level language compilers arrived: "Computers will practically program themselves." "There will be little need for programmers anymore." Every new software technology since then has sometimes triggered similar claims.
It is easier to understand code and its consequences than human language; hypotheses are testable and verifiable. It helps to think of coding as a form of game.
Open source, and Github specifically, can be mined and reused like any other knowledge; pay attention to Microsoft and OpenAI going forward.
It's easier to understand language on a syntactic level, but the program itself is Turing complete and we don't even have a decent automatic verification tool.
Even more simply, if something took a person-year to write, it will take at least one person to maintain in perpetuity as bits rot, especially after the original authors disappear.
Also: whenever significant reductions in complexity are achieved, the result is a more expansive usage of software, not a reduction of engineers to achieve the same result.
I completely disagree. The new tools (Bubble, Zapier, Airtable, Webflow, etc.) make it an order of magnitude easier to create applications (even relatively complex ones!).
contrary to what whiteboard interviews test for, programming is more of an art than science, and computers are generally bad at determining what is good art.
This sounds like it was written by someone who hasn't worked with ERP systems, to give an example of where software will never eat software. "Can we make A work with B?" -- a lot of businesses tie together Salesforce + 8 other systems, and that's how their business ticks. And a whole cottage industry forms around it: consultancies, etc. I need to see a clear non hand-waving explanation for how all of that complexity melts away.
The industry has already tried commoditizing by off-shoring. What we learned was high-performance teams require psychological safety and trust. The human factors involved in creating software are why engineers are not plug-n-play. Because that reduction of the problem doesn't describe how the software is actually made: product solicits customer interviews/data to recommend new features, architects brainstorm a high-level solution, and the IC engineers implement the vision. Human factors, through and through.
> This sounds like it was written by someone who hasn't worked with...
Every broad article I've seen like this speaks about 'software' as if it were a monolithic career path. The lives of web programmers, embedded engineers, AI researchers, ERP programmers (etc., etc.) are all quite different. Most of the articles I've seen on programming/software engineering don't capture the things I've experienced over my 23 years as a programmer.
And then there's the "invisible programmers", the ones who might cross-train as IT technicians, who write internal-only software on an as-needed basis. Need a report? Need a webapp so people can work with a business process database? Need to integrate a CRM into Jira by pulling information out of its backend database nobody has a schema for? Not the kind of stuff they teach you in school, bucko, but someone has to do it.
To me this is a super underrepresented group if you can call it that. Tons and tons of mid market companies have more of these programmers than traditional CS grads.
100% agreed that people are performing higher level functions today that software can't perform.
However, I think your first point about multi-system interconnectivity is ripe for change.
It's been the case that the literal act of running a business has been humans serving as copy-paste bots between systems, both internal and external. Come to think of it, from a purely software point of view, businesses look a lot like giant, multi-system ETL machines, except that the individual steps in the pipeline (Salesforce, SAP, Netsuite, etc.) don't talk to each other. This is even worse when it comes to interactions with other businesses (customers/vendors/partners) - everyone has different systems and none of them talk to each other.
So we fall back to the lowest common denominator - Email + attachments (XLSX, PDF), CSV over FTP etc.
The fundamental problem is not very different from the challenge of human language translations. Getting SAP to talk to Salesforce is a similar class of problem as enabling an English speaker to talk to a Hindi or Mandarin speaker. If the latter is a solvable (solved?) problem, I don't see why software talking to software is that different. There are of course domain specific challenges, like the fact that both systems being translated between require 100% translation accuracy.
We are working on solving this at https://42layers.io. It's early days for us, but this is exactly the problem we are solving.
The way I see it, the fundamental problem is that the producers of these systems don't want them to talk freely to each other. Every vendor wants to control the conversation (and when you see someone calling their product a "platform", you know they want to control all the conversations in a given sector). E-mail is the lowest common denominator that works, because it happened in the age where computer technology was developed to coordinate, not to compete and control.
Conversations between systems is an easy problem, in the same way translating English to Mandarin is easy if both people are also fluent in Hindi - they can round-trip through the shared language. Systems designers can also negotiate a common protocol. It doesn't have to be automated, it can work just as well with some programmers continuously keeping the protocols up to date. The problem is, there's a strong business imperative to not do any of that.
Super underrated comment. What's even worse is that, internally, large companies have small groups that create bespoke solutions and then try to sell those solutions to the rest of the company. I've seen so many "final frameworks" that are going to solve all of the problems that are then sold to all of the other groups, who try to move their stuff onto that framework, but wait now Team X has another framework and oh man which one should we follow? It's just a new version of the fundamental problem of getting diverse human groups with diverse needs to standardize on some solution, with all of the economic incentives you described mixed in. Frankly I think the people who got this problem closest to right were the American Founding Fathers. This is fundamentally a political problem. The best technical solution I've seen proposed is in the talk "Architecture Without an End State," where the speaker talks about how to make smart decisions in a decentralized environment.
Lots of companies trying to build low-code solutions to help business people glue systems together. However, for pretty much all of these solutions, while the end user isn't writing code, they are forced to think like a programmer - if/else, loops etc.
We are taking a very different approach.
We've built a transform engine that can be trained to transform data from a source structure to a destination structure using a few tens of examples of source and destination. We can do this transformation without falling into the trap of figuring out acceptable confidence levels - a trap that most ML systems fall into, and which gives them a hard time with enterprise usage.
We couple that with dynamic, configurable integration infrastructure ("connectors" in old school enterprise speak) that can send+receive data to/from lots of systems over many protocols and serialization systems.
The end result is that end users can connect systems together with a few clicks and by providing a few lines of training examples, not unlike what a business person would give a dev and say: "extract a CSV from SAP and put it in that FTP folder; the CSV needs to look like this file."
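To be clear, this is not our engine, just a toy sketch of the example-driven idea (field names invented): given a handful of source/destination pairs, infer which source field feeds each destination field, then apply that mapping to new records.

    def learn_field_map(examples):
        """Infer a destination-field -> source-field mapping from example pairs."""
        mapping = {}
        for dst in examples[0][1]:
            candidates = None
            for src_record, dst_record in examples:
                # Source fields whose value matches this destination value.
                matches = {k for k, v in src_record.items() if v == dst_record[dst]}
                candidates = matches if candidates is None else candidates & matches
            if candidates:
                mapping[dst] = sorted(candidates)[0]
        return mapping

    def transform(record, mapping):
        return {dst: record[src] for dst, src in mapping.items()}

    examples = [
        ({"AccountName": "Acme", "KUNNR": "0001"}, {"customer": "Acme", "sap_id": "0001"}),
        ({"AccountName": "Initech", "KUNNR": "0002"}, {"customer": "Initech", "sap_id": "0002"}),
    ]
    field_map = learn_field_map(examples)
    print(transform({"AccountName": "Globex", "KUNNR": "0003"}, field_map))
    # {'customer': 'Globex', 'sap_id': '0003'}

The real work is everything this toy skips: type coercion, composite fields, ambiguity, and doing it reliably enough that nobody has to reason about confidence thresholds.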
That is the failure of every ORM and visual programming tool I have seen.
(But I happen to think that being forced to think like a programmer is good - on the order of being forced to think like a literate person would have been a few hundred years ago.)
In your specific Salesforce scenario, they built Salesforce to stop people from coding CRM systems, and then people continue to make money building connected abstractions/apps on top of other systems that are Salesforce compatible, so you get an app ecosystem that abstracts away all of these integrations. The result is less code.
I agree that there is a lot of complexity, specifically for "mission critical" or "last mile" systems, that will not be addressed by the mainstream abstractions for many organizations, but I don't think Salesforce is necessarily the best example. I see the author's hypothesis freeing up time to do a lot more things within organizations that are otherwise on the back burner because you can't get to that feature set, and/or pivoting to solve either a) complex problems that are not yet solved or b) specializing in a layer that is now "platform". Somebody builds AWS, and Azure, and GCP. Somebody has to create, build and maintain the next platform/abstraction too.
I think your fallacy is in the "less code" assumption when you say "The result is less code". I'd argue that empirically we've seen this to be false. The result isn't less code, at least in a global sense; it's more productivity, more features, more customization, and more specificity, with less code per feature. Software has really interesting economics where, as the cost/feature decreases by a factor, say 1x, the set of features that can be profitably worked on expands by something like 10x, so, paradoxically, as cost/feature decreases, it makes sense to hire more engineers and expand R&D budgets.
I think ultimately, the question is whether this trend will result in "fewer programmers needed", which is the most important by-product of "less code" in the author's thesis.
Did we slash R&D budgets once we standardized on the x86 instruction set, thus needing fewer compiler devs? Did we slash R&D budgets when we moved from on-prem to cloud hosting? We have seen this happen many times before; we know the economics. Decreasing cost/feature is synonymous with increasing productivity. We know that a 1x increase in productivity results in a very large increase in the number of features that become feasible.
There isn’t some fixed factor here that causes it all to collapse. Productivity increases are plowed into growing the market 10x and building the business, not reducing eng budgets. At some point in the future this will slow down, but that is so far from happening, like many decades from now, maybe never in a non theoretical sense.
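A back-of-the-envelope illustration of that economics, with a made-up value curve: rank candidate features by business value and count how many clear the cost bar as the cost per feature drops.

    # Assume the n-th most valuable candidate feature is worth 1000/n dollars.
    # The exact multiplier depends entirely on how fat the long tail is; the
    # point is only that a lower cost/feature expands the profitable set.
    def feasible_features(cost_per_feature, n_max=100_000):
        return sum(1 for n in range(1, n_max + 1) if 1000 / n >= cost_per_feature)

    for cost in (10, 5, 1):
        print(cost, feasible_features(cost))
    # 10 -> 100 features clear the bar
    # 5  -> 200
    # 1  -> 1000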
No, it's more code with a greater value:code ratio. It's lower code for the same delivered value, but no one stops at the same delivered value that they'd have without whatever tipped that ratio, because the incremental value for the next unit of code is higher.
Increasing the value delivered per unit of code increases the volume of code purchased.
Maybe, but that code tends to be extremely bad quality, because it is always written by "consultants" who know just enough programming to be dangerous and do the bare minimum to get the integration to work, without any concern for or the ability to follow software engineering best practices. And that introduces its own costs.
I think no-code is different from offshoring. With offshoring you needed a domain expert, some kind of software architect to map out the features, and the offshored team; with no-code you usually need only the domain expert.
>This sounds like it was written by someone who hasn't worked with ERP systems, to give an example of where software will never eat software.
Software has already been eating software. Imagine building something like Salesforce or an ERP system using only Assembly. Just as programming languages like Java became an abstraction level over Assembly and simplified the development of complex systems, something else will emerge (or is already emerging) as a higher level abstraction and will enable creating even more complex systems.
>The industry has already tried commoditizing by off-shoring.
Offshoring doesn't create a new abstraction level.
The problem is that while it is economical to hire a team of developers to automate a simple process executed by a lot of people, there is a huge amount of complex processes executed by a small number of people in companies. And you are never going to be able to justify a team of 3 developers or more to automate and maintain the code of the job of just one guy.
Machine learning won't be the answer either. Machine learning is just another kind of software, you still need to set it up and maintain it. And you need data to train it, which for these complex processes often doesn't exist.
The solution really is for non-developers to write code to automate their tasks themselves. Simple code with a simple platform to run it is the only solution here. But we are heading in the opposite direction. Newer generations are becoming increasingly remote from how computers work (teenagers seem to be struggling even with a file system), platforms are increasingly becoming locked down (both consumer and corporate environment). And I dispute the claim that software is getting easier. I mostly work in a .NET environment, and I think the platform is becoming increasingly messy and complicated; we are moving away from simple things. Same with technologies: every time I go back into the Azure portal website, I feel lost in the hundreds of products with evasive names.
What we need is the power of the almost "draggy and droppy" features of VBA, something end users can play with. It is shocking how much office processes rely on such an antiquated and neglected technology.
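To make it concrete, the kind of end-user automation I mean is a dozen lines, not a software project (file and column names made up): cross-check an export from the ordering system against one from the invoicing system and list the orders that were never invoiced.

    import csv

    # Read the order IDs from each export.
    with open("orders.csv", newline="") as f:
        order_ids = {row["order_id"] for row in csv.DictReader(f)}

    with open("invoices.csv", newline="") as f:
        invoiced_ids = {row["order_id"] for row in csv.DictReader(f)}

    # Anything ordered but never invoiced gets flagged for a human to chase.
    for order_id in sorted(order_ids - invoiced_ids):
        print(f"Order {order_id} has no invoice")

That is the level of thing VBA has quietly been carrying for office processes, and it is exactly what gets harder as platforms lock down.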
>Newer generations are becoming increasingly remote from how computers work (teenagers seem to be struggling even with a file system), platforms are increasingly becoming locked down (both consumer and corporate environment).
Of course teenagers struggle with a file system. Your iPad/iPhone/Android App shields the user from any and all meaningful interaction with the OS.
Many families don't even have a PC at home anymore. So there's no chance for them to gain this experience.
This is what's kinda crazy to me. I'll be 26 in a few months, and I was by no means raised doing the technical PC tasks that many of my older peers did. The first computer I remember using was an old Windows 95 desktop my mom got for doing her homework in college (teen mom).
I have a brother who is 15 and he doesn't know how to use a computer more than using youtube and facebook. And I constantly hear things from my parents about viruses and sketchy stuff ending up on the family laptop. Granted, not all of that is him or my other siblings, but it seems a lot of kids are missing a sort of digital literacy that many in my age group grew up with. I somehow know what a sketchy download button looks like. He has no idea.
"It said download so I clicked it" is often a response I hear.
What's more frustrating though is that my brother is not a great student. He was adopted and is getting to the age where he's starting to act out and I totally understand why. He's disillusioned with his own education and can't be bothered to care. For someone in his situation, digital literacy could give him access to a good job and a healthy adult life by learning to program, and I could help and mentor him along the way, but I know already it's going to be hard to convince him to take it seriously. I've hinted at it but I've only gotten sideways glances that scream "yeah right, I can't do that."
I'm not saying every kid needs to learn to be programmers, but we've abstracted so much technical learning away from them that it seems they're less prepared for a digital world, despite growing up surrounded by technology. Even the kids who are into tech stuff are being pushed into commoditized silos. Eg. Minecraft, etc.
Minecraft isn't a good example of your point; it's a better gateway to software literacy than most of the games kids of your generation were playing: from the logic circuits you can build inside the game, to learning at least how filesystems work by installing mods, and basic web server knowledge when you want a private environment to play with friends. Add some permanent or at least seriously annoying consequences for player mistakes and I think it might be the best video game for kids to play now (and Factorio/Shenzhen I/O when they grow up :).
Seconding this; Minecraft was absolutely my door in to server management, Java for building my own mods, and creative collaboration in an online world. Many fond memories, and it's hard to imagine I would have the career I do today without that game.
Same - trying to free enough extended ram to run various games in DOS on a hand me down 286 was my first experience in troubleshooting and configuring an OS
> The solution really is for non-developers to write code to automate their tasks themselves.
Most of my consulting customers can't even express the problem they are trying to solve. They struggle to decompose the requirement to its constituent parts.
The few that can do that could easily become programmers.
I've used a few "draggy and droppy" tools and they can make a programmer more productive in certain domains but they can't turn a non-programmer into a programmer.
> Machine learning won't be the answer either. Machine learning is just another kind of software, you still need to set it up and maintain it. And you need data to train it, which for these complex processes often doesn't exist.
Machine learning will learn to set up, maintain, and train itself. /s
I understand the desire to have non-developers write code for themselves, but the problem is that the quality and reliability of that code can be utterly terrible and they don't have the expertise for the edge cases, so there would still need to be at least an intermediate developer overseeing these 20 part-time very junior people simply because some of those processes would eventually go haywire and do something dangerous or destroy some data.
> I understand the desire to have non-developers write code for themselves, but the problem is that the quality and reliability of that code can be utterly terrible and they don't have the expertise for the edge cases,
People should write code for themselves, and not all code needs to be of good quality or have all edge cases covered. There's nothing wrong with someone making a tool for their job and handling the edge cases as they occur.
Microsoft Access could be a better tool for a lot of those spreadsheets, I might wager! It would teach user interface design, simple relational database concepts, data types (!) and more. I really wish MS hadn't turned this product into a dead end, seems like a lost opportunity to give aspiring devs a path to learning more capable systems.
ML can't learn to understand the business needs of a company, that's something that even external human developers are always struggling with, and it can affect usefulness of application much more than occasional bugs. On the other hand non-developer employees might not be able to produce optimized code and they'll make rookie mistakes, but they know how their company works, they understand the business needs and processes.
IMHO the win-win approach would be to have apps designed by employees with drag&dropping UI and wizards for setting the logic rules, and then to let ML analyze it and generate the high quality code based on that.
> The problem is that while it is economical to hire a team of developers to automate a simple process executed by a lot of people, there is a huge amount of complex processes executed by a small number of people in companies
What I experience is that small companies get merged into a bigger company when they are no longer competitive with companies that have automated processes.
I don't buy the argument that we will have such a leap in software development productivity that we need far fewer people to solve all the things that need technical solutions. You can unravel the abstractions we build on all the way down to the bits and bytes, and 90% of software is just gluing libraries together. In fact, you could probably describe the entirety of several big tech cos as 'just' gluing libraries together.
Anyway, the other argument about salaries is more interesting. Most people seem to agree that there's a huge untapped crowd of qualified developers in small / mid sized US cities who would love to join $BIGCO but the only reason they aren't is because it involves relocation. As an example, a Sr Dev in Orlando, FL makes $100-120k in total comp while one in SF / NYC makes $350k+. I limit my search to Sr Devs because I assume college kids are happy to move to exciting cities like NYC / SF / Seattle on fat relocation checks.
My suspicion is that supply and demand have converged already and big tech has mined out the supply of talented devs in the US already. The other datapoint here is that companies have made it as easy as possible for folks to move by opening dev centers wherever there's talent - NYC as a tech hub wasn't a thing in 2012, but it's huge now for all the people who don't want to leave the east coast. Boston is pretty big. Colorado, Austin as well.
The only way supply of devs is increasing here is if:
* Sr Devs who did not move to tech hubs because they preferred to stay where they are. (Personally think this is unlikely)
* Qualified Bootcamp graduates
* CS Enrollments hitting pretty high numbers, so maybe we'll start graduating lots of CS folks.
* Immigration reform / Outsourcing
* Interviewing change so we skip the algo problem solving shenanigans.
I personally think if big tech wants to hire in the US and still pay lower $ than they currently do, the only lever they have left to pull is the interviewing format / bar.
>Sr Devs who did not move to tech hubs because they preferred to stay where they are. (Personally think this is unlikely)
You would be surprised at how many people value their hometown or where they have settled. Technical only equates to high aspiration in SF. There are smaller slower more steady tech companies (probably using the Microsoft stack) outside of the tech hubs that offer stable jobs with decent pay and good work life balance. Being a software engineer in SF means constantly learning new tech and 'keeping up' but if you're not building a massively scalable consumer facing product that doesn't matter so much. In SF even B2B SaaS is built like this but it doesn't have to be.
I think you're confusing 'competitive' with 'skilled'. They are correlated but not the same thing. The best of the best who want to be the best relocate. The best of the best who don't care to be the best stay where they are. People in the second category are driven by an intense interest in their subject and their work rather than competition.
> Being a software engineer in SF means constantly learning new tech and 'keeping up'
No, it does not.
> You would be surprised at how many people value their hometown or where they have settled.
I would love to see actual data about this rather than articles from hometown newspapers and posts by hometown residents, enthusiastically praising their way of life. It's easy to argue the counterpoint as well, right?
1). There are many jobs that have to be done in person
2). Many people prefer to live in large cities and accept the downsides in order to get the benefits
So, next time you want to make a claim like this, can you share anything objective about this? Thanks.
I know this is more anecdata, but I'm from Ohio and worked on a joint project with a couple of New York devs at #{famous company}. They were very good, and I learned a lot from them, but I could definitely hold my own with them technically (as could another senior dev from my company). We're both family men in the Midwest with no desire to move. Even if you offered me $350,000 or whatever I'm still staying here. But I would obviously take a remote job with #{famous company} if it paid 60-70% of that and I felt like I wouldn't be a second-class citizen as a remote dev.
Chiming in as a Midwest engineering manager here (Michigan). There's no lack of talent in the Midwest, although it's certainly a different calculus to try and match hiring to the supply/demand of engineers here and not everyone does so appropriately.
>>Being a software engineer in SF means constantly learning new tech and 'keeping up'
>No, it does not.
Maybe you consider Kubernetes an old technology by now? Maybe you consider React.js an old technology by now? What about docker? How about ES6?
People at slower tech firms are still building working B2B web services with ES5 jQuery and ASP.NET. The engineers there have been working with jQuery since its inception. They know it inside and out and have the skill and depth of knowledge to work around the drawbacks and design flaws.
This is from my experience working at smaller tech firms. I've moved to the city now and I can see the difference in tech and I can feel the difference in attitude too. I'm not going to link you a study or any data because no one is out there studying this stuff. This is opinion not science.
>1). There are many jobs that have to be done in person 2). Many people prefer to live in large cities and accept the downsides in order to get the benefits
Both of these statements are true but I don't see how they are relevant? I'm not denying either of these facts but it doesn't stop the small town engineers from existing.
> Maybe you consider Kubernetes an old technology by now? Maybe you consider React.js an old technology by now? What about docker? How about ES6?
My father (and many others like him) has been programming in C at a prominent SV company for the past 15 years (OK, it's based in the South Bay). I know many people in SF doing similar jobs, just coding away in Java or C++. Those people come to work, do their work, then go home. They don't tweet, or write Medium posts, or have dark green GitHub activity profiles. They don't work with you, so they don't talk to you. You aren't aware they exist. However, these people build many of the systems that make our day-to-day lives possible.
> I'm not denying either of these facts but it doesn't stop the small town engineers from existing.
I'm not saying they don't exist, I'm just saying that there just aren't that many of them.
Fair enough and I agree there aren't that many of them.
I have a lot of respect for people like your father. That's why I wanted to represent the small town devs who are similar in many ways. I personally am a little sick of the whole scene and constant newness.
I'm learning C++ in my free time because I've become disillusioned with life as a JavaScript developer. I know there are C jobs in embedded systems and OS dev, but I didn't know there were still a lot of C++ jobs around outside of games.
> I personally think if big tech wants to hire in the US and still pay lower $ than they currently do, the only lever they have left to pull is the interviewing format / bar.
The fact that big tech has not yet changed the format of the interview I believe shows that this convergence has not yet happened. This system is designed with tolerance for false negatives (qualified grads who will be rejected). At some point if these companies had a true demand for more graduates, they would revise the way they evaluated candidates to limit the number of qualified rejections.
If supply is tapped, then the risk of a mis-hire also increases (higher wages, more difficult to replace in a timely manner, etc). This might counteract any downward pressure on the hiring bar.
That's a pretty healthy salary in most parts of the US as well. I'm still young and make around $5k/mo USD before taxes as a web developer in Texas. Take home is around $4.4k
My SO and I have $30k left of student loans to pay off and then we're throwing our entire salary at buying a house before our city becomes even more expensive. We're hoping to be able to buy that house before the end of 2021.
Luckily we aren't in Austin or we'd be screwed already, but many parts of our city are already too expensive to own property in unless you're making $200k/yr or if you're comfortable leveraging more of your salary towards housing.
Now, we could move and I could keep my salary since I work for a fully remote company, but my SO's job is here in the city and she wouldn't make the same salary in a smaller market. We'd also be leaving our friends/family just to save some money on housing costs so the benefits aren't really worth it. It's dumb to have a really nice house in the middle of nowhere if we don't have visitors to share and enjoy it with.
So even with my healthy salary in a lower cost of living city, we're still struggling to get ahead due to student loan debt, healthcare costs, and housing costs. I would love to move to Europe and even take a slight pay cut to live in a more cohesive society, but from what I've researched, getting a visa without having $$$ in assets is difficult.
Idk where that was all going, but thanks for listening :)
Getting a visa and work permit for a programming job is pretty easy in many European countries once you have the job offer. There are coders from all over the world working in places like Berlin and Amsterdam.
A big breakthrough could be in finding ways to use contractors instead of employees, or in other words allowing people to contribute when they want rather than being tied to a full-time employment contract.
Obviously there are big, possibly insurmountable, obstacles related to the cost of onboarding/learning bespoke tech stacks, the need to preserve trade secrets and serial dependencies that require work to be performed quickly.
The most salient part of this essay was the last section. Remote-first is taking shape in a way that breaks tech labor power. Employees will be less able to bond, and their personal conversations will be easier to spy upon. This will drastically inhibit collectivization. WFH may have some upside, but there’s a huge downside looming.
> Remote-First, Collective-Last
> Lastly, let’s talk about the impact of remote-first on labor. Many months, before the virus hit, a CEO friend of mine “jokingly” told me that he believed all the “remote” buzz was as much about reducing the collective power of the employees as it was about saving a dime on salaries. He personally did not want his company to go full-remote but was under some pressure from investors to consider it. We were both hammered at the time, and I didn’t put too much thought into it, but it feels right the more I think about it.
> The most salient part of this essay was the last section. Remote-first is taking shape in a way that breaks tech labor power. Employees will be less able to bond, and their personal conversations will be easier to spy upon.
Slightly tangential to this thread, but isn't this also a tacit admission that employees will be collectively less creative when working remotely?
Because the majority of remote work communication will be electronic rather than face to face. It will be monitored, stored, and searchable (Slack logs, GSuite email, etc.). That’s much more difficult to achieve at the proverbial water cooler.
Not sure which rock you’re living under, but tech is probably the last bastion of labor power in the US. Sure, they aren’t well organized, but Google employees recently canned a DoD contract [1]. Saying no to Uncle Sam is a huge flex, and you can bet that pissed off some overlords.
Blue collar workers in the US have lost almost all of their labor power due to offshore workers or immigration (i.e. scabs). Tech has had it easy for a while now, but it’s the next target; the immigration debate is already shifting from “jobs Americans won’t do” to “merit-based”. Gig economy is another false liberation ploy being used to weaken collective bargaining power. Remote-first is yet another.
> Because the majority of remote work communication will be electronic rather than face to face. It will be monitored, stored, and searchable (Slack logs, GSuite email, etc.). That’s much more difficult to achieve at the proverbial water cooler.
The majority of communication is already electronic. I've worked in offices and remote, and there is practically no difference on that front. Most discussion takes place in Slack, Jira, GitHub, and email either way. Most companies already have team members distributed across various offices; remote work isn't new. If you want to avoid monitoring, then spin up a Google Hangouts or Zoom meeting; nobody is monitoring that.
The difference is that in an office, 1. your physical presence is also monitored (no joke, I worked at an office once where every day an office assistant would secretly record what time everyone showed up to their desks), and 2. your internet use is also monitored, which is extremely creepy and I'm surprised doesn't get more attention.
When I work remotely, there are no IT managers able to spy on my internet traffic. Sure there may be some dystopian corporations who try to force their remote workers to log into "company VPNs" and install "monitoring software" - I have absolutely no interest in working for any such companies and have never had to deal with that.
As a software engineer I've been fighting for the freedom to work remotely ever since I joined this industry, so having the freedom to live wherever I want and work whenever I want is a huge win. I've already been working remotely since pre-COVID, so I'm happy that other workers will also be able to enjoy this freedom.
If you work at a corporation beyond a certain size, usually big enough to have an IT department vs one 'IT guy', there is definitely some sort of monitoring software installed on the computers given to you.
It just lives in kernel modules or as an OS config, and they do not tell you as an employee; it's done for 'compliance' or 'security' reasons, so it's not obvious what it is when you look at the list of processes in your computer's task manager.
Some popular ones in the Bay Area for macOS at startups are CrowdStrike, Carbon Black, Jamf, OpenVPN, Umbrella, CrashPlan, Munki, etc. Not to mention the OS management configuration stuff like MDM profiles for macOS and Active Directory configurations for Windows. A lot of brand-name corps you might think are 'good guys' use this, like Lyft or Dropbox. Similarly with FAANG companies, where it might be custom software.
Osquery core developer, consultant, and technical committee member here...
We've always treaded carefully around privacy concerns as a project. This is why you don't see tables that access information such as browser history. If you join our Slack channels you will see open discussion of privacy implications for changes and improvements.
There's a balance to be struck between visibility and privacy. If a security team has no visibility into a system, they can't secure that system. This doesn't mean they need to be able to look at your family photos and read your messages.
I work with folks who care deeply about privacy and trust with their users. These folks ensure that osquery configurations are available to users, so that they can see what exactly is being monitored.
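As a rough illustration of that balance, here is a minimal sketch of the kind of coarse, system-level visibility involved (my example, not an official osquery snippet; it assumes osqueryi is installed and on PATH, and uses the standard processes table):

    # Minimal sketch: ask the local osquery tables for coarse, security-relevant
    # facts (running processes), not personal content like browsing history or
    # message bodies. Assumes `osqueryi` is installed and on PATH.
    import json
    import subprocess

    QUERY = "SELECT pid, name, path FROM processes ORDER BY pid LIMIT 10;"

    result = subprocess.run(
        ["osqueryi", "--json", QUERY],
        capture_output=True,
        text=True,
        check=True,
    )

    for row in json.loads(result.stdout):
        # osquery returns all values as strings in JSON mode.
        print(row["pid"], row["name"], row["path"])

If the query schedule a company actually deploys is published to users, as described above, anyone can check that it stays at this level of granularity.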
I think the real issue, which has nothing to do with osquery or any specific piece of software, is the corp being able to push any software onto workers' computers and spy on their employees secretly. I call it the stalker IT employee problem, or the psychopathic manager / lawyer problem.
The solution will be legal in the end, I think, like in some European countries that don't let you do this kind of surveillance on your employees. And if you want to look at an employee's work emails, you do it in front of them with their lawyer present.
I'm glad you guys are trying to keep some semblance of privacy, though.
If you're working from home, wouldn't the way to go just be to have a second physical machine set up next to your work machine and do your personal communication from there?
Yes that is what I recommend. At work I have a second BT keyboard and send messages through my phone.
There is definitely still friction in the entire process (e.g. sending a friend a link to an HN article that you saw in your work machine's browser), which induces a lot of people to just log in directly on their work machines.
Since we are all WFH now I've been meaning to set up some sort of synergy setup so I can keep it really separated, but still have less friction in the 'share a link' scenario.
Good points. Not saying there isn’t an upside to remote work, but there are downsides as well, which in the long run, may overshadow the benefits. It’s much harder to build camaraderie in a Hangout or Zoom than in frequent, random, physical interactions with people. There’s also a biological component to bonding (pheromones, touch, oxytocin, body language, etc.) that simply won’t exist in a virtual environment.
I'd also point out that you're enjoying the luxury of those freedoms (privacy, non-creepy norms) because labor power still exists and you can easily move to a different company or whatever. But fast forward 10 years: everyone is remote, the labor pool is much larger, norms of monitoring and policing communication are established, and then things start clamping down across the board as standard practice.
If only one company goes remote, then the collective power of employees certainly goes down ("divide and conquer").
But if the entire /job market/ goes remote and access to the net is a public commodity, then you can apply for all jobs /anywhere/. Employees' bargaining power increases significantly. Furthermore, any employee can start his/her competing business in an instant.
As a side effect, wages can go down but if you don’t work in the bay area you can come out /better off/.
So it's not clear that the downside is going to be related to workers' rights. I am personally more worried about the societal / mental-health issues with WFH.
Agreed, I found this to be the revelatory part of the post. It’s difficult enough in person to try to organize demands from management, I hadn’t considered how much more difficult that will become with a distributed workforce.
Agreed that that was the most interesting part of the essay, but not the "easier to spy upon" part - I've been working remotely for years now, and whenever a conversation gets anywhere close to being "sensitive" (e.g. even talk of potential side projects) I just quickly open a non-work communication channel and continue it there (e.g. another Slack, FB Messenger, what have you).
I connect with my colleagues on LinkedIn for this reason alone. It allows us to have discourse outside of work-sanctioned chatrooms. Having said that, I've never worked at a place where anything has resulted from private communications I've had with my peers. And it would have, had it been posted on the wall in the breakroom.
Will you trust LinkedIn to keep your data private when your employer (or a future employer) hands them a big fat check? Not trying to be snarky, just want to point out the exposure points.
Yes! So sad I had to scroll so far down to see this. There is a huge risk here that we will be isolated more than ever, and yet more dependent upon other unseen actors. The contrast between these two phenomena is one of the downsides of capitalism in practice, and this can crank it up.
Here are some crucial steps we must take in defense:
- Make sure working around non-employees is legal: no non-disclosure agreements saying you can't work in a coffee shop or other shared space because of the potential of being overheard. This would be the best hope.
- Towns or cheaper cities, not exurbs. People must know their neighbors better if they know their coworkers less. Time spent walking around the office must be replaced with time spent walking elsewhere.
- More aggressive antitrust. If labor becomes more balkanized, capital must also be.
- Workers on boards. As in Germany, this needs to be mandated once a company grows past a certain size.
Notice that the last two point to the two healthy solutions: a decentralized small-business economy versus giant co-op socialism. The point is basically that all institutions must score high enough on the sum of decentralization and democracy, and that the total of the two is far more important than the relative merits of each.
I don't really agree with the point that because we're creating things that allow us to use less or no code, there will be less code to write. We make things easier for ourselves so we can then build upon it and then write more complex things to solve more complex problems. It's the continual layering of abstraction that's been going on for decades.
I think the argument is that things like Wix put web developers out of business. Instead of real estate agents hiring a local webdev to make their listings site, or whatever, they can just use Wix.
Those webdevs are out of luck if they can't flex into something else. But, generally, I think you're right. When those jobs go away, in their stead we do other things with software. So instead of building websites for local real estate agents, SWEs are building systems to make virtual RE agents using ML.
I also find that sites like Wix are rarely used as a replacement for an in-house dev team or professional consulting, but rather by businesses that wouldn't have had an online presence in the first place.
No business I know paid "good money" for that. At most they would get a neighbor's kid (one of them being me) to throw together a few HTML pages. Fancy ones had a PHP visitor counter.
I guess I've never considered mostly static brochure sites the purview of web development or "doing software" of the style referenced by this article. It's more like web design. WordPress, as ill-suited as it is, had already begun the incursion into design that Wix and others continued.
But, displacing static-site designers is a much easier task than displacing developers due to the latter's work with logic, interactions, etc. Back in the 90s webapps were still nascent and just moving past cgi-bin to some extent, but the bar on replacing devs gets higher with each passing year. So, absent a huge leap in tooling or code generation, etc., I see dev roles changing more than going away.
>A web store can now be made in minutes. A news site. A forum. Heck, there are tons of turnkey company intranet solutions...
Well, sure, yet demand for developers remains strong. They're now moving up the complexity chain, working on different problems, etc. And, of course many are integrating with turnkey solutions like some you mentioned. Notice how so many of these "turnkey" solutions offer APIs?
A web store or forum isn't going to differentiate a business to success any more than a snazzy website will. The bar is now higher. The cycle continues.
Exactly this. I work for a company that specializes in helping to scale business websites that have moved past the out of the box limits of things like WooCommerce and Shopify.
We do consulting for new businesses every month that started with these turnkey solutions, which worked great for them until they grew big enough that the system started to break and hiccup more and more. For those clients, we provide an extreme amount of value to keep their online machine well oiled, and they happily write us that check every month.
I will say though, that often the platforms I mentioned are not the real issue in the chain, but rather other tools and custom code added on top to hastily support some business decision. Shopify is a great example. Awesome core product that is extremely performant, but a lot of the themes and pre-built templates out there are not built to facilitate a rich and performant experience for customers. Shopify will gladly let you serve a 20mb banner video that will make even modern desktops chug to render and it doesn't really affect them at all. It's our job as consultants to come in and show the business where they can optimize to increase their conversion rate. They already have a product people want, otherwise they wouldn't be having scaling issues, so it's nice for us to get quick wins that result in positive outcomes for everyone.
>Exactly this. I work for a company that specializes in helping to scale business websites that have moved past the out of the box limits of things like WooCommerce and Shopify.
Sure, but most of them won't move "past the out of the box limits of things like WooCommerce and Shopify". For the 1 that gets bigger or has special needs and does, there will be another 9 that are just fine using those platforms...
And of course WooCommerce and Shopify can always add more customizability and capture those features eventually, so those devs who help companies "move past the out of the box limits of things like WooCommerce and Shopify" become commoditized, glorified configurators (or the owner or admin of the company can even do it themselves).
>most of them won't move "past the out of the box limits of things like WooCommerce and Shopify"
Many of those who never move past that phase were likely not candidates for custom development anyway. They'd likely have sold on eBay or Amazon, whereas Shopify offered them the chance to have their own "store".
That is, many who can spend $29/month (or whatever) for a Shopify store wouldn't have been able to spring for $5K or $10K+ to have a custom store built. I would bet that's the overwhelming majority of Shopify customers.
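To put rough numbers on that gap (a back-of-the-envelope sketch using only the figures quoted above, ignoring hosting and maintenance costs on the custom side):

    # Back-of-the-envelope: a $29/month hosted store vs. a $5K-$10K one-off
    # custom build, using the figures mentioned in the comment above.
    monthly_fee = 29                      # hosted store subscription, USD/month
    custom_build_costs = (5_000, 10_000)  # rough range for a custom store, USD

    for upfront in custom_build_costs:
        months = upfront / monthly_fee
        print(f"${upfront:,} custom build ~ {months:.0f} months "
              f"({months / 12:.1f} years) of subscription fees")
    # => roughly 14 to 29 years of fees before the custom build's sticker
    #    price is even matched.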
So, arguably, you might even say Shopify helped some companies to get to the place where they can afford custom development.
But, this whole store thing is just a narrow focus anyway WRT the idea of obsoleting devs. There are plenty of more complex/differentiated businesses that have sprung up since e-commerce, and by the time any of those things have been commoditized, there'll be still more to take their place in this ongoing evolution.
People use Wix, get some traction, and then reinvest. Over time, they want to do more and more, but begin bumping into the limitations of the platform. That's when they hire a web developer to build them something.
That company wouldn't have otherwise seen enough value in the web to pursue hiring a web developer.
> I think the argument is that things like Wix put web developers out of business.
When I started, only the most keen companies even had websites. I built them, and many weren't much more complicated than what you can do with Wix.
Now every company needs an online presence, and most can get by with Wix, but the number of companies that need a developer to build something competitive is larger now than it was back then.
The market is growing fast. But the sophistication users demand is also rising just as fast as tools appear that can accomplish what was perfectly acceptable 10-15 years ago.
>We make things easier for ourselves so we can then build upon it and then write more complex things to solve more complex problems.
Only, with SaaS, most of us aren't needed to "solve more complex problems". The SaaS team can already solve them and sell them...
Most companies don't have "more complex problems" anyway, just problems stemming from bad software integration, which affiliated SaaS services can also solve...
So while there will always be coding work (at building SaaS and at building highly customized solutions), there will be much less of it...
>It's the continual layering of abstraction that's been going on for decades.
It had been going on for decades, until it got reversed with the Cloud and SaaS offerings. We just haven't seen the full impact of those yet. But that's what the article is about.
His thesis seems plausible, but I’m far from convinced. It seems equally plausible to me that as software gets easier to write, we just write more software. Industrialization of clothing meant the amount of labor required to create clothing dropped drastically. However more people are employed in clothing and fashion related industries than ever, because we now just own and wear more clothes than ever before.
My dad loves to tell this story about going to engineering school in France in the 70's. His guidance counselors advised him to stay away from CS or software - the reasoning? "We'll soon enough have programs able to write themselves - studying to be a software engineer is a dead end".
I'm not sure that some common engineering problems now being solved means anything more than that we can redistribute that workforce to other unsolved problems, or to building new products on top of those solutions.
I had a similar experience in France in the early 2000s. I went to the unemployment office (ANPE) to ask for training, and they told me that software was a dead end and that no training was available. So I did my own research, found a training organization, and went back to the unemployment office with the exact reference of the training I wanted. The exact same person then reluctantly pulled open a drawer of her desk and gave me an application form. That training kick-started my career in software.
Another thing that was commonly said over the 2000s was that all software jobs would quickly be outsourced to India. That didn't happen either. Some big companies obviously tried it (and some still do), but a lot of them eventually realized that software wasn't something you could just specify and outsource to a team far away, with a different time zone, culture, and language, and expect to magically get working software as a result. In the meantime, demand for software in India has also grown. I laughed out loud when in the early 2010s I received an email from an Indian recruitment agent inviting me to come work in India.
Industrialization of clothing meant the amount of labor required to create clothing dropped drastically
Those jobs just moved to Bangladesh. I think they are just as inefficient as always. But the west traded textile jobs for other forms of service industries and more complex manufacturing.
No one makes clothing inside a Western country as cheaply as they do in Asia, but the factories in Asia are not heavily automated. They just have more people, and economies at a stage where that sort of work still provides a living.
I suspect we'll just make more software. You still have to think, and there's an upper limit to how much you can automate, and how costly that last jump from some automation to full automation will be.
To clarify, in my metaphor I am comparing the efficiency of labor before and after the industrial revolution. Bangladesh is an industrialized country, so I am not sure I understand your point.
To weave enough cloth to make a single shirt by hand takes orders of magnitude more time than it takes for a loom to produce the same amount of cloth. This is true regardless of what country it happens in and how much the person operating the loom is paid. Yes, the cheap clothes one can buy at H&M, Uniqlo, or Gap are made in Southeast Asia, usually by workers being paid inhumanely low wages. But those clothes are universally made with fabric that is produced by looms and other modern machinery.
My point is just that the volume of clothes produced today (in southeast Asia, for example) would require orders of magnitude more workers to produce in a pre-industrial world.
Despite this massive efficiency jump, there are not fewer jobs in clothing and fashion related industries; there are more, because we wear so many more clothes than we used to.
And what happens when Bangladesh (then Pakistan, then Ethiopia, then etc.) becomes developed enough? Eventually you run out of countries and regions to offshore to.
It's kind of a weird comparison, because (beyond what's necessary) buying clothing is consumption. But buying software is often an investment, so people and businesses can justify spending enormous amounts on it, compared to clothing.
Eventually the market for software will be much more saturated than today. It has already happened for consumer desktop applications. The rate of new commercially successful desktop applications has dropped in the last thirty years. Sooner or later software for businesses will be "done" too.
I think that's just a shift. It's not that the market for the functionality dried up; it just all moved to the internet. And that market is ever-booming.
No, it's saying that since most people in developed countries who want a car already have a car, the market for cars is markedly different from the first half of the 20th century.
“No code” platforms aren’t going to replace software developers. I’ve tried out several of the major ones and came away with the conclusion they are this generation’s Microsoft Access. Someone who isn’t a programmer might be able to build something in them, but their IT department won’t want to support those solutions and they will definitely need support. I expect a cottage industry of no code consultants, who will be programmers under a different name.
I think something like Airtable or Microsoft Lists is going to replace a lot more custom software. They are this generation’s Microsoft Excel.
When someone posts about the inevitable commodification of software development, the response on HN is usually one of denial. I'm wondering if some of these responses are starting to sound a little angry too - maybe we're going through the five stages of grief as a community!
Lowering the bar to software development and opening up the employment market to a global one is going to have an undeniable impact. It certainly won't take away from the fact that problem solvers are needed to create solutions and it also doesn't mean that there won't be a place for the highly skilled people with specialist technical skills. But it will change the landscape significantly.
Software engineers will lose a lot of the influence and power they have enjoyed over the last decade. In my view, this is an important and necessary step. The ability to build software that solves meaningful problems shouldn't be concentrated in the hands of a few people (even a few hundred thousand) who have a narrow view of the kinds of problems software should be put to work solving.
The article expresses concern that this displaced power could end up in the hands of big tech. I'm more of an optimist and believe that we could go in the other direction altogether. The ability to create problem-solving software can and will be democratized to a much broader degree than ever before and will lead to more meaningful problems being solved across the world. As a software engineer myself, that's the future I want to help build towards.
> We are coming to a point where software is developing so fast and the abstractions getting better that soon we will have more software written by a smaller number of people.
I think software is just a big pile of garbage. The thing is: our civilization depends on this pile of garbage. It's remarkable that the whole thing somehow chugs along.
Over time, the pile will just keep getting bigger and bigger and more people will be needed to keep it from collapsing.
It's a false assumption that because software is getting easier to write, fewer people will be writing software.
Yes it's true that software in the future will be easier to write, but we will not be solving the same problems as today.
In the past (20 years ago), developing a TODO app was relatively difficult and could have been a valid startup idea. However, today it's a trivial application to develop and is the equivalent of a "hello world" program.
The reality is: writing software gets easier, while the problems software solves get harder...
This is the right take. Same thing with 2D games to VR/AR. Infotainment systems to Self Driving, Real Player buffering to Netflix. And the list goes on. You can't get this progress simply by making better abstractions. You can only paper over the complexities of "solved" problems.
Not sure if this is actually a serious (but misguided) take, or just the kind of thing you have to write when you need to blog something but don't have a good idea this week?
Software is getting more complex, not less.
No-code, no matter how magically great, won't change the fact that you have to somehow understand and codify your requirements and figure out how to achieve them through available building blocks. (A no-code solution is just a tradeoff in building blocks -- the more one does for you, the less flexible it becomes. That lack of flexibility has a cost: to the degree that it doesn't exactly fit your system, it pushes complexity elsewhere, blunting or even reversing the value.)
The longer-term effects of a remote-first world are a much more interesting topic, at least (though I think the author still completely whiffs; the ideas on why remote work will be bad for employees are very weak).
In my opinion, the author is a bit disconnected from the reality of software.
We've clearly learned a few things during the last five decades, our abstractions are better today but progress has been mostly slow, painful, and linear.
A lot of the gains we're seeing are built on top of the tremendous progress made by the hardware guys, we've been free-riding for a long time.
I think that a few bullshit jobs might be exposed/compromised by the transition to remote work, and that includes managers, but not people doing creative work.
I'm not buying it. Just as MS Office didn't reduce admin work, but instead created whole new categories of busywork for executives to do, so new software frameworks are mostly just increasing the complexity of the overengineered solutions we create to address what are really just simple CRUD interface problems in the majority of cases. If only classic ASP in VBScript was still being security patched, I guarantee you the majority of web apps could be written in that to be so much faster and more responsive than any of these "modern javascript" front ends that have to make 78 different API calls across a fleet of microservices just to show part of a database table on a page... after the user has sat looking at some dumb spinning beach ball for 5-8 seconds. Naturally, you could maintain that shit with only one or two developers instead of the rooms full of fullstack devops rockstars that it takes to build even the most basic thing these days. Without a kubernetes cluster in sight. But as long as our entire industry continues to throw away basic architectural and performance principles in favour of embracing the new hotness, your jobs are safe. Enjoy the busywork, folks, and let's continue creating additional layers of abstraction and indirection to ensure the next generation requires even more armies of developers just to say "Hello, world."
1. We have no-code already. It is called WordPress and its hundreds of thousands of plugins. Is WordPress eating everything? Nope. Is it destroying web development? Nope.
2. This entire post does not align with the existence of SAP, IBM, and Oracle. Vast, vast, vast, amounts of code are highly customized for a specific domain. These companies would already have lapped up these synergies to boost their own profits. I really want to know how no-code solves the processing of custom contract objects.
This is no different from saying that economic progress, in general, will eat itself until hardly anyone is employed. Which people worried about, for example, in the early 1900's as farm employment fell dramatically due to efficiency.
Spoiler alert: it didn't happen.
What people always seem to miss is that as things become more commodified and more efficient, we move onto solving harder problems that aren't commodified or efficient.
As programming CRUD applications or managing databases becomes quicker, we start working on better product ideas, cleverer scalable architectures, and whole new product categories.
Innovation never eats itself. This is a myth that people seem to fall for decade after decade. (Unless we somehow reach the "singularity", but that's purely in the realm of science fiction for now, and certainly not worth worrying about until we solve AGI.)
Software automation has been going on for a long time - there's no denying that. I don't see what that has to do with remote work though. Seems like the author just stuck in the remote work buzzword to get some more clicks despite that being a completely separate topic.
> a CEO friend of mine “jokingly” told me that he believed all the “remote” buzz was as much about reducing the collective power of the employees as it was about saving a dime on salaries.
I've spent the entirety of the ~6 years of my software engineering career wondering why a profession where you're in front of a computer screen all day was so hostile to remote work. At every job I'd try to persuade managers to be more open to flexible hours and WFH, ultimately realizing it was impossible to change corporate office culture and giving up in favor of working exclusively in remote roles at remote-friendly companies.
So after years of fighting this fight for increased acceptance of remote work, it's hilarious to hear some CEO telling people that remote work is a ruse intended to decrease our labor power. That's like hearing a CEO say, "We've decided to increase the salaries of our engineers above market rate in order to put them in 'golden handcuffs' so they won't leave." Sure, a conspiracy theorist can always theorize about some possible hidden motive, but I'll take the raise, thanks.
Remote work might be damaging to the mediocre SF engineer whose inflated salary has until now been protected by companies'/investors' insistence on engineers being in SF, but it's a boon to everyone else who never wanted to be tied down to SF and living on some office campus.
> We are coming to a point where software is developing so fast and the abstractions getting better that soon we will have more software written by a smaller number of people
Maybe I'm on the wrong planet, because all the software I've seen in the past ~10 years is an overcomplicated, unmaintainable, slow mess that needs more and more people to keep it from imploding.
You obviously haven't used C++${CURRENT_YEAR}, React rand().rand().rand(), and (define-new-silo functional-p). They are so amazing, it's like the software writes itself. Too bad they are not compatible with each other, or we wouldn't even need programmers anymore, they are such an innovation.
I worked on 20-year-old software with quite some technical debt, and then for 4 years on software doing a fraction of the same thing, written from scratch with today's "methodologies" (move fast, abstraction is everything, so is the framework, VM shipped, open source integrated everywhere, ...).
The "new" software has technological debt that far surpasses the old software - but this happened in 4 years, and the team surpasses size of the team for old software. I just cant see how it could survive 20 years. But it was written faster, on the other side, no one really knows its internals - however they brag - due to huge pile of code taken from open source projects. I predict that "death by thousand cuts" will occur spontanously.
Look no further than Docker, which is the tech equivalent of giving up on Earth and moving everyone to Mars because we've screwed up so bad. We transpile one version of JavaScript to another and our stack requires webpack now. Remember when the browser could just load JavaScript without half a dozen middlemen in the way? Back when you didn't have to compile one interpreted language to another, not even gaining a higher level language in the process.
It's rather laughable that software will solve the mess that software creates any time soon.
Docker is to deployment what an API is to development: Hide the complexity inside some barrier and present a cleaner interface. The complexity can be well-managed or not; the point is, the outside world shouldn't have to care what you're doing in there as long as it works and is reasonably efficient.
I think it's absolutely true that remote means less solidarity and connection between workers. Also, hiring international remote workers is usually done on a consulting basis, with fewer regulatory protections. I certainly see a world where software engineers become more fungible, and with the reduced messiness of human interactions, managers will be ruthless.
There are also some interesting arbitrage opportunities. For example, American companies would want to hire from countries with socialized medicine so that they don't pay for health benefits.
It could be more efficient, and it could lead to reduced poverty in some places in the world, but on net I don't think it's desirable.
Ironically, the people who champion remote work the most do it from the perspective of the benefit to the worker. But as this essay suggests, and I agree, if you play it out, it will work out to the benefit of the companies and executives.
To be clear I'm not against remote, and could see the benefit to everyone, but it's naive not to play it through and think of the second-order effects of a fully remote industry.
Software started as flipping switches, then moved on to tape and punchcards and then to keyboards and the ability to write in words instead of bare machine logic. Soon we were in the print phase where "print" meant putting physical words on physical paper and sometime later you were able to actually view your program on a screen without needing to reprint it. A generation later we get code generation from a WSDL and squiggly lines under typos. A generation later we get AI that tries to predict if I'm more likely to buy the Smooshtek or Prodex version of the exact same item on Amazon but can't come close to solving unknown business problems independently.
I guess what I'm trying to say is that the first and last job of software is to eat itself. That's the environment, and if you want to make it making software, you have to learn how to use the new impossible thing that software can do, and not take it as the brand-new end of the world.
We could draw parallels to warfare, the one industry that historically warps everything around itself. First we fought with spears, swords, and shields, then with primitive muskets. Suddenly tanks and planes and helicopters join the fray. Then carriers, submarines, missiles, and targeting systems. Radars. Space satellites. I'd be surprised if there aren't secret projects that simulate the same potential conflict a million times to decide the chance of success given new parameters.
And every time the new advancement in technology blows the previous ones out of the water, but it is open to its own exploitable faults. Tanks beat infantry in open field, but they're less suited for urban warfare if you don't want to leave the city in ruin.
The last few companies I've worked for were using what would be considered the latest trends in software development: SPAs, React, Redux, BFFs, monorepos, Kubernetes... I can tell you, we're not reducing the need for developers at all... if anything we're building things so complex and overcomplicated that they will require generations and generations of developers to maintain and understand. So we're pretty safe for the time being...
Yes, then you have things that would really cut down the number of people working in the field, such as Heroku, App Engine, Firebase... but so few companies use them that, compared to the messes we're creating in other places, I don't see them causing any collapse of the job market.
Considering the struggle of companies to find qualified people to work for them, even when offering good salaries, it's to me a huge exaggeration to say that going remote will represent a disadvantage for software engineers. I've been a software engineer for 10 years now. If you see the struggle we have sometimes with dumb topics like timezone issues or ridiculous security issues everywhere, you'll understand that software is really far away from eating software.
Also, saying that it's easier for the company to get rid of people may be somewhat true, but it's easier for people to pivot and find better opportunities as well. To me it will increase job rotation, which in my particular experience has helped me A LOT to improve my skills. This will help improve job conditions as well; companies in search of more stability will have to be even more creative to keep smart people around. They don't seem to have understood this yet, unfortunately: good job conditions are not about having a ping pong table and a bloody PS4 to play with.
It’s not the software engineers that will be automated out of a job, it’s everyone else. We will need to deal with the fact that jobs which require very little creativity and ad-hoc problem solving will largely be automated away much sooner than software will be writing itself.
I think the author is a little out of touch with reality. In reality, state government unemployment websites that are really simple CRUD apps collapse from a few thousand daily users. Lots of simple office work can’t happen remotely because people are still using paper files. There is plenty of disruption to be had in the world that can be solved by software.
> ... wrote a persuasive piece on how companies should pay people based on what value they add, not pay them differently based on where they want to live.
I learned the concept of the "golden handcuff" the hard way. We paid a remote worker a fraction of the salary they'd make in a major European city. But that salary was still a multiple of what they'd make at a local job.
This turned into a golden handcuff, as the employee started to lose interest in the job, yet would try hard to convince themselves that they're still interested. That was likely because any other job for them would mean a 66% pay cut at the least, and a significant downgrade in their standard of living (unless they'd find another employer as inexperienced as me).
Lesson learned: Negotiate close to market rate, and let the fulfillment of the job be the talent's biggest motivator.
> software is developing so fast and the abstractions getting better
Software won't eat software because this isn't true, yet.
IMO, the last 10 years have been an explosion of new frameworks and patterns that are overly-complicated and bloated. Because of this, there isn't an easy cohesion between off-the-shelf solutions.
Think of how eye-opening UNIX was when it first came on the scene, with the ability to pipe together commands to solve new problems. But think of how hard that would have been if each tool read ASCII differently, or piped text differently. Imagine if grep and awk were combined into one tool with a complicated set of switches.
I think eventually, working with off-the-shelf software to build solutions to tough problems will be as easy as gluing together 3-5 UNIX commands in a bash script to process some text files. But we aren't there yet.
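As a toy sketch of that "small tools glued together" model (written here in Python purely for illustration; the log file name and the pattern are made up):

    # Toy illustration of the UNIX-pipe idea: small, single-purpose filters that
    # all speak plain text and compose into a pipeline. Roughly the spirit of:
    #   cat access.log | grep ' 500 ' | awk '{print $1}' | sort | uniq -c
    # The file name and the pattern are invented for the example.
    import re
    from collections import Counter
    from typing import Iterable, Iterator


    def cat(path: str) -> Iterator[str]:
        """Emit lines from a file, like `cat`."""
        with open(path) as f:
            yield from f


    def grep(pattern: str, lines: Iterable[str]) -> Iterator[str]:
        """Keep only matching lines, like `grep`."""
        regex = re.compile(pattern)
        return (line for line in lines if regex.search(line))


    def count_field(index: int, lines: Iterable[str]) -> Counter:
        """Tally one whitespace-separated field, like `awk ... | sort | uniq -c`."""
        return Counter(line.split()[index] for line in lines if line.strip())


    if __name__ == "__main__":
        errors = grep(r" 500 ", cat("access.log"))
        for client_ip, hits in count_field(0, errors).most_common(5):
            print(hits, client_ip)

Each piece stays dumb and reusable because they agree on a trivially simple interchange format, which is exactly the property the parent argues today's off-the-shelf pieces lack.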
Is the success of the no-code trend really a given?
Insofar as I can tell no-code addresses configuration management of known entities. It’s not a dsl and it’s not a visual programming environment - it’s, generally speaking, a graphical layer on top of one or more yaml / xml / whatever-file(s) with a finite set of options.
That’s good enough in many, many cases. But it’s not a panacea. And it’s certainly not “no-code means no coders.”
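A tiny sketch of what that usually looks like underneath (the schema and field names here are invented for illustration, not taken from any particular product):

    # The parent's point in miniature: a "no-code" form builder is a GUI over a
    # declarative config with a finite option set, plus an engine interpreting it.
    # The schema below is invented for illustration.
    FORM_CONFIG = {
        "title": "Contact us",
        "fields": [
            {"name": "email", "type": "email", "required": True},
            {"name": "message", "type": "text", "required": False},
        ],
    }

    def validate(config: dict, submission: dict) -> list:
        """Apply only the behaviours the finite schema can express."""
        errors = []
        for field in config["fields"]:
            value = submission.get(field["name"], "")
            if field["required"] and not value:
                errors.append(field["name"] + " is required")
            if field["type"] == "email" and value and "@" not in value:
                errors.append(field["name"] + " must be an email address")
        return errors

    print(validate(FORM_CONFIG, {"email": "not-an-address"}))

Anything the schema cannot express ("dedupe this against our CRM", say) falls outside the tool, which is where the hand-written code and the coders come back in.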
And 25 years ago, no-code database apps were things like Microsoft Access, FileMaker Pro, or even HyperCard. And along with those Access apps were the programmers hired to fix the mess that the Access app became.
Within every big central spreadsheet or access database running some business critical process is a seed of a new software startup.
> We have no code for calculation. It is called Excel.
By that metric, Python is "no-code" for practically everything: It's code, but it isn't called that because it isn't written by coders, it's written by people who write software to do their jobs, see? "Formulas" and "scripts" and, in a previous era, "macros" are what we call software written by people who are being paid to do something else.
Excel comes with most people's computers and they can do it all by clicking buttons and drag and drop. If you are someone who spends all day in VBA, you are at least a software engineer lite if not a software engineer.
Doing a table of calculations in Python is much more difficult than doing it in Excel without programming knowledge.
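For contrast, here is roughly what a two-column "expenses plus a SUM at the bottom" spreadsheet looks like as code (a minimal sketch; the expenses.csv file and its item,amount columns are hypothetical):

    # Roughly the programmatic equivalent of a small spreadsheet with a SUM cell.
    # `expenses.csv` and its `item,amount` columns are hypothetical.
    import csv

    total = 0.0
    with open("expenses.csv", newline="") as f:
        for row in csv.DictReader(f):
            amount = float(row["amount"])
            total += amount
            print(f"{row['item']:<20} {amount:>10.2f}")

    print(f"{'TOTAL':<20} {total:>10.2f}")

Trivial for a programmer, but compared to selecting a column and clicking AutoSum, it is easy to see why Excel gets called the no-code tool for calculation.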
In my opinion it will provide growth for the software industries, as it lowers the cost of entry for many companies who would otherwise not develop anything.
It will also increase demand for integration and plugins for those (low-code) ecosystems, just like all platforms create.
I would argue this can never be true. If software gets to a point where today's problems can be solved by a 5-minute specification, engineers will be able to do more high-level engineering, connecting dots with those building blocks.
One example is memory management: at some point, we started having languages and powerful-enough hardware that engineers didn't have to worry about memory anymore (for certain classes of applications), so they could spend more time thinking of other things. It's not like switching from C to Java would cause a software company to get rid of 40% of its devs because there's now less work - it's just that one hindrance is gone. But there's always plenty to do; the backlog never gets empty.
Counterpoint: If this was going to happen it would have already.
Here's RMS talking about secretaries extending Emacs with Lisp:
> programming new editing commands was so convenient that even the secretaries in his [Bernie Greenberg] office started learning how to use it. They used a manual someone had written which showed how to extend Emacs, but didn't say it was a programming. So the secretaries, who believed they couldn't do programming, weren't scared off. They read the manual, discovered they could do useful things and they learned to program.
I've been hearing this same thing since the 80's when my friend's dad questioned why I would want to become a programmer when eventually computers would be able to program themselves.
“Anyone who’s spent a few months at a sizable tech company can tell you that a lot of software seems to exist primarily because companies have hired people to write and maintain them. In some ways, the software serves not the business, but the people who have written it, and then those who need to maintain it. This is stupid, but also very, very true.”
This is exactly what happened to higher education. It seems to be a kind of entropy that signals that a system is in major need of complete overhaul so it can get back to doing what it was actually designed to do.
> we will have more software written by a smaller number of people
This was the past IMO. Future will have larger number of people writing even more software.
I wonder if the author has met people working in non-software industries; most software is still unwritten. What we have now is mostly software for software developers.
> Future will have larger number of people writing even more software.
When the majority of the population became literate, there wasn't a huge jump in everybody starting to write novels. They merely started writing smaller things - letters, notes, and short-form, one-off stuff.
It's the same with software skills. There is going to be an increase, but in the form of small scripts and one-offs for tasks, rather than more software "products".
No-code is nice; there is only one problem, though. Computers are not getting faster. Sure, they can handle more parallel loads with more cores, and in the cloud you can scale horizontally, but building parallel systems is hard, and the higher up the abstraction chain you go, the harder it becomes.
So the software might be built by some drag-and-drop tool, and it will look nice, but performance will be terrible, uptime will be terrible, and there will be weird issues once the software gets many users.
Mo software, mo problems. Either coders will become the people managing all the different "no-code" solutions or companies will pay dearly for when, inevitably, their no-code solution meets a limit that was never considered. The problem with all the no-code solutions though is that it becomes mind-bogglingly difficult to remember how to navigate the graphical interfaces of each platform while the coder can focus on and enjoy the superior ergonomics of mere text editing.
There is no way that software will soon reduce the need for engineers.
One way to look at it is AWS, for example. It removed the need for hardware and sysadmin skills. But the cost of this is that now, instead of sysadmins, you need DevOps people, and probably in bigger numbers.
The way I see it, software creates problems at a higher rate than it solves them, hence the need to constantly change and evolve technologies, so that at least some of the technical debt created can be written off.
I feel that every no-code tool that gets sophisticated enough to replace manual human coding work will have a significant learning curve. So much so that we will need trained professionals to operate these sophisticated no-code tools.
Yes, people will be more productive using no-code tools over manual coding/deployment, but we will still need these skilled no-code professionals to use these things and ship faster.
My problem with "everyone remote" is the following scenario.
Suppose I'm sitting at my desk, and I need Bob's approval to check in some piece of code. In the current (office) scenario, I send Bob the request. If I see him lounging back in his chair, playing some game on his phone, I can walk over to him and gently prod him to approve my request. He can't deny his game-playing, so he has no option but to act right away.
In the remote world, I have no idea what's keeping Bob busy and why he's not approving my request. He could be playing games or out for a walk with his dog... or he could be fighting a (metaphorical) fire.
What we need is some sort of a "presence" signal. Unfortunately, we can't have one without some privacy issues.
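For what it's worth, chat platforms already expose a deliberately coarse version of that signal; Slack's Web API, for instance, has a users.getPresence method. A minimal sketch (assuming the slack_sdk package, a token with the users:read scope in an environment variable, and a placeholder user ID):

    # Minimal sketch of a coarse "is Bob around?" check via Slack's presence API.
    # Assumes the `slack_sdk` package, a token with the users:read scope in
    # SLACK_BOT_TOKEN, and a placeholder user ID.
    import os
    from slack_sdk import WebClient

    client = WebClient(token=os.environ["SLACK_BOT_TOKEN"])

    response = client.users_getPresence(user="U0123456789")  # placeholder ID
    print("Bob is", response["presence"])  # "active" or "away"

It only answers "active or away", not what Bob is actually doing, and that coarseness is probably where the line has to sit given the privacy concerns mentioned above.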
I dunno. Even in all my in-person office experience, the person I need to get an approval from is not within eyesight 99% of the time.
If it's urgent, you ping them over chat, then send them an e-mail if they don't immediately respond. If it's less urgent, just an e-mail.
It's none of my business what's keeping Bob busy. He'll see my requests soon enough. If it's incredibly urgent, I'll call. If he doesn't pick up, that's on him.
Software doesn't actually get easier to write. Yes, you have more building blocks to choose from and sometimes (very rarely) improvements in abstractions. But this also means that your software sits atop an ever larger mountain of code that you do not understand. This means that it is easy to do things half way, but it still isn't any easier to do things well.
Much software also has to model the world around us. Which doesn't really get any simpler as we try to solve more problems in systems that need software. Like self driving cars, or even something as everyday as your mobile phone.
I have exposure to software in medical devices such as MRI machines. As far as I can tell, in these expensive devices at least, the predictions of this article ('Software will eat Software') do not apply, unless you are talking about complete stagnation of new machine development. Apart from this, it requires programmers to understand MR physics to even begin making any sort of real contribution - it takes a minimum of 3 years to come up to speed after an advanced degree, IMHO.
>In my first job at a small start-up, we had tons of physical servers. Now, it's hard to imagine any "webby" tech company ever interacting with any hardware at all.
My record on predicting tech trends is terrible, but wasn't the outcome of that switch to cloud computing a massive increase in the demand for software developers? For now, at least, it seems that things that reduce the amount of work developers need to do actually increases their value.
I think while it's true that new tools allow fewer people to do more work, that also expands the pool of tasks engineers can work on - more fields come to value software, work gets deeper in the fields that already did, competition gets fiercer, and the market as a whole expands. This probably can't go on forever, but I don't see that trend ending any time soon.
I suspect that more people will be writing software as part of their job. They just won't call themselves developers. As the underlying software becomes more powerful (I'm thinking of Microsoft's recent demo of AI writing software), you will need less skill to build, relatively easily, something we currently consider a complex app. Many professional developers will find themselves migrating to solving harder, more advanced problems.
> Many professional developers will find themselves migrating to solving harder, more advanced problems
It may be wishful thinking that this is just going to happen without major investment in those developers from some sort of entity. And I would also argue that advanced problems are fewer and farther between than the "standard" problems being solved today by the majority of developers.
History has shown that displaced workers don't get to re-educate for free. Either the workers themselves will have to pay for such re-education (or advance their existing education to higher level, and thus able to do a more difficult job that hasn't been "automated" or trivialized via a tool), or the gov't pays to mass-educate.
If you currently work for a big-tech, and enjoy a high salary, it would be very prudent to try to invest your salary as much as possible to build a future stream of income for protection, for the eventual scenario where you are made redundant due to advances.
> If you currently work for a big-tech, and enjoy a high salary, it would be very prudent to try to invest your salary as much as possible to build a future stream of income for protection, for the eventual scenario where you are made redundant due to advances.
Good advice. That has always been true for programmers. We constantly have to invest in learning new kinds of things.
The job of the software engineer is to automate their own job. Most people stop their reasoning there and don't continue thinking about the downstream effects of that action.
The wealth generated from automation creates new opportunities that will be filled by the software engineer themselves because automation never ends. Software is never done.
The only way software eats software is by having intelligent people wanting better tools and abstractions to get shit done.
Building better tools and abstractions is hard work. For people to be motivated to do hard work, there need to be goals worth sacrificing for and a support system in place, to help and motivate the hard workers to keep going. Society has to want change and treat those on the front lines as champions for their cause.
That's how real progress gets made, and not only in software, but in all areas of life.
Americans seem to have forgotten, or maybe never knew, that this is how things work. This tendency to dumb everything down, to never offend anyone and never let anyone even have a chance to make a wrong move, has made everyone into a bit of a complacent dumb-ass, writing blog posts like these that say absolutely nothing.
If you want a bit of a shocking example of how deep this goes - you can't customize, group, rank, save, or do anything on the number one photo-sharing app in the world. You can only mindlessly scroll, like, comment and share.
I mean, we've reduced an entire creative medium to a single algorithm of behaviour, optimized for maximum time spent zombie-scrolling.
When people wake up to what zombified lives they've been living and that it hasn't been an accident but happened by design - it will not be software eating software, it will be software users eating software companies and then some for what they've done to humanity in the name of quarterly profits.
This is like saying building an abstraction of assembly language like C will put all the software developers out of a job. Rather it creates more opportunities.
As someone who has spent the last month fighting ansible and puppet: automation in software does not mean less work.
IMO: there’s an inverse relationship between maintainability and ease of use. Config files are extremely easy for large groups of people to maintain but it sucks making changes to them. As an individual I’ve found it’s often easier to rewrite the application myself to do what I need than to edit complex config files.
> We are coming to a point where software is developing so fast and the abstractions getting better that soon we will have more software written by a smaller number of people.
This seems unlikely to me, or at least unsupported by current observations. Software tooling and abstractions have improved tremendously since the early days of programming. Going from machine code on punch cards to modern high-level languages with assorted libraries and frameworks and debuggers represents at least a few orders of magnitude productivity increase per programmer.
But has that incredible efficiency gain resulted in fewer programmer jobs? Nope, just the opposite: the better the tooling is, the more useful each programmer is, so the more it makes sense to hire. This makes sense when you consider how open-ended programming is as a field: it lets you manipulate data, and it turns out manipulating data has a seemingly endless number of uses. It's not a field like, say, plumbing, where you need a set amount of plumbers, where going beyond that number doesn't really accomplish anything.
I don't see much evidence for the argument that software will destroy software development jobs. It's true, we have automated a lot of tasks and develop at a higher level now than we did in the past, but there are also more people working in software now than in the past.
You can do impressive stuff with a small team or even alone now, but the past also had small teams doing impressive stuff. Look at how Apple was founded by two guys, how many startups started with a small team. In the 1980s, many very successful games were written by one or two people. We make more sophisticated games and systems now because we build on the abstractions created then.
The demand for more sophisticated software will continue to grow, and we're not remotely near the point where every problem will have been solved. The focus of software development will very likely shift, as it has always done. But software engineers will continue to be needed as long as software is needed.
Until we solve all "problems", we need software.
At least for some of those problems, part of the solution will require software. The whole no-code thing is BS in my opinion, because it is just another layer of abstraction.
Maybe we make "making software" easier and more people can do it, but solving a problem will not go away.
The low-code/no-code trumpet again. I work with it (Salesforce) and while it can do pretty cool things, as soon as something more complex comes up, I have to write good old code. I understand these things are improving a lot, but it will take decades to get to the point where they're wiping thousands of devs out of the job market.
I have no evidence to back this up, but as a counter-point, I think that the construction of new things is always a platform upon which more new things can be made, which require people. If we reach some precipice where much more can be built with much less, then I think that not only will the diversity (breadth) of software increase, but also its complexity and power (depth). All of that still requires workers. As software becomes commoditized, its ubiquity and availability should provide a new base level on which to build. The only scenarios I can see where we lose opportunity for common growth are if we build machines that are better at innovating and making things for humans than we are, or if the power of the tools we build all ends up concentrated into a few powerful hands. The latter is why I believe FOSS is so important.
Microservices make it easy to create a big ball of software that covers as much feature space or ground as possible. That's not the end of software development. You need a lot of people to gut these microservices and reconfigure them to be part of your monolith and to wire in new workflows.
As soon as precision and performance become critical, you're back to dredging through the code, because the microservices were built on assumptions that make your ball of software perform poorly.
This push for collective action tries to make the independent WFH type irrelevant and then suck them back into the fold via 'collective bargaining' and other union tricks. I fail to see how this will play out any differently to historical unions. Blaming the current president is a basic move for bringing about your utopian collective.
SWE's have been the ones pushing for remote work for years, now here come dozens of blog posts claiming that it's our undoing. Ok sure. Trading some TC for lower housing costs, lower COL, less traffic, and a better quality of life outside of Silicon Valley seems like a decent trade.
As the author of a no-code tool I disagree with this article. I think that we will need even more developers to manage the apps churned out by no-code tools. Tools may go up the abstraction ladder, but developers will always need to debug ALL layers of the abstraction stack.
Salaries will go down not because of less collective action (there was not any really to speak of with high-salary people) but because now the (remote) applicant pool has many people living in relatively low-cost areas, many even in countries/localities where the cost of living may be 70 or 80% less than the big US cities.
In such a situation, it is very easy to question the idea of hiring _anyone_ who lives in a big city and therefore requires a large salary.
As far as software eating software, it seems obvious to me that it is occurring, but may not be obvious to everyone because as that has been happening, we have also seen a rapid increase in software use in business and industry. And we have seen software projects become more mainstream, where in some places, almost every random person seems to think they need to build their own mobile app or web-based service, for example.
I think we are not yet close to the end of the level of automation of software development. Here is one thing that may be on the horizon: chat-, voice-, and UI-mockup-enabled software development bots that take advantage of large knowledge bases of application templates and components, and use deep learning to smartly combine and configure them according to natural language instructions and intuitive UI interactions. This could also involve generating design assets with AI based on combining large libraries of starting points and styles.
I also personally believe that more advanced AI for programming is going to sneak up on everyone, including programmers, faster than most expect. Especially when you think of tackling specific types of software at a time. What the deep learning people are working on is making neural networks learn better-factored and more accurate models of the world. I believe this is probably feasible to do using mostly existing algorithms, by building up models (and networks) gradually with a progressive curriculum approach. Using that type of capability and very large datasets of specifications, design interactions, sample assets, templates, and final configurations, it will be possible to train AI to build many different types of software.
I agree with the author that software eats software. It took a sophisticated, expensive team just to launch an ecommerce line a couple decades ago; now you need no engineers at all. This trend will continue.
I disagree that this will result in net decrease for engineer demand in the foreseeable future. There appears to be no evidence for this at this time. Investor demand for new software products is currently endless.
I agree that a trend towards remote work could be a downward pressure on engineer salaries and power, particularly for silicon valley workers. On the other hand, it could instead be an upward pressure for all us engineers outside of silicon valley. It could be both, or maybe the remote trend will fizzle out in a year. Time will tell.
At some point in the mid-to-distant future, most human labor (certainly white collar labor) will potentially be automated away, with remaining blue collar labor not far behind (the physical interface, e.g. general purpose robots that are superior to humans in non-static environments will probably come last).
Until that point, the demand for a technically literate workforce will continue to grow, simply because the number of potential areas of application will also continue to grow. With every new digital encroachment on our analogue lives, we find new business verticals that previously simply didn't exist. I don't see that trend abating anytime soon. Put differently, it's not like software development ends with webapps. For every domain that gets commoditized, there will almost guaranteed be new ones in need of hands-on skilled labor. There will of course be some developers who aren't as effectively able to reposition themselves for new verticals (we've all probably heard some anecdote of a C programmer who simply couldn't grok OOP, etc.), but by and large it's difficult for me to see the demand for human-generated bespoke software development dissipating until we get close to that mythical land of general AI.
Now, on the payscale side I do agree with the author that the push towards remote work will definitely have an impact on many of our salaries that are currently tightly bound to geography. That seems fairly evident, to be honest. I live in northern Europe where our salaries are quite good relative to other parts of the continent, but you can still find high quality engineers for <$100k USD - many of whom are on par with FAANG developers being paid >$200k in SFBA or Seattle and the like. The further east / south you go, the lower the current salaries are. It seems logical that decentralization of work will lead to a flattening of the income curve - raising some incomes, lowering others - as we increasingly all enter the same digital playing field. That said, wages - like housing prices - are to use econ parlance 'sticky downwards' so if I were still a SV developer I wouldn't start losing sleep yet.
Most of the comments so far are about his take on there potentially being less need for people to write software in the future. The more interesting part of the article IMO is the bit about salaries:
> Most people would like to believe salaries are determined by a cost-plus model, where you get a tiny bit less than the value you add to the company. However, in reality, they are really determined by the competition. Companies are forced to pay as much as possible to keep the talent for leaving. In a competitive labor market, this is often a good thing for the employees.
He goes on to concisely explain why people get paid so much for working in the Bay Area, and why they won’t get paid like that elsewhere.
I think this is a keen insight, and should be sobering to the large cohort of HN that is clamoring for all the tech giants to go full remote.
As we saw already this week, when Facebook decides they don’t actually need you to live near Menlo Park anymore, that also means they don’t need to pay you Menlo Park wages any longer.
I think there’s actually a significant risk to Silicon Valley here, which is the following potential vicious cycle:
- Big Tech decides all/mostly remote is the future.
- Big Tech mostly stops hiring in the Bay Area because why pay more for talent when they no longer “have to.”
- The COVID reset layoffs continue, resulting in a lot of Bay Area engineers looking for work at the same time.
- Supply and demand mean that engineers' wages start dropping as smaller companies no longer have to compete with the giants (as a hiring manager in SF I’ve already seen this start happening).
- Engineers who can no longer command crazy salaries to justify the rent start leaving the area (this is also already happening).
- Rents start to drop, and the housing market softens a little.
- Tech workers who have relatively recently bought homes get nervous and look to exit before they get underwater on million+ dollar loans.
- The housing market softens further...
At this point most of these things sound like a much needed reprieve for the insane local housing market, but the funny thing about bubbles is the way they get hotter and hotter for a long time, and then tend to pop relatively quickly. Sometimes these things can lead to vicious cycles that take quite a while to come back from, where a bear market feeds on itself.
I love working in software. I loved it before I moved to the Bay Area and quadrupled my earnings. I’ll love it even if it all comes back down to earth and is just a “normal job” again.
But I think that could really happen, and I think Big Tech embracing remote is a great way to pop the balloon in the Bay Area with the result not being a utopia where people get to make Bay Area wages and then live on the beach in Cancun, but rather they get to make Omaha wages and live in... Omaha.
I haven’t really seen this community wake up to that possibility yet, which is surprising to me considering it’s a fairly obvious conclusion to the top paying companies all opening up the floodgates of worker supply by truly embracing remote work.
I do agree with your remarks, but I wonder what it is that reinforces most commenters' denial of that new reality here. I have doubts that companies want to go the offshore routes of yore, but I do expect this WFH trend will create a lot of new competition within the IT workforce, in the USA at least.
The article really should have led with this second point, which is the most grounded in real-world conditions and most relatable to software engineers stuck in high CoL tech hubs, while the no-code first point is futuristic and the class consciousness of tech workers third point is sadly a little fanciful.
I agree with the writer. Microservices have already made it possible to commoditize many of the services that we used to re-implement over and over again in the past. We can now buy them as cloud services. But that is just the beginning. Our development work is still mostly non-business-logic effort and smart companies will figure out new ways to eliminate more and more of it.
The important point is that non-business-logic development work is never eliminated completely. But if you eliminate 50% of the work, you need only half of the previous human effort. Then the other half needs something else to do - preferably, enjoying the increased productivity as increased free time.
I wonder: instead of software developers/engineers writing the code, what if we had specification-writing engineers? Just like tests, the specifications would describe what is needed, and then we'd let the machine figure out a way to make things work given all the constraints. Very much like machine learning, I guess. Of course, those specs/tests would need to cover almost every possible scenario; today you write both the code and the tests, but in this case you'd only write the tests, and the code would be figured out by the machine.
Does this all make any sense?
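To make the idea a bit more concrete, here is a minimal sketch (TypeScript, all names invented) of what "the tests are the spec and the machine finds the code" could look like if the search space were just a handful of candidate implementations. A real system would have to generate or learn candidates rather than enumerate three by hand, and covering "almost every possible scenario" in the spec is exactly the hard part:

    // Hypothetical sketch: the "spec" is a set of executable checks, and the
    // "machine" searches a space of candidate implementations for one that
    // passes every check. All names here are invented for illustration.

    type Sort = (xs: number[]) => number[];

    // The specification-writing engineer only writes these properties.
    const spec: Array<(impl: Sort) => boolean> = [
      (impl) => {
        // The output must be ordered.
        const out = impl([3, 1, 2, 2, -5]);
        return out.every((v, i) => i === 0 || out[i - 1] <= v);
      },
      (impl) => {
        // The output must be a permutation of the input.
        const input = [4, 4, 0, 7];
        const out = impl(input);
        return out.length === input.length &&
          [...out].sort().join(",") === [...input].sort().join(",");
      },
      (impl) => impl([]).length === 0, // The empty case must work.
    ];

    // In this toy the "search space" is a fixed list of candidates; a real
    // system would synthesize them instead of listing them by hand.
    const candidates: Record<string, Sort> = {
      identity: (xs) => [...xs],                       // fails the ordering check
      reversed: (xs) => [...xs].sort((a, b) => b - a), // fails the ordering check
      sorted:   (xs) => [...xs].sort((a, b) => a - b), // satisfies the spec
    };

    for (const [name, impl] of Object.entries(candidates)) {
      if (spec.every((check) => check(impl))) {
        console.log(`"synthesized" implementation: ${name}`);
        break;
      }
    }

Even in this toy form, you can see the catch: a handful of properties badly underdetermines the program, which is why writing the spec is not obviously easier than writing the code.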
Slack is a direct successor of IRC. Those who are familiar with IRC get used to it quickly, but when it's used by people without IRC experience, the results usually suffer, because they don't understand the IRC culture and can't use it efficiently. I've heard of a company that failed badly at using Slack: only the sysadmins had the privilege of creating public channels, resulting in just a handful of generic channels, most of them flooded with noise. To reduce the noise, they banned non-essential posts to Slack, nullifying Slack as a communication tool.
> Companies are forced to pay as much as possible to keep the talent for leaving. In a competitive labor market, this is often a good thing for the employees.
In practice they conspire to rip off the talent: "Apple and Google's wage-fixing cartel involved dozens more companies, over one million employees"
Feels as though this essay was written by Ned Ludd himself. Software has been constantly commodified and yet the hurdle the market expects you to clear gets higher at the same time.
Scary and true IMO. The difference is clear in the head counts & philosophies of newer companies vs. old ones. To do anything in software, you used to have to be a software-first company. Now that that part is easier, we see many more software-enabled companies instead - the business is front and center, only a few programmers are needed, and they do a lot of software plumbing kind of work, not much software building.
It’s not like anyone has a choice in the matter. There’s a huge profit incentive to create the first ubiquitous no-code framework. Like some commenters above though, I think that no-code will only increase the aggregate demand for software, much of which will need software engineers.
I had written about my predictions for the technology trends that will make the biggest impact in 2021, and no-code is one I expect to have a definite impact.
I'd argue that as abstractions and automation improve, they will bring in more complex demands to be met, and therefore increase the demand for programmers/engineers. And the cycle continues.
Software will continue to eat the world until it creates an authoritarian utopia for humans.
“No-code” is great, and I love some products in the category (Airtable). But the idea that it is going to fundamentally transform anything is hyperbole, similar to soylent’s marketing when they were saying that nobody was going to eat food again.
Software development is a bit like fashion or music. There is no direction like forward or backwards. Good and bad are often just personal opinions. Some things go out of fashion, just to be rediscovered years later. Others are staples.
Asana with its inbox workflow will help all-remote companies. I didn't believe it at first, but after using it while onboarding fully remote, I like how it enables async work. Disclaimer: I am a recent Asana hire who onboarded remotely.
This seems hugely overblown verging on nonsense to me. Since when is it easier for your employer to surveil you outside their office than within it? There are more and better encrypted messaging tools than ever before. No-code tools are a long way off from being able to replace most software. We’ve been hearing this about the technology for at least 30 years at this point, and now it makes “most” software development irrelevant? In a lot of ways software engineering is better than it’s ever been, but there are huge swathes that don’t follow the “best practices” that newsboard technologists are known for promoting (myself included) and their software is going to need work forever.
Software development, imho, is mostly complexity management. If you believe that too, and believe the premise of this article, can you describe how those two fit together in your mind?
There's lots of different points in the article, most of them tangential at best to 'remote first'.
The remarkable thing about 'software eating itself' is that it is true, in the sense that the potential of what a small team of good people can accomplish has gone up by orders of magnitude over the decades. We stand on the shoulders of giant (code base)s, but ... at the same time the dysfunctionality seems to have increased, and not just in corporations, but also in smaller outfits. Just like the efficiency paradox observed in the infamous 'Bullshit Jobs' article, it is as if there are ubiquitous hordes of 'overhead' cadre hiding in the wings, relentlessly trying to bootstrap themselves at the slightest opportunity.
I'll give an anecdote, but it is fairly typical for many situations I encounter. This job was putting a daily webpage dashboard on some existing data and integrating an existing service that delivers new data points. There were very few uncertainties. It was scoped as a one-week effort for a team of two self-managed senior engineers, with a second week of polish and iteration. This is not what happened. First a manager was appointed for the project, whose first 'hire' was another line manager to keep track of meetings and progress. He hired a business analyst to write a spec, soon joined by two other analysts to read the spec and write more spec. Then a 'data scientist' consultant was brought on, who (rightly) ignored most of the gibberish in the analysis document but managed to instate a full enterprise cloud BI stack (hey, we're dealing with data and need to draw a chart), and instead of just reading the data from the nightly reports decided to integrate directly with the data warehouse, now upping the need for enterprise security etc. etc. Two months into the project this is still going, and the end result will still just be a chart on a webpage.
Second anecdote, this one in reverse, but shedding light on the same principle. This concerns a large organization that for years was pondering going digital with its mailing system (this org deals with thousands of pieces of physical mail coming in and thousands of paper pieces going out every day, often requiring multiple official approvals and signatures etc.). This was seen as a complex and large project, taking a year and probably running into multiple millions of euros. COVID-19 hits, and the organization is forced to go fully remote. A pragmatic team of 3.5 FTE is called in to 'just make this work ASAP', as the mail flow is vital to the org. They finish the job in 3 (hectic) days. The solution has now been running for months.
So there is this 'paradox of efficiency' in software as well. Software eats itself, but as it does it has the tendency to become morbidly obese (to stay within the metaphor).
A lot of people in the discussion disagree with the article's first point about no-code/less-code. While I do think the argument is a little fatalistic, the scoffing at the idea in the face of him bringing up evidence such as AWS and other IaaS/PaaS reducing the number of in-house server engineers that companies need, and how the proliferation of open-source libraries/frameworks has made creating software easier than ever, reminds me of economists pooh-poohing the notion that modern forms of automation will lead to unprecedented levels of job loss just because previous waves of automation didn't. Even in the face of self-driving cars and automated receptionist/call support agents that James Watt couldn't even dream of.
But whether or not no-code/less-code is inevitable, I think his first argument can still be substantiated from a different approach, at least in the near future: we're coming to the end of a tech bubble, the money will start receding, and organizations have already been realizing they don't need so many coders (and other staff) after all. Did Uber really need to build their own in-house version of Slack? Did Airbnb really need to pour so many resources towards adopting, and even contributing heavily to, React Native, only to do an about-face and have to rewrite their mobile apps in their native platforms? Did either company, like so many gig/sharing companies during this bubble, have to invest so much in hyper-growth and expanding to so many markets before they were ready? Did so many of these companies have to fall prey to Not Invented Here syndrome and waste so much time in engineering boondoggles?
To some extent, yes, it's what the investors wanted, or what corporate leadership thought would make the investors happy. And so this boom has led to massive hiring on a lot of busywork that doesn't actually have tangible economic benefit to these organizations, and may have even led to worse outcomes due to unsustainable or reckless behavior.
Going back to the article, whether the first point can be explained by inevitable no-code/low-code eating the world, or by the fact that a lot of software generated were due to the frenzy of a bubble, it still leads to the same point:
> Anyone who’s spent a few months at a sizable tech company can tell you that a lot of software seems to exist primarily because companies have hired people to write and maintain them. In some ways, the software serves not the business, but the people who have written it, and then those who need to maintain it. This is stupid, but also very, very true.
And the article's subsequent conclusion about how widespread WFH will lead to a cold, calculated culling of a lot of unnecessary or redundant personnel, aided by the emotional detachment of not having to see the faces of the people being let go, still follows.
AirBNB had fewer than a few thousand employees at their HQ; I think their engineer count was under 1000. In the scheme of things, that is not a lot of engineers.
And Uber did not write their own Slack; they set up a Mattermost instance and then put some logos on it. That's not much more complex than running an Exchange server in a corporation.
Most of the money burned by these companies was in the operations, marketing and incentive side, not on the engineering side.
My estimate: at the end of this there will be a second tech bubble, due to way more money being printed and given to bankers again, even lower interest rates, and, surprise surprise, software being one of the few things giving a return, yet again.
I'm not saying that engineering is necessarily the cost center at these companies currently experiencing major layoffs (though the high salaries probably don't help), but it does call into question how much of the software being written is actually beneficial to these businesses, if they're able to shed engineers just like that.
The reason these two companies are having major layoffs is that they are in the travel sector, and COVID is a custom-made disaster for any company in the travel sector. I think they are hurting themselves in the long run, but they are choosing long-term survival, cutting new, incomplete initiatives in favor of the more well-proven cash cows, which hurts their future.
Also, the software at many companies managing large, complicated, multibillion-dollar businesses is a bit of an iceberg. There is a lot of internal software that you are not exposed to as an end-user customer. I call it the Google search effect: why so many servers and engineers for a 2-page website?!
> we're coming to the end of a tech bubble, the money will start receding, and organizations have already been realizing they don't need so many coders (and other staff) after all.
This is an interesting point. Is this really happening though? Tech valuations pretty much bounced back up. Gut feeling says yes something is wrong with the current monetary system, and a bust cycle should be coming, but with the 0% interest rate and massive quantitative easing, maybe this can go on for longer than we think...
Even if it isn't an outright recession, it appears that VC funding is drastically pulling back, though maybe it's COVID-19-specific [0]. And the ongoing implosion of the Softbank Vision Fund seems to have had a domino effect that preceded the pandemic, with the waves of layoffs across that fund's wide portfolio in January.
It certainly feels as if some bubble is getting burst, even if it's not the entire tech industry's.
Yes, the feeling is that something is off.
But a lot of economists have been feeling this for 5 years; some even claim we never really recovered from the 2002 and 2008 busts but only snowballed the problem into the future.
That said, it very well may be that this cheap money era continues. It's very hard to determine when and what will happen.
I'd say exactly the opposite. It feels like there have been a succession of well-needed adjustments, rather than a bubble popping. On paper post-2008 has been the biggest bull run ever, but people still feel caution; the "irrational exuberance" that pumps up a bubble hasn't come yet.
The adjustments are still being made. I’d say your point would be stronger after more market corrections have been made; for example, the notoriously unsustainable and unprofitable food delivery app space is still being propped up by this crisis.
Kolmogorov complexity means programs cannot generate more information than they are provided with. However, I do think it possible someone will invent a much more efficient coding workflow that can create an assembly line sort of system. E.g. a very experienced dev can decompose a problem into a lot of subtasks and interfaces to outsource and complete in parallel. Once completed, boilerplate software automates the integration of completed tasks.
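As a rough, hypothetical illustration of that assembly-line idea (TypeScript, all names made up), the experienced dev would write only the subtask interfaces and the integration boilerplate, and each subtask could then be implemented and handed back independently:

    // Hypothetical decompose-and-integrate workflow: the senior dev defines the
    // interfaces and the wiring; the subtasks are filled in independently later.

    interface Fetcher  { fetch(id: string): Promise<string> }            // subtask A
    interface Parser   { parse(raw: string): Record<string, number> }    // subtask B
    interface Reporter { report(data: Record<string, number>): string }  // subtask C

    // The "boilerplate" integration layer only knows the interfaces, so
    // completed subtasks can be dropped in and wired up mechanically.
    async function runPipeline(f: Fetcher, p: Parser, r: Reporter, id: string): Promise<string> {
      const raw = await f.fetch(id);
      return r.report(p.parse(raw));
    }

    // Trivial stand-ins, as if they came back from three different contributors.
    const fetcher: Fetcher   = { fetch: async (id) => `{"visits_${id}": 42}` };
    const parser: Parser     = { parse: (raw) => JSON.parse(raw) };
    const reporter: Reporter = {
      report: (data) => Object.entries(data).map(([k, v]) => `${k}=${v}`).join("\n"),
    };

    runPipeline(fetcher, parser, reporter, "homepage").then(console.log); // prints "visits_homepage=42"

The value is in the decomposition itself: once the interfaces exist, the integration step really is boilerplate, which is the part that looks easiest to automate.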
To get started in software development, people no longer need to learn about memory management, concurrency, networking, how numbers are represented in computers, serialization, compression, encryption, etc. And in some cases, they do not need to know about data structures and algorithms or object oriented programming either.
If you use something like node, garbage collection will handle memory for you, the event loop will make things fast enough, you will use JSON serialization, you will use REST, every number is a float... and you get the idea. That reduced amount of knowledge will get you very far.
Of course, eventually, you will run into a problem that will require some deeper level of understanding. But by that time you will be already employed.
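For example (just an illustrative snippet using standard built-ins, not tied to any particular project), one of the first walls people hit in the "every number is a float" world is large integer IDs silently losing precision:

    // JavaScript numbers are IEEE-754 doubles, so integers above 2^53 - 1
    // cannot be represented exactly.
    console.log(Number.MAX_SAFE_INTEGER);                                     // 9007199254740991
    console.log(Number.MAX_SAFE_INTEGER + 1 === Number.MAX_SAFE_INTEGER + 2); // true - distinct IDs collide

    // A 64-bit database ID round-tripped through JSON comes back wrong:
    const body = '{"id": 9007199254740993}';
    console.log(JSON.parse(body).id); // 9007199254740992

    // Common workarounds: keep big IDs as strings in the JSON, or use BigInt.
    console.log(BigInt("9007199254740993") + 1n); // 9007199254740994n

Nothing exotic, but it is exactly the kind of problem that pushes you into the deeper layers only after you're already employed.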
This has been the reality for way longer than you assume.
For instance: in the late 90s, before the 2000s tech boom, Delphi and Visual Basic dominated business programming and most coders had no idea about any of those things. Same for Java and .NET later. On the web side, Perl/CGI was king. After that we had PHP, Java, Ruby, Python, C#.
It never really got simpler. If anything, things are more complicated today: mobile development, frontend frameworks, functional programming, Docker and Kubernetes, more complex design patterns. Just a simple "hello world" today with NPM, React, backend MVC, K8s and deployed in AWS is ten times more complicated than anything most of us ever did in the late 90s or early 2000s.
It was mostly embedded programmers having to worry about the things you mention, but most of the ones I know from the early 2000s think that our current web stacks are way more complex than what they do.