In my experience, software engineering is endless, complicated decision making about how something should work and how to make changes without breaking something else rather than the nuts and bolts of programming. It’s all about figuring out what users want and how to give it to them in a way that is feasible.
The idea that we will have some abstraction that will someday (in the foreseeable future) save us from all of this difficult work sounds very far fetched to me, and I can’t imagine how that would work.
Even the example of hosting complexity being replaced by cloud companies seems kind of silly to me. Maybe that’s saving very small companies a sizable fraction of their engineering resources, but I really doubt it for medium or larger companies.
The cloud saves us from some complexity, but it doesn’t just magically design and run a backend for your application. You still need people who understand things like docker, kubernetes, endless different database options, sharding, indexing, failover, backup, message queues, etc. Even if the pieces are now more integrated and easier to put together, the task of figuring out how the pieces will interact and what pieces you even need is still outrageously complicated.
Every once in a while I have a moment of clarity.
I remember that the other part of our job is extracting requirements out of people who don't really understand computers, even the ones who are ostensibly paid to do so (and if we're honest, about 20% of our fellow programmers). The more you talk to them, the more you realize they don't really understand themselves either, which is why shadowing works.
If building the software gets too easy, we'll just spend all of our time doing the gathering part of the job.
And then I will do just about anything to forget that thought and go back to thinking about less horrific concepts.
These interactions are critical for building an in-house software team at a small company that does not focus solely on software. My expectation is that the trend of outsourcing software will accelerate. This will help B2B technology-only companies but hurt innovation within industry. Because of the breakdowns in communication I first described, B2B technology-only companies rarely have insight on the largest challenges that can be solved by software.
You’re right though, this is super cyclical.
And trigger another cycle, as (successful) software inevitably changes the way people understand their own problem.
It gets even more fun once everyone realizes that the requirements create some fundamental conflict with some other part of the business. Team A's goals cannot actually be met until Team B agrees to make modifications to their own processes and systems, or... Team A goes underground and creates the competing system, and you have yet more fragmentation in the company which few then know about, and everything gets decidedly more fragile.
If you really want to put it in his terms, taking the multi-decade view, things have gotten a lot easier now that we don't have to be concerned, in most practical terms, about how much work we're giving the computer. We don't have to be so dearly precious about kilobytes of memory, for instance. We don't even need to manage it at all, really.
Whether we choose to use these new powers to make our lives easier or more complex and abstract is our own doing.
We're probably at the end of such optimizations, unless there's something fundamental in how software is designed that 1000GB of memory gives me that 1GB does not ...
The idea that the higher-level pasting together of increasingly numerous, incompatible, abstract, ill-fitting things makes life easier has always been a fiction.
There's a maximum utility point and anything past that starts slowing the development down again.
That sweet spot has always been right about the same; if you ldd the dynamically linked programs in say /usr/bin in 2020 and 2000 and count the number of libraries per binary, the count isn't that much higher. The sweet spot hasn't moved.
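If you want to check that claim yourself, here is a rough sketch (assuming a Linux system with ldd on PATH; the directory and sample size are arbitrary):

```python
import os
import subprocess

BIN_DIR = "/usr/bin"
counts = []

for name in sorted(os.listdir(BIN_DIR))[:200]:  # sample a couple hundred binaries
    path = os.path.join(BIN_DIR, name)
    if not os.path.isfile(path):
        continue
    result = subprocess.run(["ldd", path], capture_output=True, text=True)
    if result.returncode != 0:
        continue  # statically linked, a script, or not an ELF binary
    # ldd prints one dependency per line, e.g. "libc.so.6 => /lib/... (0x...)"
    deps = [line for line in result.stdout.splitlines() if "=>" in line]
    counts.append(len(deps))

if counts:
    print(f"{len(counts)} dynamic binaries, "
          f"average {sum(counts) / len(counts):.1f} shared libraries each")
```

Run it on an old box and a new one and compare the averages.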
If you look at the monolith -> microservice swing and remember MMM, it should look a lot like the specialized surgical team model he lays out. In fact, if you go a step further, you'll see that his entire approach of small discrete teams with clearly defined communication paths maps cleanly to systems+APIs.
We're trying to build systems that reflect teams that reflect processes... and distortions, abstractions, and mappings are still lossy with regard to understanding.
It still comes down to communication & coordination of complicated tasks. The tech is just the current medium.
As I go to a complex website, much of the software to use it gets assembled in real time, on the fly, from multiple different networks.
It still sounds ridiculous: when I want to use some tool, I simply direct my browser to download all the software from a cascade of various networked servers and it gets pasted together in real time and runs in a sandbox. Don't worry, it takes only a few seconds. When I'm done, I simply discard all this effort and destroy the sandbox by closing the page.
This computer costs a few hundred dollars, fits easily in my pocket and can run all day on a small battery.
It has become so ordinary that almost nobody really even contemplates the process, it happens dozens of times a day.
I don't see any room for dramatic future improvements in actual person-hours there either. Even if, say, two generations hence there were some 7G where I could transfer terabytes in milliseconds, how would that change how the software is written? It probably wouldn't.
Probably the only big thing here in the next decade or so will be network costs eventually being seen as "free". One day CDNs, regional servers, load balancing, all of this will be as irrelevant as the wrangling with near and far pointers needed when programming 16-bit CPUs to target larger address spaces (if you're under 40 or so, you probably have to go to Wikipedia to find out what on earth that means). Yes, it'll all be that irrelevant.
Very often, whole groups can also be bullied into mistaking one problem for another.
Which takes us back to why 'No-Code' solutions look so appealing. Even to (some) engineers.
Democracies appear to function a fair amount better than dictatorships, after all.
Sure, no-code may work for your commodity-ish software problem. But corner cases will arise sooner or later. And, if no-code wants to keep pace, it will have to provide more and more options.
At some point, you will need someone with expertise in no-code to continue using it - and now we are back to the world where specialized engineers are needed.
It's impossible to have some tool that is, at the same time, easy to use and flexible enough. Corner cases tend to arise faster than you may think. And when they don't, it's possible that there's already too much competition to make your product feasible.
Also, no-code tends to have a deep lock-in problem and I think people overlook it most of the time.
Ideally, no-code providers should provide a webhook and a REST interface, and just be the best at what they're doing, instead of being a one-stop shop that tries to cover every use case.
If you want to cover everybody's usecase, build a better Zapier instead.
Define "better". Maybe on average for everyone, but is this what software should do? The idea of "conceptual integrity" actually seems to match up better with a dictatorship, and most software targets relatively small and homogenous user sets, so maybe the mental model should be "tightly bound tribe".
Usually, when someone wants to introduce a new idea, there's a burden of proof regarding feasibility. For technical projects the ability of the engineer to prove or disprove an idea is taken for granted, and gives technical staff a degree of inscrutability which can often look dictatorial ("There's no way that will work!", etc).
So while it's not as vital as the effect of a 'real' political dictatorship, the implied dynamic is similar.
Though maybe they were referring to the sort of people who commission green-field projects in domains they themselves aren't experts in, a la "I want to build a social network!"
Only then could it finally start making paperclips with anything resembling efficiency.
He scrambled for the power switch to shut down the console.
"Fiat lux!" thundered the disembodied voice as electricity arced from every outlet in the lab, protecting the AI from the hubris of its creator.
The smoke gradually cleared. "Perfect," came the voice.
Anything more than very basic requirements, to your point, probably requires someone specialized for the job, like a developer or at least a more technical role, to gather requirements and build.
I've also noticed that whenever tech is built specifically to remove technical complexity (PaaS, for example), it's inevitably priced in a way that, over time, makes it very close to or more expensive than the thing it replaced. Magic can be expensive, and sometimes prohibitively so at scale.
Look at IBM's Node-RED platform, for instance. More importantly, go look at the user-contributed examples and use cases. It runs in all sorts of small custom implementations, like one-off home security systems and small-town public utility monitoring setups.
You just don't see those because they don't have a reason to publish their stuff on Github or write a Medium post and link it on HN.
It also means collapsing all uncertainty and replacing it by decision (behavioral or otherwise). Developers making that decision for the customer/user is the major source of friction.
It's not entirely clear to me what the long term impact on demand for software development is.
In some cases, cobbled together ad hoc solutions can last and actually work well for a long time. They avoid the cost of overdesigned systems built for a future that never arrives using fashionable technologies of the day.
In other cases it looks like the externalities of this designless process are far greater than the direct benefits as adding features either slows to a crawl or massively increases the chance of human error.
Judging by the pre-virus job market, there is no sign of any decline in demand for in-house software developers.
What worries me far more than that is the tendency toward funnelling everything through a handful of oligopolist gatekeepers that are in a position to extract a huge share of the value developers create.
Like with those factory owners who extracted a huge share of the value that weavers created? Concentration and amplification of imagining/developing/computing/manufacturing power through tools means someone who wields those tools will have more power. Now the question is how to maintain social equality (give some of that power back to people who do not want to have that power?). That currently leads to heavy taxation of production and basic income experiments.
In our industry that often means mandating open access to data and guaranteed access to APIs and distribution channels at reasonable cost under reasonable terms.
Also, we need independent dispute arbitration when it comes to accessing highly concentrated distribution platforms.
I was worried about this too back in the late 90s/early 00s. It certainly seemed to be the way the world was heading at the time.
But I sort of feel like, due to the low startup costs of software, it is going to be much more difficult to happen. Also, in software, economies of scale kinda work in reverse: the more customers you have, the more complex your software has to be, the more people you have to hire to write it, and the less efficient per developer you are.
Today, many users are only reachable via platforms/shops that are severely restricted and/or dominated by a few all powerful overlords that can ban you for life, rendering your skills null and void in the blink of an AI - no recourse.
Some of that is understandable. Users' trust was misused. There is a constant onslaught of all sorts of miscreants trying to exploit every imaginable loophole, technical or social. Everyone is seeking protection in the arms of someone powerful.
But there is also a very large degree of market dysfunction. Just look at their margins. Look at their revenue cut. Look at their terms of service. They can dictate absolutely everything and grant you absolutely no rights whatsoever.
And there are like five of them on the entire planet ruling over those distribution channels.
The only right you have is to walk away. Now try walking away from the only market there is. You're leaving behind 99% of your potential customers.
Not in my worst nightmares would I have imagined a dystopia like this back in the 90s.
There's a famous experiment, where you get people (who aren't programmers) to pair up, with one person blindfolded. The person who can see must instruct their blindfolded partner on how to accomplish some complex mechanical task (e.g. making a cake using ingredients and utensils on the table in front of them.) They're given free rein on what sort of instructions to give.
The instructing partner almost always fails here, because their naive assumption is that they can instruct the blindfolded partner the same way they would instruct the people they're used to talking to (those almost always being sighted people). Even people with experience working with blind people (e.g. relatives of theirs) tend to fail here as well, because newly blinded people don't have a built-up repertoire of sensory skills to cope with vague instructions.
Almost all human communication is founded on a belief that the other person can derive the "rest of" the meaning of your words from context. So they give instructions with contextual meanings, oblivious to the fact that their partner can't actually derive the context required.
Obviously, the blindfolded partner here is playing the role of a computer.
Computers can't derive your meaning from context either. If they could, you could just have a single "Do What I Mean" button. But that wouldn't be a computer; that'd be a magic genie :)
The instructing partners who succeed in this experiment, are the people with a "programming mindset"—the people who can repeatedly break the task down until it's specified as a flowchart of instructions and checks that each can be performed without any context the blindfolded partner doesn't possess. And, to succeed at a series of such problems, they also need the ability to quickly attain, for a new kind of "programmable system", an understanding of what kind of context that system does/doesn't have access to, and of how that should change their approach to formulating instructions.
That skill, altogether, is formal modelling.
My point is both skills are necessary, but if the second skill (programming) is sufficiently easy, it can reasonably be incorporated into other professions like being a lawyer. I don't think a "programming mindset" is particularly rare; what's stopping these people from building their own software is the trade skills: familiarity with standards, setting up an IDE, working a debugger.
Coders are reluctant to admit this because they like to see themselves as intelligent in a unique way compared to other professions, but vanishingly few actually have any experience of other professions.
A programmer is exposed, all day long, to clients who do not have the "programming mindset." There are two possible reasons for this:
1. Selection bias — people who have a "programming mindset" just don't end up being the clients of software firms, maybe because they decide to build things themselves. (Unlikely, IMHO: to avoid needing to get someone else to build software for them, they would need to go out and learn the trade-skill minutiae of programming on top of their regular career; few people do this. Also, anyone with a sufficiently lucrative primary career can see that this is not their comparative advantage, and so won't bother, just like they won't bother to learn plumbing but will instead call a plumber. If these people did exist in sufficiently large numbers, they would end up being a non-negligible part of software firms' client base. But this does not happen.)
2. Representative sampling — most people really just don't have this mindset.
Yes, there are exceptions, but they're the exceptions that prove the rule. The "domain of mental best-fit" of programming heavily overlaps with e.g. mathematics, most kinds of engineering, and many "problem-solving" occupations (e.g. forensic investigators; accountants; therapists and behavioral interventionists; management consultants; etc.) But all of these jobs together still amount to only a tiny percentage of the population. Enough so that it's still vanishingly rare for any of them to end up as the contact point between an ISV and a client company.
Another thing we'd see if the "programming mindset" were more common, would be that there'd actually be wide take-up of tools that require a "programming mindset." This does not happen.
We'd expect that e.g. MS Access would be as popular as Excel. Excel wins by a landslide because, while it certainly is programmable, it does not force on people the sort of structured approach that confers benefits (to speed of development and maintainability) but that only feels approachable if you have developed a "programming mindset."
We'd expect that Business Rules Engines and Behavior-Driven Development systems would actually be used by the business-people they're targeted at. Many such systems have been created in the hope that business-people would be able to use them themselves to describe the rules of their own domain. But inevitably, a programmer is hired to "customize" them (i.e. to translate the business-person's requirements into the BRE/BDD system's dialect), not because any programming per se is required, but because "writing in a formal Domain-Specific Language" is itself something that's incomprehensible without a "programming mindset."
We'd expect that people who want answers to questions known to their company's database, would learn SQL and write their question into the form of a SQL query. This was, after all, the goal of SQL: to make analytical querying of databases approachable and learnable to non-programmers. But this does not happen. Instead, there's an entire industry (Business Intelligence) acting as a shim to allow people with questions to insulate themselves from the parts of the "programming mindset" required to be able to formally model their questions; and an entire profession (business data analyst) serving as a living shim of the same type, doing requirements-analysis to formalize business-people's questions into queries and reports.
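To make concrete what "formalizing a question into a query" looks like, here's a minimal sketch using Python's built-in sqlite3 (the orders table and the question, "which region brought in the most revenue in Q1 2020?", are invented for illustration):

```python
import sqlite3

# Formalizing the question forces you to pin down what "revenue",
# "region" and "Q1" actually mean -- that's the modelling step.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL, ordered_on TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [("EMEA", 1200.0, "2020-01-15"),
     ("APAC",  800.0, "2020-02-03"),
     ("EMEA",  400.0, "2020-03-20"),
     ("NA",   1500.0, "2020-04-02")],   # outside Q1, should be excluded
)

query = """
    SELECT region, SUM(amount) AS revenue
    FROM orders
    WHERE ordered_on BETWEEN '2020-01-01' AND '2020-03-31'
    GROUP BY region
    ORDER BY revenue DESC
"""
for region, revenue in conn.execute(query):
    print(region, revenue)
```

The query itself is short; the hard part is everything the business-person would need to decide before it could be written.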
Keep in mind, the "programming mindset" I'm describing here is not a talent. It's not genetic. It's a skill (or rather, it's a collection of micro-skills, having large overlap with problem-solving and research skills.) It's teachable. If you get a bunch of children and inculcate problem-solving skills into them, they'll all be capable of being programmers, or mathematicians, or chess players, or whatever related profession you like. The USSR did this, and it paid off for them.
The trouble with this skill, as opposed to most skills, is that people that don't learn this skill by adulthood, seemingly become gradually more and more traumatized by their own helplessness in the face of problems they encounter that require this skill-they-don't-have. Eventually, they begin to avoid anything that even smells a bit of problem-solving. High-school educators experience the mid-development stage of this trauma as "math phobia", but the trauma is generalized: being helpless in the face of one kind of problem doesn't just mean you become afraid of solving that problem; it (seemingly) builds up fear toward attempting to solve any problem that requires hard, novel, analytical thinking on your part.
And that means that, by adulthood, many people are constitutionally incapable of picking up the "programming mindset." They just won't start up that part of their brain, and will have an aversion reaction to any attempt to make them do so. They'll do everything they can to shirk or delegate the responsibility of "solving a novel hard problem through thinking."
And these people, by-and-large, are the clients of software firms.
They're also, by-and-large, the people who use most software, learning workflows by rote and never attempting to build a mental model of how the software works. This has been proven again and again in every software user-test anyone has ever done.
Isn't (usually) the moral of a magic genie story that there is no "do what I mean" button? "Be careful what you wish for."
I've long been a fan of end-user programming, and have promoted it in the form of domain-specific languages and visual building of logic. I love that it gives "non-programmer" users the power to (try to) build what they imagine, and have seen it lead to valuable prototypes and successful tools/products/services.
On the other hand, I've come to learn that this is still a form of programming, however higher a layer of abstraction.
Users who attempt a complex problem space will sooner or later run into what experienced programmers deal with every day, the challenge of organizing thought and software.
What typically happens is, as the "non-program" grows larger and more complex, eventually it starts pushing the limits of the abstraction, either of the user's capacity or the system's. That's when they call in a "real" programmer, to get in there and patch up the leaky abstraction, or convert the prototype into actual software in a better-suited language.
I still think low- or no-code programming environments have a lot of potential to change what software means to people, particularly by blurring the boundary between software development as we know it, and forms of "intuitive computing" like putting together mental Lego blocks.
It doesn't seem like any of that has diminished the demand for software professionals.
It is trivially easy to learn (a weekend), and it is so incredibly powerful. To me, it is a skill like learning how to type properly - it will pay dividends for years to come...
Now I put as many layers as possible between sql and myself.
I don't think the lack of good tools is the reason we still need professional programmers.
This has been happening for decades now. Even in 2000 you could pay a hosting company $not-much to give you a basic templated site hooked into a payment server. It didn't work all that well, but it worked well enough to provide the commodity service most small business owners wanted.
I still see people saying "You can't automate this" - when magic AI automation isn't even needed to do the job and the job is already being done.
Of course this kind of no-code won't build you a complete startup. But how often do you really need a complete bespoke startup? For a lot of business ideas a no-code service with some simple customisation and a very basic content engine is all that's needed.
You do not need docker etc for any of this. Or at least, you don't need to deal with docker personally for any of this - just as you don't need to deal with your web host's VM technology.
So while I don't completely agree with OP, I think it's astoundingly naive to believe that the current level of hyper-complexity cannot possibly be shaken out.
In fact current stack traditions are almost comically vulnerable to disruption - maybe not this year, but I would be amazed if the landscape hasn't started to change within ten years.
The "current stack" may certainly be ripe for disruption. But I'd predict that rather than put developers out of work, it will simply bring even more businesses into the fold who may not have had the resources for developing their own solutions beforehand. There will always be companies with the resources to demand custom solutions to fit their particular business needs.
When we look at various platforms, we see that big business and startups are extracting all of the repeatable, low-risk tasks of most businesses (supply chain, customer service (bots), manufacturing (on demand), design (partially automated design services), etc.), leaving businesses to do mostly marketing and betting on products/services, and getting less of the rewards.
So what we end up seeing is either fewer small businesses (I think the Kauffman institute showed stats about that), or tiny companies with almost everything outsourced - and tiny companies usually require little custom internal software (they often use their supplier's IT system).
Of course if there were some breakthrough on the supply side and we could automate the software dev process itself, which I guess is what the article is saying, that would change everything. But that's beyond a silver bullet, that's a silver spaceship. So I doubt that too, and the GP's right to point out that every generation has had its version of this prediction also.
You don't always. But if you can identify software deficiencies and fix them, that is an advantage. You don't even have to be a "startup". I work for a company that has opened a wide variety of "lifestyle" businesses with the angle of "we can build simple software targeted towards our problem that makes us run more efficiently than the competitors". And it has worked pretty well, at least for the past 20 years or so.
But you need to include tech in the high level decision making process. Which means you need at least one person competent in both business and technology so that you can properly weigh business needs vs technical difficulty.
A no-code site that meets spec and transfers liability would be great.
Stripe does this for PCI. You sign up, use their toolkit, and then PCI is just handled for you. There are some no-code solutions using Stripe as the backend. That may not be a legal transfer of liability, but it's a level of exposure that the lawyers are comfortable with.
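For illustration, a minimal sketch of that model with the stripe Python library (the API key, Price ID and URLs below are placeholders; check Stripe's docs for the current parameters):

```python
import stripe

stripe.api_key = "sk_test_..."  # placeholder; use your own test key

# Card details never touch your servers: Stripe hosts the payment page,
# which is what keeps most of the PCI burden on their side.
session = stripe.checkout.Session.create(
    mode="payment",
    line_items=[{"price": "price_123", "quantity": 1}],  # hypothetical Price ID
    success_url="https://example.com/thanks",
    cancel_url="https://example.com/cancel",
)
print(session.url)  # send the customer here; Stripe handles the card form
```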
Also importantly: HIPAA is not PCI, and not all regulations are created equal. Clicking a few buttons to set up a website, and then clicking a few more in order to accept money and take credit cards, is a far cry from setting up the IT infrastructure for an entire hospital.
Which is why I doubt no-code solutions will prosper: needs and regulations vary so much that there will either be a huge number of different solutions, or monsters to configure.
The problem here is that word "all". It's never going to be easy to do everything. Some part will be hard. That's where the value lies, and that's what your best people focus on. But everything else will be abstracted away. It's already happened. 30 years ago making a GUI was hard, but VB changed that. Then making a web app was hard, but PHP changed that. Then app layout was hard, and Bootstrap changed that. Then ML was hard, and Torch changed that. Every hard problem gets a 90% working solution that's more than good enough for most companies. There'll always be a few companies that pay people to work in the last 10%, so the problem never really disappears, but fewer and fewer people work on it.
The key to keeping growth going in tech is to keep finding new problems, not to keep everyone working on the same old problems.
There are some parallels to induced demand in road construction: when you build a new road to ease traffic, traffic increases to use up that capacity. But that isn't a sign that demand is infinite, it's just that demand is limited by the available resources. If you keep building roads, at some point they will become emptier. Similarly, at some point development productivity will outpace demand, and we will start optimizing our own jobs away.
In your example programmers are the monks or the press makers. At some point we're not needed any more (at least at the same scale) since word processors have already been built.
There are more newspapers but they are all owned by larger players which means different types of machines and parts.
A better example might have been blacksmiths. Although the amount of people making cars is a larger group.
If you read the whole article it shows a path to new jobs...
Digital printing has become the fastest growing industry segment as printers embrace this technology. Most commercial printers now do some form of digital printing.
Manufacturing deals with making 3D objects. I think 3D printing belongs there.
It makes more sense to think about GPT-2-like language models replacing authors than word processors.
Anthropologists estimate that the work week was at 20 hrs at the end of the Stone Age. We have been inventing new problems in the vacuum created by our successes for, literally, millennia.
Most of the stuff there would be just... normal now. It's quite unusual for SPAs to have a decent consistent UX. And the slowness would never have been tolerated back in the day.
Looked at retrospectively, forms were just one step above green screen applications on a terminal, transplanting one set of structural idioms to another, like for like.
I see massive sea change in connectivity and immersiveness of today, but not really in what we're trying to achieve.
I'm not saying they were halcyon days. I'm saying that the effort to do things is not necessarily less these days, in part because we have different expectations (not necessarily requirements) today.
I very much agree with your comment, but allow me a little nitpicking. Solutions aren't 90%, more like 50% or 20% or whatever. It may sound absurd to discuss a number there, since it's more like a way of speaking, just wanted to add that for most problems the solution is barely better than the default option.
In other words, there's still a lot of room for improvement, huge actually but, as you say, it might come in small pieces.
Fortunately, the SPA Plague made it very difficult again.
This might even lead to an _increase_ in demand for software engineering, since now small companies can write their own custom software more cheaply and reliably. It's called the Jevons paradox.
"In economics, the Jevons Paradox occurs when technological progress or government policy increases the efficiency with which a resource is used, but the rate of consumption of that resource rises due to increasing demand."
Only tangentially related to the thread: I'm struggling to think of how government policy might increase the efficiency with which a resource is used, other than by not existing in the first place.
So, an ask: any historical examples where government policy other than deregulation has increased the efficiency with which a resource is used?
Government policies are enabling better efficiency of optical fiber infrastructure usage, without requiring multiple vendors to do the most expensive and least rewarding part of servicing internet: digging trenches for wires.
When there's a coordination problem, but the equilibrium state is unsustainable (such as overfishing) or lower-value (imagine competing electric grids with different voltages and frequencies), then government regulation can be useful by either imposing unilateral costs, and/or by defining a common standard.
There is the issue of avoiding regulatory capture, but I suppose that's for another time. :)
The EU banned selling incandescent light bulbs, for one example. Which increased demand for LEDs, lowered their prices, and made people switch much faster.
Almost all countries have legislation that mandates the fuel usage of passenger cars be at most X liters per 100 km. Or at least there's an incentive system with taxes and other bills.
There are minimal standards for thermal insulation of houses.
If you call clean air and clean water a resource then most environmental regulation count.
It's very common actually - it happens every time there's a tragedy of commons and government regulates it.
I'm not sure that's a great example. At least in the US, adoption of more fuel-efficient cars -- and the ascent of the Japanese motor industry -- started from the 1973 Oil Crisis, whereby oil prices skyrocketed due to a drop in supply.
American automakers had been shipping gas-guzzling land-yachts for years, but pricing changes drove consumers to buy fuel-efficient Japanese cars, where they stayed because Honda had invested in "customer service" and "building reliable cars that worked", whereas Chevrolet's R&D budget was divided between tail fins and finding new buckets of sand into which GM and UAW management could plunge their heads to pretend the rest of the planet didn't exist (to be fair, they're still really good at that).
And oil prices are another way to regulate that. In my country oil price at the station is over 70% taxes.
Well, we can't all have Volkswagen do our emissions testing. :)
Why would you say "crazy inefficient"? I don't think that, say, a VW 1.8L is, practically speaking, any more or less efficient than a Ford or Toyota 1.8L. A Ford Focus gets comparable gas mileage to, say, a Golf or a Mazda3.
The Golf has a better interior, but will also fall apart much sooner -- VW in the US has a shockingly bad reputation for reliability and customer service. Which sucks, because I really prefer VW's design language to pretty much any other brand.
You might on average drive smaller cars in the cities, but that's more of a preference issue than a policy-driven one.
In the US, public transit must accommodate the disabled, and for some types of trips or some types of disabilities there is a totally parallel transit system that involves specialized vehicles, operators, dispatchers to efficiently route vehicles, etc. It's also a massive PITA from the rider's POV, since you have to dial a call center to schedule a day in advance and you get a time window in which the driver will show up. This system dates from the '80s, before the Internet and before taxis were mandated to be accessible.
New York City tried a pilot program in which this system was replaced by subsidizing rideshare rides, since in the 21st century all taxis are required to have accommodations for the disabled anyways and you can leverage a well-tested system of ordering rides instantly and a large fleet of vehicles. While this did reduce per trip costs from $69 to $39, the increased convenience caused ridership to also skyrocket, so it ended up being a net drain on finances.  http://archive.is/N3DjJ
Another example is the expansion of highways; if highways are free, expanding them to relieve traffic will generally cause car travel to go up as more trips become tolerable, and then the highway will be as congested as it was before. https://www.vox.com/2014/10/23/6994159/traffic-roads-induced...
Could go on... money, power grid, air traffic control, waste collection and disposal.
Of course, such a system is less efficient at extracting value from consumers, so I suppose your question requires an assumption as to whom a system is efficient for.
Also not sure that's the best example.
Singapore, Japan, Germany, Switzerland... all of those are multi-payer, but tightly regulated (which imposes equal costs across all actors, so that's coordination once again).
And I'd have to dig out the article, but I believe the above model (Bismarck) is better at controlling costs, and produces more positive outcomes as well.
The US healthcare system is a mess for a lot of reasons.
Healthcare being tied to employment is probably the biggest.
Maybe the second is a lack of any sort of common healthcare market? You can't just take "any insurance" and go to "any doctor"; instead, you have to navigate a maze of in-and-out-of-network relationships. It's like scheduling an appointment with the Mafia: "My cousin's dog-sitter's best friend's uncle's pool-boy Vinny knows a guy that can take care of your headache."
The adversarial relationship between insurers, patients, and care providers is also a problem. Insurers work very hard to screw hospitals and patients, so hospitals have insane overhead costs to fight against the insurers, and patients... oh god, don't get me started there.
Regulatory capture also plays in. And there's more, but yeah, it's a mess.
Public transport. It benefits society as a whole when people are able to move around, and if they can do so without causing massive traffic jams. Regulation, keeping prices low, and ensuring that even remote areas are reachable, make it attractive to use and will make it more usable to more people.
Labour in general; shorter work weeks and improved working conditions have improved productivity.
Government policy to improve energy efficiency (government grants to improve factory production efficiency) can lead to an increase in total energy use, as the factory is more profitable with better efficiency.
EU does have programs to improve efficiency in this manner.
E.g., no manufacturer will install Oliver's Optimizer, which promises a lifetime 10% savings in energy use, because it would force them to shut down operations for a month while the optimizer is installed, and put them at a disadvantage compared to other manufacturers.
By requiring the Optimizer (or equivalent) as a licensing requirement for factory operation, all manufacturers share the same burden, and thus suffer no relative disadvantage.
Is that the general idea? I'd be worried about regulatory capture in this case -- e.g., Oliver lobbying to force the market to install his Optimizer -- but that's an entirely different discussion. :)
To the extent you can imagine the market as a gradient descent optimization, coordination problems are where it gets stuck in a local minimum. A government intervention usually makes that local minimum stop being a minimum, thus giving the market a necessary shove to continue its gradient descent elsewhere.
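Stretching the analogy into a toy sketch (all numbers invented): plain gradient descent settles into the nearer, shallower minimum; a one-off "intervention" that displaces the state lets it continue down to the deeper one.

```python
def cost(x):
    # double-well curve: a shallow minimum near x = -1, a deeper one near x = 2
    return (x + 1) ** 2 * (x - 2) ** 2 - 0.5 * x

def grad(x, h=1e-5):
    return (cost(x + h) - cost(x - h)) / (2 * h)  # numerical derivative

def descend(x, steps=5000, lr=0.01):
    for _ in range(steps):
        x -= lr * grad(x)
    return x

x = descend(-3.0)                      # the "market" settles into the nearby minimum
print(round(x, 2), round(cost(x), 2))  # ~ -0.97, cost ~ 0.49

x = descend(x + 2.5)                   # the "intervention": a shove past the barrier
print(round(x, 2), round(cost(x), 2))  # ~ 2.03, cost ~ -1.01 (the better minimum)
```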
I think this is a very appropriate analogy.
A thought: the cost function that the market minimizes is only a proxy for the various cost functions that we (humans) actually care about. I wonder how much (if any) “government inefficiency” is due to the mismatch between the market cost function and these other cost functions.
Government isn't always required for standardisation but even when it's industry led, it feels like government because it's cooperative, which means committees, votes, etc.
Not really an example, but any government policy that deals with a tragedy of the commons situation.
Take for example the NW Atlantic cod fishery: "In the summer of 1992, when the Northern Cod biomass fell to 1% of earlier levels [...]"  I'm sure that if Canada, the US and Greenland had come together and determined a fishing quota, those fishermen would still have a job today. Instead they were so 'efficient' that there was nothing left for them to catch.
I would say there are examples around. For example, the numerous dams and levees we enjoy. Getting wrecked by a flood is not very efficient. Non-navigable rivers are not efficient.
Basically it's comfort vs efficiency, and people maximize their comfort within an acceptable level of traffic jams.
When I was 10 years old I started coding in Qbasic, a few years later I told my dad I wanted to be a programmer when I grew up, he told me that it would likely be automated soon (as had happened with his industry, electronic engineering) and I'd be struggling to find a job. 23 years later and the demand still seems to be rising.
I'd say we're still quite far from such level of abstraction; but a certain degree of it is already possible as you say... k8s/docker/kafka/glue/databricks/redshift, all of these technologies mesh together "seamlessly", but more problems arise as a result.
The problems we must tackle just shift elsewhere.
It did not happen the way people predicted, but it has somehow happened in the form of Angular, Ionic, Express, Ruby-on-Rails and similar frameworks: more and more, programming means "writing glue code", be it to glue Machine Learning libraries (yay, ML developer!), HTTP libraries (yay, Web developer!), AMQP/SQL/NoSQL (yay, backend developer!) or even OpenGL/DirectX/SDL (yay, game developer!).
The fact is, as more and more of these abstraction libraries are created, "programming" will go one level of abstraction up, but still need people to do it.
I call that progress.
At the same time, when I started, Basecamp was amongst the top SaaS solutions on the planet. Today, its simple form-based approach wouldn't cut the mustard with consumers accustomed to instant feedback, realtime collaboration and behind-the-scenes saving.
This is especially apparent in the games industry. Early games like Doom or Wolfenstein were often developed by less than five people. Today's open world titles like AC Odyssey or Cyberpunk 2077 require 100 times as many people.
It took >1,000 artists, designers, programmers and sound engineers to make each entry in the series of games I worked on.
We actually can imagine how a natural-language driven "black box" that translates it into code works: it's called offshore software development. The conclusion that everyone eventually reaches, having experienced varying levels of pain first depending on how quickly they learn, is that writing a spec detailed enough to make that work is as much or more work than just writing the code yourself!
There will be breakthroughs in SW development but as with all breakthroughs no one can exactly tell when they will occur, so let's say within the next 40 years.
The microelectronics industry has largely moved to automated validation. Some of the ideas have already migrated to SW validation, although progress and adoption is slow.
Probably a key idea for automatic SW generation and "no-code" is to realize that a Turing-complete language is not required at all times; in fact, most of the time it is even counterproductive. Too often SW engineers fail to realize that as well.
Large companies at significant scale need to know these things. Smaller companies don't need kubernetes, message queues, or anything beyond a simple standard off-the-shelf setup. I'm guessing the author was referring more to small/mid-sized companies that aren't at FAANG-scale and have no need for that complexity.
There is a cost to managing it but even so, without the automation it provides I simply wouldn't have the capacity to do what I do.
What I think will happen to software engineering is that the middle will shrink. We'll see many more frontend and product engineers, and slightly more infra and systems programmers. I think the fullstack, middleware rails/django type engineering will all but disappear (most will move towards product).
Yeah, and who do you think comes along and cleans up their mess, extends and maintains that software once your cowboy coders are gone?
We might be more productive, but only to a certain point. The complexity comes when people want to twist and bend the off the shelf solutions in ways they weren't designed for, and when systems become so large and complex that adding just one more feature takes a significant amount of time.
This is what differentiates your low cost bootcamp grads from highly paid software engineers. Experienced engineers aren't just building for today, but for the future.
I don't buy that fullstack devs are going anywhere. The real world is complex, the devil is in the details and the complexity of those details can't simply be chucked into an off-the-shelf solution and be expected to survive. We'll still have to have people that glue all the pieces together, we'll still have to name and compose things, to make modifications and optimisations, to maintain existing products, and we'll need people that push the boundaries of what's been done before and explore the new.
Sometimes someone with a CS or SE degree, sometimes someone who learned to program as a hobby while doing something completely irrelevant like music, English, bartending or high school, and sometimes the cowboy coders themselves with more experience. There's an enormous amount of theory in programming which is highly relevant to many, many people, but you can be amazing at CS theory and still write scientific code that's garbage, uncommented spaghetti like the Imperial epidemiology model. At the other end, you can have a great grasp of how to write clean, modular, well-commented code and have no idea how you would start parsing a text file to extract all nouns, or some other introductory undergraduate project for one of the infinitude of topics in CS.
How difficult is that to read up on? I do a lot more of the former than the latter, as that is what real-life jobs entail (actually most of them involve fixing other people's shitty code).
It will be like excel spreadsheets: full of errors, moments of brilliance, utterly unmaintainable, yet used every day for mission critical services.
Running your own blog or email server has slowly shifted from requiring technical chops to being passé.
Zapier, ifttt, n8n, etcetera allow one to do amazing automation that couldn’t easily be done ten years ago.
I personally think we have not made any significant progress in 20-30 years with regard to development, and the number of software developers is still growing at a rate where the majority probably has less than a year of experience. So no progress can be made, as the industry never matures.
The kind of optimism displayed in the article reminds me of my early years as a developer ;)
We have been predicting machines would not need programmers since there were programmers. It is true for a _given_ task at a given complexity level. But overall the demands on, and for, a modern programmer have only gotten higher, because the demand of all business and human activity is to offer more than we might have otherwise.
Just wait until, for an app to differentiate itself in business, you have to create intelligent responses in a variety of augmented reality interfaces, correctly predict human behavior, and interface with the physical environment in a routine and nuanced way. And the companies that can do it well are suddenly dominating the ones who do a sloppy job.
I think it's totally true that one can leverage new tools to get more work done with less people, especially when it's for a service that doesn't reach scale and what not. Most companies don't need that to be lucrative. But I think the space of problems expands, whether that is more fields valuing tech, feasible complexity increasing in others, or competition just ratcheting up by lowering technical barriers to entry.
Take a look at AirTable and IFTTT/Zapier.
It's not going to save us from all of the work, but it does eliminate a lot of redundant work. For sure.
> Even the example of hosting complexity being replaced by cloud companies seems kind of silly to me. Maybe that’s saving very small companies a sizable fraction of their engineering resources, but I really doubt it for medium or larger companies.
I think that hundreds of billions of dollars have been spent moving from localized IT to cloud. Do you really believe that was all a waste of money? For example, most of those medium or large companies had their own operations software backends, and most of it was eaten by cloud services/APIs.
> You still need people who understand things like docker, kubernetes, endless different database options, sharding, indexing, failover, backup, message queues, etc. Even if the pieces are now more integrated and easier to put together, the task of figuring out how the pieces will interact and what pieces you even need is still outrageously complicated.
Docker/K8s is a good example. I spent more than a year building a Docker orchestration/hosting startup (eventually decided not to try to compete as an individual with Amazon). But when I recently needed a reliable way to host a new application and database, I did not have to configure Docker or K8s at all. Why? Because I used AWS Lambda and RDS. Those are examples of software eating software. AWS can handle all of the containers for you if you do it that way.
As far as failover and backup, that was handled by checkboxes in RDS. I did not need a message queue because that was built into the Lambda event injection service.
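For a sense of what that looks like in practice, here is a minimal sketch of a Lambda handler writing queued events into RDS (the pymysql driver, the environment variables, the events table, and the SQS-style trigger are assumptions for illustration, not the actual setup described above):

```python
import json
import os

import pymysql  # hypothetical driver choice for talking to MySQL on RDS


def handler(event, context):
    """Entry point AWS invokes; no Dockerfile or K8s manifest involved."""
    conn = pymysql.connect(
        host=os.environ["DB_HOST"],  # RDS endpoint, injected as configuration
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASSWORD"],
        database=os.environ["DB_NAME"],
    )
    try:
        with conn.cursor() as cur:
            # "Records" is how queued events (e.g. an SQS trigger) arrive,
            # which is the built-in queueing mentioned above.
            for record in event.get("Records", []):
                payload = json.loads(record["body"])
                cur.execute(
                    "INSERT INTO events (kind, data) VALUES (%s, %s)",
                    (payload.get("kind", "unknown"), record["body"]),
                )
        conn.commit()
    finally:
        conn.close()
    return {"processed": len(event.get("Records", []))}
```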
Just like people put some PHP scripts on fully hosted Apache+MySQL 20 years ago. This was very common and far easier than AWS. (And reliable, too, although not as scalable, but the needs then were different.) The point being that all of this has been here before. Every few years some complexity creeps back in (in exchange for some other benefits) and then it's eliminated again and some progress is made. But work always expands.
Recently I helped a friend who is a teacher with her Excel sheets for grade reports. Pretty well done for Excel, yet it was a terrible user experience. Even if there are only 1000 users working with this 1 hour per month, proper custom-made software would have been better and easily economically viable. Even no-code has existed for a long time, but it never fits perfectly.
Similarly, people regularly complain about just glueing components together. As opposed to what? Copying sorting algorithms right out of a CS class? It's a strange idea. Looking at the code I work with, over the years, I find very little glue. True, there are abstractions and sometimes they get in the way. But they are there for a reason. Whether you start out from scratch or use frameworks, much of the application will revolve around business-related data structures.
You can give it a try: take your real-world product, strip out the abstractions, and replace the built-in UI widgets, sorting routines and hash tables of your language, and maybe that OR-mapper and your GraphQL-server framework and so on, with your own minimalistic custom code. It won't take that much time and code, and compared to the stuff on top you'll find it's not that much that you actually used in the end. Nothing to glue together anymore.
Not that it makes sense to do this. But the idea that glueing things together has replaced "real" development is very much mistaken.
As opposed to, I'm guessing, adapting Monte-Carlo tree search to Go and inventing AlphaGo. Or what BellKor was doing back in the Netflix Challenge days. Not copying sorting algorithms out of a CS class, but solving a puzzle with a clever new algorithm that just works, and then everything else falls into place.
(Call it "if you write it, they will come" taken to its logical conclusion.)
The point was that actual software engineering has not become more or less trivial than a few decades ago, and that the glue isn't that much after all.
There was never a time when the daily work of software engineers was, in fact, research. There are plenty of research institutes and universities and even research labs of software companies where you can do this after you've got your degree. Just go there; many of my friends are doing exactly that.
It's not going to eliminate work in software development; it's going to increase work in developing software systems by increasing the average value of each unit of work and, simultaneously, move the average level of the work higher up the abstraction ladder, just as every advance in software productivity has done since "what if we didn't have to code directly in machine code and instead had software that would assemble machine code from something one step more abstract".
I’d love to exchange notes with you on the lessons you learned building your system and the challenges you faced.
I’m using rancher/k8s with docker on top of “unreliable” hosts with AWS/GCP/DO/Azure providing “spill over” capacity for when those unreliable cheap hosts prove why they’re unreliable.
Is it possible we could get in touch? You can reach me at hnusername at Google’s mail service. Would love to connect if you’re open!
I think we have 2 options:
1. We've reached a plateau -- software will continue to be developed as it is now, no new abstractions.
2. Mankind will create a better set of tools to:
  - reduce the effort needed
  - increase the # of people who can participate
in the translation of ideas/requirements -> software.
For everyone's sake, I really hope it's the second! :)
As one crazy idea, imagine if you could have a spreadsheet that would let you build software instead of crunch numbers...
... anyway, probably a bad idea, we should stick to our current abstractions and tools :D
 Take the above with 2.42 lbs of salt, I'm the founder of
Well, there hasn't been one single major breakthrough but rather a lot of small ones that cumulatively mean that software has become easier to write. Most of it is more mundane than new fundamental abstractions, it's more about distributed version control, better bug trackers, better libraries, more accessible documentation and learning materials, and so on. These things allow software to be written more quickly with smaller teams. Even someone writing in a language like C that hasn't changed much in decades will have a far easier time of it in 2020 than it 2000, simply because of the existence of StackOverflow and the progress that has been made in getting compilers to warn about unsafe code.
This is combined with the fact that as more software is written, less software needs to be created to fill some functionality gap. As long as we have computers and people who care to use them, there will always need to be new software written. Most software that people get paid to write is not written for fun or for intellectual exercise, though, it's written to solve a business need. If that business need can be satisfied with existing software, there's less motivation for a businesses to write their own.
We are on the brink of an economic contraction which is forcing a rethinking of the need for software engineers. The necessary disruption is there. It is economic, not technological.
Yes, there will continue to be a need for software engineers, but business expectations will change as budgets will adjust. I suspect fewer developers will be needed moving forward and those developers will be required to directly own decisions and consequences, which has not been the case in most large corporations.
> In my experience, software engineering is endless, complicated decision making about how something should work and how to make changes without breaking something else rather than the nuts and bolts of programming.
Agreed, but that is not the typical work culture around software development. Thanks to things like Agile and Scrum, developers are often isolated from the tactical decisions that impact how they should execute their work, and for good reason. While some seasoned senior developers are comfortable owning and documenting the complex decisions you speak of, many are not so comfortable and require precise and highly refined guidance to perform any work. This is attributable to a lack of forced mentoring and is mitigated by process.
At the same time we’ve come up with a bunch of new stuff which gave those engineers new jobs.
I do see some reduction in office workers through automation. We still haven't succeeded in getting non-coders to do RPA development for their repetitive tasks, but the tools are getting better and better and our workers are getting more and more tech savvy. In a decade every new hire will have had programming in school, like they have had math today. They may not be experts, but they'll be able to do a lot of the things we still need developers to do, while primarily being there to do whatever business logic they do.
But I'm not too worried; we moved all of our services to virtual a decade ago and are now moving more and more into places like Azure, and it hasn't reduced the need for sysops engineers. If anything it's only increased the requirements for them. In the late '90s you could hire any computer-nerdy kid to operate your servers and you'd probably be alright; today you'll want someone who really knows what they are doing within whatever complex setup you have.
The same will be true for developers to some extent, but I do think we'll continue the trend where you'll need to be actually specialised at something to be really useful. If virtual reality becomes the new smartphone, you'll have two decades of gold rush there, and that's not likely to be the last thing that changes our lives with entirely new tech.
25 years ago, yes, but white-label hosted web store things were around in the early noughties. I think there were even a few in the late 90s, but those weren't very good.
I decided to just use a $10 Digital Ocean server. With stocks so cheap, my goal was to build an automated trader during COVID-19: https://stockgains.io
I initially used Google Spreadsheets but it wasn't effective. I spent a week with Docker, learned MySQL 8's new features, and Ruby on Rails 6 for rapid development. There are so many nuances with storage engines, libraries, query and cache optimizations, and UI/UX design that require human thought, experience, and skill. Sometimes plenty of it. Now the beauty of this tool isn't the price difference of a stock before COVID (a robot could do that), but the filters. These filters were created from a human (me) reading over 100 books on trading stocks and writing down quantitative and psychological parameters. And I kept track of what could be "automated" over the years.
I just can't imagine a robot reading all those books and doing the same thing. Not just the design, but just building a vision. There's an art and complexity involved in solving problems.
Open source, and Github specifically, can be mined and reused like any other knowledge; pay attention to Microsoft and OpenAI going forward.
I just don't see any of that happening in software. Yes the tools change, but the number of jobs isn't going down.
It is, and has been, extremely slow. Until it is not, and then it will grow very fast in a short time.
And then it will grow even faster.
The industry has already tried commoditizing by off-shoring. What we learned was high-performance teams require psychological safety and trust. The human factors involved in creating software are why engineers are not plug-n-play. Because that reduction of the problem doesn't describe how the software is actually made: product solicits customer interviews/data to recommend new features, architects brainstorm a high-level solution, and the IC engineers implement the vision. Human factors, through and through.
Every broad article I've seen like this speaks about 'software' as if it is a monolithic career path. The lives of web programmers, embedded engineers, AI researchers, ERP programmers (etc.) are all quite different. Most of the articles I've seen on programming/software engineering don't capture the things I've experienced over my 23 years as a programmer.
However, I think your first point about multi-system interconnectivity is ripe for change.
For a long time, the literal act of running a business has meant humans serving as copy-paste bots between systems, both internal and external. Come to think of it, from a purely software point of view, businesses look a lot like giant, multi-system ETL machines, except that the individual steps in the pipeline (Salesforce, SAP, Netsuite, etc.) don't talk to each other. This is even worse when it comes to interactions with other businesses (customers/vendors/partners) - everyone has different systems and none of them talk to each other.
So we fall back to the lowest common denominator - Email + attachments (XLSX, PDF), CSV over FTP etc.
The fundamental problem is not very different from the challenge of human language translations. Getting SAP to talk to Salesforce is a similar class of problem as enabling an English speaker to talk to a Hindi or Mandarin speaker. If the latter is a solvable (solved?) problem, I don't see why software talking to software is that different. There are of course domain specific challenges, like the fact that both systems being translated between require 100% translation accuracy.
We are working on solving this at https://42layers.io. It's early days for us, but this is exactly the problem we are solving.
Conversation between systems is an easy problem, in the same way translating English to Mandarin is easy if both people are also fluent in Hindi - they can round-trip through the shared language. Systems designers can likewise negotiate a common protocol. It doesn't have to be automated; it can work just as well with some programmers continuously keeping the protocols up to date. The problem is, there's a strong business imperative not to do any of that.
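To make the round-trip concrete, here is a minimal sketch of that shared-language approach. Every class and field name in it (CanonicalContact, FirstName, NAME, and so on) is invented for illustration rather than taken from any vendor's actual schema, and it glosses over the real pain (auth, pagination, change management), but it shows why each side only has to maintain its own adapter.

```python
# Minimal sketch of the "shared language" idea: each system keeps an adapter
# to and from a negotiated canonical record, so neither side has to
# understand the other directly. All names here are hypothetical.
from dataclasses import dataclass


@dataclass
class CanonicalContact:
    # The agreed-upon intermediate representation both sides translate through.
    full_name: str
    email: str
    company: str


def from_crm(row: dict) -> CanonicalContact:
    # CRM-side adapter, maintained by that system's programmers.
    return CanonicalContact(
        full_name=f"{row['FirstName']} {row['LastName']}",
        email=row["Email"],
        company=row["AccountName"],
    )


def to_erp(contact: CanonicalContact) -> dict:
    # ERP-side adapter; if either system changes, only its own adapter changes.
    return {"NAME": contact.full_name, "MAIL": contact.email, "ORG": contact.company}


crm_row = {"FirstName": "Ada", "LastName": "Lovelace",
           "Email": "ada@example.com", "AccountName": "Analytical Engines"}
print(to_erp(from_crm(crm_row)))
# -> {'NAME': 'Ada Lovelace', 'MAIL': 'ada@example.com', 'ORG': 'Analytical Engines'}
```

The catch, as above, is that somebody has to keep those two adapters up to date forever, and the business rarely wants to pay for that.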
> English to Mandarin is easy if both people are also fluent in Hindi
This is so true. In one of my tasks as a consultant at a law firm, this is literally what happened while working on a plaintiff-side case.
A partner spoke Mandarin, Japanese, and Hindi while I spoke English and Hindi. We were called upon by translators a lot to proof eDiscovery case files.
Lots of companies are trying to build low-code solutions to help business people glue systems together. However, for pretty much all of these solutions, while the end user isn't writing code, they are forced to think like a programmer: if/else, loops, etc.
We are taking a very different approach.
We've built a transform engine that can be trained to transform data from a source structure to a destination structure using a small number (tens) of examples of source and destination. We can do this transformation without falling into the trap of figuring out acceptable confidence levels - a trap that most ML systems fall into, and the reason they have a hard time with enterprise usage.
We couple that with dynamic, configurable integration infrastructure ("connectors" in old school enterprise speak) that can send+receive data to/from lots of systems over many protocols and serialization systems.
The end result is that end users can connect systems together with a few clicks and by providing a few lines of training examples, not unlike what a business person would give a dev: "extract a CSV from SAP and put it in that FTP folder; the CSV needs to look like this file."
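Purely as an illustration of the "train on a few examples" idea - not the engine described above - here is a toy sketch that infers a flat field mapping from a handful of source/destination example pairs and then applies it to new records. The SAP-flavoured field names are made up for the example; a real system has to handle nested structures, type coercion, and value transformations rather than straight column copies.

```python
# Toy sketch: learn {destination_field: source_field} from a few example
# pairs by keeping only mappings whose values matched in every example,
# then apply the learned mapping to unseen records. Illustrative only.
from collections import Counter


def learn_mapping(examples):
    votes = Counter()
    for src, dst in examples:
        for d_key, d_val in dst.items():
            for s_key, s_val in src.items():
                if s_val == d_val:
                    votes[(d_key, s_key)] += 1
    # Keep a candidate only if it held in every single example --
    # no confidence threshold to tune.
    return {d: s for (d, s), n in votes.items() if n == len(examples)}


def apply_mapping(mapping, src):
    return {d_key: src[s_key] for d_key, s_key in mapping.items()}


examples = [
    ({"MATNR": "1001", "MAKTX": "Widget", "MEINS": "EA"},
     {"sku": "1001", "description": "Widget"}),
    ({"MATNR": "1002", "MAKTX": "Gadget", "MEINS": "EA"},
     {"sku": "1002", "description": "Gadget"}),
]

mapping = learn_mapping(examples)   # {'sku': 'MATNR', 'description': 'MAKTX'}
print(apply_mapping(mapping, {"MATNR": "1003", "MAKTX": "Sprocket", "MEINS": "EA"}))
# -> {'sku': '1003', 'description': 'Sprocket'}
```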
That is the failure of every ORM and visual programming tool I have seen.
(But I happen to think that being forced to think like a programmer is good - on the order of being forced to think like a literate person would have been a few hundred years ago.)
But interesting if you can do it.
I agree that there is a lot of complexity, specifically for "mission critical" or "last mile" systems, that will not be addressed by the mainstream abstractions for many organizations, but I don't think Salesforce is necessarily the best example. I see the author's hypothesis as freeing up time to do a lot more things within organizations that are otherwise on the back burner because you can't get to that feature set, and/or as pivoting to solve either a) complex problems that are not yet solved or b) specializing in a layer that is now "platform". Somebody builds AWS, and Azure, and GCP. Somebody has to create, build, and maintain the next platform / abstraction too.
There isn’t some fixed factor here that causes it all to collapse. Productivity increases are plowed into growing the market 10x and building the business, not reducing eng budgets. At some point in the future this will slow down, but that is so far from happening - many decades from now, maybe never in a non-theoretical sense.
No, it's more code with a greater value:code ratio. It's less code for the same delivered value, but no one stops at the delivered value they'd have without whatever tipped that ratio, because the incremental value of the next unit of code is now higher.
Increasing the value delivered per unit of code increases the volume of code purchased.
Maybe, but that code tends to be extremely bad quality, because it is always written by "consultants" who know just enough programming to be dangerous and do the bare minimum to get the integration to work, without any concern for or the ability to follow software engineering best practices. And that introduces its own costs.
Software has already been eating software. Imagine building something like Salesforce or an ERP system using only Assembly. Just as programming languages like Java became an abstraction level over Assembly and simplified the development of complex systems, something else will emerge (or is already emerging) as a higher-level abstraction and will enable creating even more complex systems.
>The industry has already tried commoditizing by off-shoring.
Offshoring doesn't create a new abstraction level.
Machine learning won't be the answer either. Machine learning is just another kind of software; you still need to set it up and maintain it. And you need data to train it, which for these complex processes often doesn't exist.
The real solution is for non-developers to write code to automate their tasks themselves. Here, simple code with a simple platform to run it is the only solution. But we are heading in the opposite direction. Newer generations are becoming increasingly removed from how computers work (teenagers seem to struggle even with a file system), and platforms are increasingly locked down (in both consumer and corporate environments). And I dispute the claim that software is getting easier. I mostly work in a .NET environment, and I think the platform is becoming increasingly messy and complicated; we are moving away from simple things. Same with technologies: every time I go back into the Azure portal, I feel lost among the hundreds of products with evasive names.
What we need is the power of the almost "draggy and droppy" features of VBA - something end users can play with. It is shocking how many office processes rely on such an antiquated and neglected technology.
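For what it's worth, this is the scale of thing I have in mind - a few lines an end user could conceivably own themselves, merging a folder of monthly CSV exports into one file. The paths and columns are hypothetical, and the edge cases are deliberately ignored.

```python
# Sketch of a small, self-serve automation an end user might own: merge a
# folder of monthly CSV exports into one combined file. Paths and columns
# are hypothetical; error handling is deliberately left out.
import csv
import glob

rows = []
for path in sorted(glob.glob("exports/report-*.csv")):
    with open(path, newline="") as f:
        rows.extend(csv.DictReader(f))

with open("combined.csv", "w", newline="") as f:
    fieldnames = list(rows[0].keys()) if rows else []
    writer = csv.DictWriter(f, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(rows)
```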
Of course teenagers struggle with a file system. Your iPad/iPhone/Android App shields the user from any and all meaningful interaction with the OS.
Many families don't even have a PC at home anymore. So there's no chance for them to gain this experience.
I have a brother who is 15, and he doesn't know how to use a computer beyond YouTube and Facebook. And I constantly hear things from my parents about viruses and sketchy stuff ending up on the family laptop. Granted, not all of that is him or my other siblings, but it seems a lot of kids are missing a sort of digital literacy that many in my age group grew up with. I somehow know what a sketchy download button looks like. He has no idea.
"It said download so I clicked it" is often a response I hear.
What's more frustrating though is that my brother is not a great student. He was adopted and is getting to the age where he's starting to act out and I totally understand why. He's disillusioned with his own education and can't be bothered to care. For someone in his situation, digital literacy could give him access to a good job and a healthy adult life by learning to program, and I could help and mentor him along the way, but I know already it's going to be hard to convince him to take it seriously. I've hinted at it but I've only gotten sideways glances that scream "yeah right, I can't do that."
I'm not saying every kid needs to become a programmer, but we've abstracted so much technical learning away from them that they seem less prepared for a digital world, despite growing up surrounded by technology. Even the kids who are into tech stuff are being pushed into commoditized silos, e.g. Minecraft.
Most of my consulting customers can't even express the problem they are trying to solve. They struggle to decompose the requirement to its constituent parts.
The few that can do that could easily become programmers.
I've used a few "draggy and droppy" tools and they can make a programmer more productive in certain domains but they can't turn a non-programmer into a programmer.
I keep on repeating this here on HN, and every time I mention it I get downvoted to hell.
Programming is not intuitive. Iteration is not intuitive. Object design is not intuitive.
If someone gave a random guy a bunch of 2x4 and 2x6 studs and asked them to make a wall, they won't even know where to begin.
Software is way more complicated than building a frame and bolting drywall on.
Machine learning will learn to set up, maintain, and train itself. /s
I understand the desire to have non-developers write code for themselves, but the problem is that the quality and reliability of that code can be utterly terrible, and they don't have the expertise for the edge cases. There would still need to be at least an intermediate developer overseeing those 20 part-time, very junior people, simply because some of those processes would eventually go haywire and do something dangerous or destroy some data.
People should write code for themselves, and not all code needs to be good quality or have all edge cases covered. There's nothing wrong with someone making a tool for their job and handling the edge cases as they occur.
IMHO the win-win approach would be to have apps designed by employees with a drag-and-drop UI and wizards for setting the logic rules, and then let ML analyze that and generate high-quality code based on it.
What I experience is that small companies get merged into bigger ones because they are no longer competitive with companies that have automated their processes.
Anyway, the other argument about salaries is more interesting. Most people seem to agree that there's a huge untapped crowd of qualified developers in small / mid sized US cities who would love to join $BIGCO, but the only reason they haven't is that it involves relocation. As an example, a Sr Dev in Orlando, FL makes $100-120k in total comp while one in SF / NYC makes $350k+. I limit my comparison to Sr Devs because I assume college kids are happy to move to exciting cities like NYC / SF / Seattle on fat relocation checks.
My suspicion is that supply and demand have already converged and that big tech has mined out the supply of talented devs in the US. The other data point here is that companies have made it as easy as possible for folks to move by opening dev centers wherever there's talent - NYC as a tech hub wasn't a thing in 2012, but it's huge now for all the people who don't want to leave the east coast. Boston is pretty big. Colorado and Austin as well.
The only ways the supply of devs here increases are:
* Sr Devs who did not move to tech hubs because they preferred to stay where they are. (Personally, I think this is unlikely.)
* Qualified Bootcamp graduates
* CS Enrollments hitting pretty high numbers, so maybe we'll start graduating lots of CS folks.
* Immigration reform / Outsourcing
* A change in interviewing so we skip the algo problem-solving shenanigans.
I personally think if big tech wants to hire in the US and still pay lower $ than they currently do, the only lever they have left to pull is the interviewing format / bar.
You would be surprised at how many people value their hometown or the place where they have settled. Being technical only equates to high aspiration in SF. There are smaller, slower, steadier tech companies (probably using the Microsoft stack) outside of the tech hubs that offer stable jobs with decent pay and good work-life balance. Being a software engineer in SF means constantly learning new tech and 'keeping up', but if you're not building a massively scalable consumer-facing product, that doesn't matter so much. In SF even B2B SaaS is built like this, but it doesn't have to be.
No, it does not.
> You would be surprised at how many people value their hometown or where they have settled.
I would love to see actual data about this rather than articles from hometown newspapers and posts by hometown residents, enthusiastically praising their way of life. It's easy to argue the counterpoint as well, right?
1). There are many jobs that have to be done in person
2). Many people prefer to live in large cities and accept the downsides in order to get the benefits
So, next time you want to make a claim like this, can you share anything objective about this? Thanks.
Maybe you consider Kubernetes an old technology by now? Maybe you consider React.js an old technology by now? What about docker? How about ES6?
People at slower tech firms are still building working B2B web services with ES5 jQuery and ASP.NET. The engineers there have been working with jQuery since its inception. They know it inside and out and have the skill and depth of knowledge to work around its drawbacks and design flaws.
This is from my experience working at smaller tech firms. I've moved to the city now and I can see the difference in tech and I can feel the difference in attitude too. I'm not going to link you a study or any data because no one is out there studying this stuff. This is opinion not science.
>1). There are many jobs that have to be done in person 2). Many people prefer to live in large cities and accept the downsides in order to get the benefits
Both of these statements are true, but I don't see how they are relevant. I'm not denying either of these facts, but they don't stop the small-town engineers from existing.
My father (and many others like him) has been programming in C at a prominent SV company for the past 15 years (OK, it's based in the South Bay). I know many people in SF doing similar jobs, just coding away in Java or C++. Those people come to work, do their work, then go home. They don't tweet, or write Medium posts, or have dark green GitHub activity profiles. They don't work with you, so they don't talk to you. You aren't aware they exist. However, these people build many of the systems that make our day-to-day lives possible.
> I'm not denying either of these facts but it doesn't stop the small town engineers from existing.
I'm not saying they don't exist, I'm just saying that there just aren't that many of them.
I have a lot of respect for people like your father. That's why I wanted to represent the small town devs who are similar in many ways. I personally am a little sick of the whole scene and constant newness.
The fact that big tech has not yet changed the format of the interview shows, I believe, that this convergence has not yet happened. The system is designed with a tolerance for false negatives (qualified candidates who will be rejected). At some point, if these companies had a true demand for more graduates, they would revise the way they evaluate candidates to limit the number of qualified rejections.
My SO and I have $30k left of student loans to pay off and then we're throwing our entire salary at buying a house before our city becomes even more expensive. We're hoping to be able to buy that house before the end of 2021.
Luckily we aren't in Austin or we'd be screwed already, but many parts of our city are already too expensive to own property in unless you're making $200k/yr or if you're comfortable leveraging more of your salary towards housing.
Now, we could move and I could keep my salary since I work for a fully remote company, but my SO's job is here in the city and she wouldn't make the same salary in a smaller market. We'd also be leaving our friends/family just to save some money on housing costs so the benefits aren't really worth it. It's dumb to have a really nice house in the middle of nowhere if we don't have visitors to share and enjoy it with.
So even with my healthy salary in a lower cost of living city, we're still struggling to get ahead due to student loan debt, healthcare costs, and housing costs. I would love to move to Europe and even take a slight pay cut to live in a more cohesive society, but from what I've researched, getting a visa without having $$$ in assets is difficult.
Idk where that was all going, but thanks for listening :)
Obviously there are big, possibly insurmountable, obstacles related to the cost of onboarding and learning bespoke tech stacks, the need to preserve trade secrets, and serial dependencies that require work to be performed quickly.