Hacker News
Coding Jobs and GPT-4
45 points by bb1234 on April 13, 2023 | 55 comments
Do you believe that GPT-4 will significantly reduce the demand for coders? If not, why not? I am curious to get your sense of this. I am not a professional coder. I write code for scientific work. But I am really curious.



A few thoughts based on my experience with GPT4 so far.

1. For now, not by much. It will make us more productive, but the world is not short of projects with backlogs so huge nobody can ever drain them. Maybe some of those tickets are low value or obsolete, but a lot of them represent real bugs or ideas for improvement that the cost of software development makes prohibitively expensive to get to.

2. It may change the nature of the job a lot. It could hurt developers who are very heads-down and idea-constrained. If what you love most about coding is the thrill of finding the perfect algorithmic implementation, then, well, AI may reduce the enjoyment of the job. If what you love about coding is seeing your ideas come alive, then AI can increase your enjoyment of the job.

3. Whilst some people claim they ideate and get inspired by talking to AI, my own experience has been that GPT4 is a rather conservative and predictable sort of personality. Perhaps it's the training and maybe via the API you can ramp up the temperature and get more creativity out of it, but whenever I've asked it for ideas or tried to bounce ideas around, I tend to get milquetoast yes-man type results, or the ideas it comes up with are generic and obvious. Also with each day that passes it's getting easier to notice the lack of "AI invents/solves something new" type stories. There was the one where it came up with a word game, which was pretty cool, and we know it can invent stories. But it doesn't yet seem to be producing an abundance of new ideas for things like new features, business innovations, etc. Maybe it does for other people. Maybe it will start doing it for me soon. But for now, it doesn't seem able to do that.
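For anyone who wants to try it, this is the sort of thing I mean by ramping up the temperature via the API - a minimal sketch using the Python openai package, assuming you have GPT-4 API access (the prompt and temperature value are just illustrative):

    import openai  # pip install openai

    openai.api_key = "sk-..."  # your API key

    # Higher temperature means more randomness when sampling; the default is
    # fairly conservative, so push it up and see if the ideas get less milquetoast.
    response = openai.ChatCompletion.create(
        model="gpt-4",    # assumes GPT-4 API access
        temperature=1.2,  # 0 to 2; higher = more adventurous
        messages=[
            {"role": "user",
             "content": "Brainstorm ten unconventional features for a note-taking app."},
        ],
    )

    print(response.choices[0].message.content)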

So - programmers who just want to play code golf, be ready for that to become more of a hobby than a job. But for programmers who always wished they had more hours in the day to get through all their ideas, it will enable them to create a lot more value and that value will in turn create demand for yet more value-add on top of that. So it should be a virtuous circle, in theory.


> So - programmers who just want to play code golf, be ready for that to become more of a hobby than a job. But for programmers who always wished they had more hours in the day to get through all their ideas, it will enable them to create a lot more value and that value will in turn create demand for yet more value-add on top of that. So it should be a virtuous circle, in theory.

But to me, this also implies a vicious circle of yet more crappy, barely-good-enough, bloated, inefficient software. I wish this wasn't the case. I recognize the upside; more real-world problems will be solved. But we really need to do something about the excesses that lead to things like the old (Electron-based) Microsoft Teams.


Here's a trick I've been experimenting with lately: try making a GUI in some high-level, English-like but formal language. FXML (the UI description language of JavaFX) works well, probably because it's relatively noise-free, but XAML would probably also work.

Then ask GPT-4 to convert it to SwiftUI. It actually does it. Or try Jetpack Compose: same thing.

This leads to an obvious thought - part of why people use heavy abstractions is because the cost of native development is too high. You could pay development teams to write a WPF UI, a Cocoa UI, a GTK UI, an Android UI and a UIKit UI for your app, but even VC-funded firms flush with cash can't justify that level of extravagance. It's too hard and expensive to hire devs, so when you do, the opportunity cost of having them port stuff between endless native UI APIs just doesn't make sense. AI changes that equation completely. GPT-4 is cheap for what it does. Suddenly, auto-porting your app into 5 different native APIs simultaneously and then assigning the resulting bugs to a bug-fixing bot doesn't seem unimaginable.

Now, for better or worse, even if AI reaches the point where it can do that, it doesn't mean Teams won't be slow. People use Electron for a lot of reasons, not just cost of duplicated development (skills, libraries, Windows lacking a competitive and non-deprecated UI toolkit etc). But it's interesting to think about ideas that we instinctively write off today due to the massive cost of software developer time.


Going a bit against the grain here, but I think tools like GPT-4 and beyond will increase the value of software engineers to companies as time goes on.

Folks building software can be more ambitious with the things they create, and it can become easier to plug in another software engineer to your team if tooling can make them more predictable in their output.

I see people here saying that increased efficiency is going to reduce their team's headcount, which makes zero sense to me. I've never been on a team where there wasn't enough work to be done; it's usually a problem of too much work and needing to choose what's most important.

That being said, as someone who's been heavily using GPT-4 in my work and personal projects, I feel like the gains are minimal. It's really great when I'm exploring an API I was unfamiliar with before, but for stuff I've been doing daily for years it's really not much help at all. Even in the places where ChatGPT shines, it takes some discernment to sift through the bullshit.


I've been casually coding to assist my work for nearly 20 years now, in various fields. It's helped me a lot recently when it came to knocking the rust off as I was getting back into Python.

The value I have found from it has been quite high. I have been more confident about offering ideas, knowing I wouldn't have to spend quite as much time refreshing my skills in languages I have not used in a while.

I agree with your last point. When I first started playing with ChatGPT I thought I'd throw some of my already-fixed problems at it and see how it handled them. It didn't have the best answers (tripping over problems I also had initially), or they weren't much better than what I had already made.


As a 'professional coder', I've been very much pondering - and experimenting - with how best to use AI coding tools.

For the big, complex programs I work on (multiple C programs up to 100k LoC), it's hard to frame a question in such a way that it doesn't need to know details of the codebase. For peripheral stuff it's pretty useful as a quick way to get some cut'n'paste code ('write me code to read the load average on linux and log it to a file every second' - I know exactly how to do that, but it's a faster typist).
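For that particular prompt, the kind of throwaway code I mean looks roughly like this (a quick Python sketch rather than the C I'd normally write; the log file name is just an example):

    import time

    LOG_PATH = "loadavg.log"  # example output file

    # /proc/loadavg holds the 1, 5 and 15 minute load averages on Linux.
    with open(LOG_PATH, "a") as log:
        while True:
            with open("/proc/loadavg") as f:
                one, five, fifteen = f.read().split()[:3]
            log.write(f"{time.time():.0f} {one} {five} {fifteen}\n")
            log.flush()
            time.sleep(1)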

So my initial take is that it's a very useful new tool. As a sort of corollary, people who learn how to use it quickly and well are going to have an advantage in the short term. And even if/when everyone uses it, some people are going to be better at using it (i.e. asking the right question - the kind of thing I found myself groping for when asking it about UK company law earlier today).


> For peripheral stuff it's pretty useful as a quick way to get some cut'n'paste code ('write me code to read the load average on linux and log it to a file every second' - I know exactly how to do that, but it's a faster typist).

For public APIs that might be the killer feature; a better way to look at docs (or rather, rely less on docs), if it can look at different code bases and find snippets of code using a popular API.


It definitely works well for that stuff, although it tends to be easily confused by APIs that have evolved over time. I frequently find it giving me code that uses APIs that don't exist, and when I point that out, it apologises, tells me that API was removed in version xyz and then gives working code instead.


Yes, you could well have hit on it there (i.e. Stack Overflow has had its day).


Not for a while. They seem decent at generating scaffolding or algorithms, but for any of the projects I've done recently, the work is: "figure out what this big piece of mangled code does; take some requirements or a high-level idea that's not fully complete and probably has some contradictions; work with people to flesh those out; figure out a way to cram that in, which layer of the code it should be in, or which layers, with some implicit tribal understanding of what is allowed to change and what isn't." AI may be able to help with some of that, but it won't be a complete replacement for quite some time.

Also, the quarterly planning process is "figure out the ten things you want to get done out of the hundred in your backlog", and the quarterly execution process is "land the one thing you can get done of the ten commitments you made." So there's no lack of additional work to be done even if AI takes over some chunk of it.


"figure out what this big piece of mangled code does"

This is my use case for GPT-4. It usually does it better than most humans, including those who wrote the code and those who have maintained the codebase for years.


Building software has gotten easier and easier over time, and there have been other step changes in the past. If you understand this history, you can understand what the future might be like.

Before Stack Overflow, software research was rather difficult. You had books that acted like tomes of reference material. You had maybe some tribal knowledge internally on how to do something. But even in the early 2000s, MSDN was something you might search on a CD-ROM! So the time to get a basic Windows app up and running was probably twice as long as it would be today. I'd say there's a similar thing with GitHub, languages with package management, and libraries that do every little thing, whereas when I got started in software even the C standard library might be sketchy, and even then it was very limited.

When Stack Overflow came out, there was a similar kind of panic that we forget about: every problem would be solved on Stack Overflow, you'd just copy-paste everything from Stack Overflow, and it would get way too easy to launch an app. Yet the reality is there are more developers than ever, with higher salaries than ever, and with software in even more niches of our lives.

The current "panic" about software engineering going away is also happening at the same time as layoffs, so its hard to know how that colors things.


I don't know what it will look like, but remember that the job of a "coder" is not to "write code" but to solve business problems.

The programming profession has always demanded that developers operate at higher and higher levels of abstraction and produce more/bigger/faster. I expect LLMs to continue that trend - a tool to get more done at a bigger scale.


Exactly. Just understanding the problem space is 90% of the work. That said, LLMs can do a lot more than code...


Large language models will change the way we write code, but not make code disappear. We will always need code to execute software.

But we don't use punched cards anymore. We don't write low level machine code anymore, unless we must or we are into that. We don't have to use complex and difficult programming languages when we can do the same in a few lines of Python that abstract everything. And now we don't have to write most of the high level code as we can have a loose conversation with the machine.
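To illustrate the "few lines of Python" point: fetching a web page, which would take pages of socket and error-handling code in a low-level language, is a handful of standard-library lines (a toy example):

    from urllib.request import urlopen

    # The high-level standard library hides sockets, TLS, HTTP parsing, redirects...
    with urlopen("https://example.com") as resp:
        html = resp.read().decode("utf-8")

    print(html[:200])  # first 200 characters of the page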

I'm expecting that the demand for good coders will stay high, to use these new AIs, fix the bugs, check the outputs, and build much more ambitious projects.

However the demand for cheap coders that write simple applications will disappear and be replaced by machines. The humans will do other activities instead, in software development or not.


The correct way (I think) to think of these talking computers is as the third great User Interface paradigm:

    I    CLI
    II   GUI
    III  Human language
Computer programming just isn't that hard. (I maintain that if you can solve a Sudoku puzzle you can program a computer.) The market for programmers was based on people being unwilling to learn CLI or GUI interfaces but this consideration doesn't affect Linguistic UI systems because, of course, people learn to talk automatically as children.

Ergo, I suspect that GPT et al. will not only drastically reduce demand for coders, but also reduce demand for e.g. computer languages, frameworks, stacks, etc. What I mean is that questions like "which programming language to use?" will become obsolete.


Personally, I think no-code tools are much more of an existential threat than LLMs. Or rather, they should be. Instead, despite the prevalence of tools that seek to democratize software authorship, the demand for software engineers has yet to abate.

As for why, my best theory is that writing code isn’t the hard part, but in fact one of the easier parts of the job, much like how drafting isn’t really the hard part when it comes to engineering or architecture.


The question I ask myself: is there a large and still-unmet demand for software engineering worldwide? I.e., is the market supply-constrained by a large enough amount?

If so, I imagine engineers will be more productive with AI (and wages will go down, or at least stop growing somewhat), but the demand for software engineers will stay strong.

If not, then engineers being more productive would mean fewer engineers can meet the global demand for software engineering work and I would expect to see the demand for software engineers reduced.


Development has gotten much faster since the 2000s: better languages, better IDEs, better libraries, better design practices, better documentation, better online resources, better (faster, less buggy) target platforms. I remember first writing code in the early 2010s: Objective-C in Xcode 3, with manual memory management. Developer productivity has exploded more than most people realize.

And yet developers have always been in demand. We just have a lot more programs now, and a lot of very similar and/or niche programs: all the random libraries on npm, Rust, Haskell, etc., the various cryptocurrencies, static/dynamic webpage builders, three separate JavaScript runtimes. People start businesses and actually get funding for these libraries. And it seems like a lot of companies want almost the exact same product, some "business solution" or "cloud solution", but they have specific reasons existing solutions aren't acceptable (performance? security? some feature?), so they pay $200k+ salaries for developers to build them.

But will this always be the case? Even ignoring GPT4, we don't really need these developers who are working on niche and similar projects; we are having a harder time getting hired right now with the economic downturn. And still, it's entirely possible this growth has a limit, and GPT4 will make developers turn ideas into working products faster than people can come up with them.


GPT is good at boilerplate and is very bad at understanding and fleshing out complicated requirements, coming from N stakeholders with varying opinions and personalities. So I don't think GPT is a threat to human engineers in the "essential complexity" space.

Then, how much time do we collectively spend on boilerplate? Stackoverflow + the rich collection of open source libraries and frameworks have been doing a great job minimizing boilerplate for typical architectures. E.g., building a Flask app today vs building with Apache mod_perl in 2000 is like a >90% reduction in boilerplate.
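To make the comparison concrete: a complete Flask "app" today is a handful of lines (a minimal sketch, assuming Flask is installed), versus the Apache config, handler registration and request plumbing that mod_perl needed:

    from flask import Flask

    app = Flask(__name__)

    @app.route("/")
    def index():
        return "Hello, world"

    if __name__ == "__main__":
        app.run()  # dev server on http://127.0.0.1:5000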

So I don't think LLMs are much of a threat to engineers.


It will make it quicker, cheaper and more efficient to do a bad job and fill a codebase with boilerplate.


Anyone have experience using GPT-4 with functional languages? There are lots of examples with procedural languages where it automates much of the boilerplate, but I'm wondering how effective it is when the source is more terse and concept-dense.

I don't do a lot of functional programming myself but would prefer to. I'm guessing that the output might be one composition arbitrarily selected from the described domain, but could just as well have been composed completely differently. If it's able to output multiple distinct compositions and we can choose one that suits the team's way of thinking/working, that would be pretty cool.

What would be awesome is if it could find the shared characteristics across domains and synthesize a solution in the posed domain that was only ever used in other ones.


I wouldn't be surprised if by making code 5x cheaper to write, aggregate demand for code goes up by at least 5x, and possibly more. The history of computing is full of technologies that made it cheaper to create software (compilers, debuggers, high level languages, version control, increasingly powerful hardware, widely-available open source frameworks, etc...) and as programmers became more efficient and effective, the scope of problems they've been tasked with has kept growing. I think there is a good chance that LLM coding assistance will be a similar story.


I can see it going a couple different ways:

1) The dichotomy between frontend vs backend dev evaporates entirely. Because GPT-x can churn out code so quickly, all devs will be expected to be fullstack and the job breakdown will be more like 20-30% coding (assisted by these tools) with the rest of the time spent on higher level concerns like architecture, security (and perhaps lots of meetings...) etc.

2) The vast majority of dev jobs simply get folded into business functions. So your product managers, accountants, etc. will increasingly be expected to do prompt engineering work as part of the job. This change will probably take a while to happen, and more work needs to occur on hooking these tools up to other tools like compilers and whatnot, but I'm pretty sure startups will emerge to tackle this. Companies that aren't developing any real tech as their main product/service simply won't need many devs anymore, although they might keep a few on staff here and there just in case the others need someone to step in from time to time. The only places that will need sizable numbers of actual devs will be the ones building brand new software - think new database systems, compilers, device drivers, OS kernel work etc. - basically the really cutting-edge stuff for which original research + thinking is needed. Since these tools tend to be trained on existing knowledge out there, I'm not sure if they can be used to build brand new technologies from scratch... at least for now. But who knows what future iterations of these tools will lead to!


The impact likely begins in 2024 or 2025. I think a team of 5 may become a team of 2 or 3. In that sense, yes, I think it can have a very real impact on salaries and available opportunities.


Who knows what will happen with GPT5, but speculating about just GPT4 I think there might be a little initial reduction/reshuffling but it will then increase demand for coders substantially. Just like all previous coding productivity gains have.

The reason is fairly simple - software isn't done eating the world.

Make it easier and cheaper to write software and two things happen: 1) more problems are worth solving with software, and 2) all existing software must improve or get eaten by a competitor who uses the productivity gains to improve.


That's a good point. Reminds me of the Jevons paradox: https://en.wikipedia.org/wiki/Jevons_paradox


Prompt engineering will be interesting to follow in terms of output quantity. If you get good at asking the right questions, the system will provide you with better detail.

But that's on a technical level.

On the cognitive and reflective side, though, it will not be as interesting or as much of a threat to the job.

Writing code is 1/6 of the job. The rest is communication, debugging and testing. That's not for GPT.

It has made me a better programmer on the technical level but that was the easy part :)


I'd worry about GPT-4-like apps replacing programmers about the time that users get good at writing complete, accurate specs. I.e., never.


GPTs will augment human developers rather than replacing them outright. Software is a gas[1], so rather than having fewer developers, we'll just have more software.

[1] https://blog.codinghorror.com/software-its-a-gas/


Writing code is one thing; owning it and having responsibility for it is another. But can we build tooling for this on top of LLMs too? It won't happen overnight, but it will certainly happen gradually; I don't see why not. By owning it, I mean upgrading libraries, fixing bugs, writing tests, running tests, fixing tests, and maybe even adding features.

It's totally possible to make LLMs write actually good code that can be maintained using normal software engineering practices.

Of course there will still be a need for a human touch, such as knowing limitations of LLMs and possible workarounds and understanding business requirements. Such productivity boost may even create more jobs as products will become cheaper and faster to create, hence more new markets will be explored. The pie might also get bigger not smaller.


GPT-4 acts as a reminder that the value is not in programming but in the things you create with it.

In an ideal world, you wouldn't even need to code to achieve things that help others. Today it's required, but it's not the point of a valuable product.


Are there fewer engineers since AutoCAD was invented and they no longer needed to draw everything out by hand? Yes, but not by much, and they just face much higher expectations.

I feel like this is more about programmers realizing they have to become the 10x engineer they were told was a myth in order to survive. Which is kinda true! But what's also true is that AI tools will make them that, if they learn to use them. Now they need to know security inside and out, systems design, etc. Higher-level concepts, way above a code monkey's pay grade, will now be the expectation; the whole theory side is going to be much more important.


It's a strange question. That 'professional coder' term is such a misconception... I mean, does anyone know of any experienced dev who considers themselves a coder?


Demand for coders like React monkeys and java bloats, yes. Demand for actual software engineers with a strong penchant for system design, no.


Actually I've found the "React monkeys" you dismiss to be the people who work with product & UX on what the actual product being developed should do. The "backend people" who write API plumbing and DB tables are the ones I'd worry about.


That's a good point. However, it could also happen that a good chunk of frontend work gets folded directly into the product owner/manager role. Hard to say which way things could go but it's interesting to speculate.


Yes, while I agree AI code gen will be disruptive, I have a very hard time believing PMs or UX will soon be able to "talk to AI" and get an actual logical product out of that. The subset I've worked with certainly wouldn't. Feels like possibly the engineers will do more PM work.


I might be wrong, but I disagree with a lot of the commenters saying they're not hiring juniors anymore or that their seniors are 10x more productive. Even if juniors are just boilerplate programmers, they are still more useful than tools like GPT and always will be (my opinion). My guess is that instead of juniors they will be called coding assistants.


It will reduce the demand for sure:
- Faster learning, debugging, template code generation, optimizing, etc.
- Faster documentation, meeting notes, and all the non-core things engineers engage in.

It will also increase the opportunities:
- Small companies and individuals will learn to be 10x more efficient.
- Many, many new businesses and side hustles will pop up in a matter of days.


> will significantly reduce the demand for coders?

Define "coders".

A slightly cynical view: yes, it will reduce the demand for "writers of code in a single language and framework, following the docs and without much other understanding". No, it will not reduce the demand for proper software engineers, who understand the systems they work with inside out, from hardware to kernel to libs and whatever stack they're using, and who will become more productive and will likely be able to deliver more from their huge backlogs.

Explanation: my impression (totally an opinion, but I've heard some people roughly agreeing with me) is that in the last 10-12 years a whole lot of people entered the software development profession attracted by salary, hype, easy money (crypto?), or whatever else, maybe through a dev bootcamp or just by learning on the internet. That's not a problem per se; some of the most skilled devs I know are self-taught. But there are a number of developers who, even after many years, have only a limited understanding of how their systems work.

Python programmers who know Django but have no idea what a syscall is, or how a process works in their operating system of choice, or how the Django ORM translates those classes into SQL. They live in the IDE, somebody else takes care of the deployment after they push, and they just make changes and additions to an existing app. That's it.

Frontend programmers who know some toolkit like React or Angular with a few libraries, who can write some TypeScript, but who often have no idea about JavaScript basics, the browser, or HTTP. Again, they contribute to an existing project, adding files and changing things, but somebody else is in charge of deployment, and they wouldn't be able to set up a new project from scratch if needed.

A lot of those people still had/have good salaries, maybe not the best, but the market was tilted in their favour; if top tier talent flew towards big tech, there were positions to be filled in other sectors. But I think that even big tech wasn't immune to the coding bootcamp effect.

It seems to me that when money was cheap and there was a lot of work to be done, it made total sense to add whatever kind of help could be found on the market. Now, this could change; if software engineers can move quicker with the help of AIs, follow-the-docs-only programmers may become far less useful.

Just my 2c.


We just went through layoffs and the remaining employees were told to become more productive by utilizing GPT. So in my employer’s case the reduction is already happening - though it’s probably caused more by GPT hype rather than the GPT-4 itself.


No, a coder is responsible. A tool is not.


Yes, it will absolutely decimate demand for junior coders. My firm has basically frozen hiring, and all existing devs are expected to boost their productivity using one of the various AIs or get fired.

We expect the economy to get way worse as de-dollarization seems to be looming on the horizon, and we're preparing now.


while chatgpt and bing are useful, they absolutely do not replace the pipeline of jr devs one needs to someday have sr devs with deep domain knowledge. IMHO it's a great 'search' technology, but it shouldn't be taken for granted or assumed correct.

Also, re de-dollarization: this pops up every 5 years or so, when some autocratic regime gets tired of operating under US dollar hegemony, but I don't see any other options that don't have their own major issues. The euro? That currency is fraying at the seams from trying to stuff high- and low-performing countries (e.g. Germany and Italy) together without enough control over each country's finances. China? They've got to free-float and stop manipulating the yuan first.

Literally nobody is as trustworthy as the USD, no matter how badly Moscow, Beijing or Tehran might wish it to not be so.


This time is definitely different because it's the executives talking about it seriously, in a way that's a first for me in my current role.


> Yes, it will absolutely decimate demand for junior coders. My firm has basically frozen hiring

What pipelines were you recruiting from?

I see AI decimating "become an engineer in 10 weeks bootcamp" types of coders, but proper engineers? The demand is still exceptionally strong, especially now that Twitter and Meta released a lot of talent on the market at the same time.

> We expect the economy to get way worse as de-dollarization seems to be looming on the horizon, and preparing now.

Economists and computer scientists confirmed it: The de-dollarization will happen the same year as the year of the Linux desktop!


I hope you are right. This is the first time I've been asked to start research into de-dollarization and to factor it into our business plans.


I was afraid I'd hear stories like this. It's already pretty bad that fewer and fewer companies are willing to take on the (originally completely obligatory) burden of teaching new junior programmers. It feels like we really are doing way too little to raise the next generation in any shape or form.


>de-dollarization

American social media has been surprisingly loud in its predictions about the total, imminent collapse of the US dollar and economy since February 2022.

As a European I would like cheaper American products and services caused by a weakened US dollar. But I don't see it happening yet. The US dollar is still strong, American companies are still competitive, and the S&P500 is a lucrative ingredient in most European pension funds.


Maybe, but I think France's Macron has spooked a lot of people who do global strategy. That's the only big thing I can think of in the news that might be why I've been asked to start de-dollarization research and related business research.


Macron has no say in anything outside France. Even his closest allies, fellow EU members, have unanimously told him to STFU with that sucking up to China from what I've seen, including their biggest economy, Germany. Macron's time would be better spent dealing with the riots sweeping his country than trying to distract from them by pissing on the United States and sucking up to China. Your global strategists may not be the finest in the business if they think Macron talking shit is going to make even the slightest global difference.


Where are future devs experienced with the company's code supposed to come from if not new hires?


We all know GPT is going to improve; the question is how much and when. Some strategy advisors are saying we might not even need the existing engineers to maintain current levels of productivity.


No. GPT outputs text; it does not think, it can't negotiate trade-offs of solutions, it cannot communicate with stakeholders about risks or technical issues of current implementations, and it may not understand the context of future/other work other people are doing. An analogy that might work (apologies if this is off base): GPT might output helpful text for a grant you may be writing, but I assume scientific work involves a whole set of other activities, like proposing the project, schmoozing the Dean (is that a thing? lol), managing funding and materials, and running experiments, beyond the act of typing up a grant proposal.

Coding (physically typing a solution) is a small percentage of a professional software engineer's time. Tools already exist to generate code, to find answers (stack overflow), etc. GPT will be another tool to help developers but much like stack overflow answers, you have to do some diverse research to make sure you're not getting BS or tangential but useless information. GPT can be confidently and completely wrong.

Then there are the legal concerns of sending intellectual property to a giant void that may output that IP to other parties. It's not a concern of mine, personally, but it's a concern for the legal entity (aka corporation) that employs me.



