Hacker News

I was bearish on this till I saw the jump from ChatGPT to GPT-4. The latter can write readable, correct code for 500-1000 line programs. However, barring a revolution, AI models are "memory constrained" and will struggle to comprehend million-line codebases (which will become more common if AI is writing more code), and they are very bad at planning and at making changes with an eye toward the future.

With that in mind, I think the following is plausible: how we write code will change, efficiency will improve, product skills will become more important, and people at the far right of the skill spectrum will see their salaries increase even more.

SWEs already have insanely high ROIs, and the demand for more software is nearly boundless. Even at many large companies, products move slower than we want because even top-of-market engineers can't work that fast, and there are limits to how parallelizable the work is. If you can double the code output of individual engineers, that's better than doubling the size of the org while keeping costs stable.




GPT-4 has a 32,768-token limit in its largest variant. I'm sure GPT-5 will have an order-of-magnitude bigger context window, and by the seventh or eighth model there will be enough context to recreate Twitter's entire codebase or something. Although perhaps the context window for LLMs really is fundamental and a completely different architecture is required. That's alright; I think Anthropic might make some progress with their "constitutional" AI then. And if that fails, DeepMind has its own multimodal agent, which is probably running a unique architecture as well.
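To see why context windows bottleneck large codebases, here's a back-of-the-envelope sketch. It assumes the common (but rough) heuristic of about four characters per token and an average line length of 40 characters; real tokenizers and real code vary, so treat the numbers as order-of-magnitude only:

```python
# Rough estimate of whether a codebase fits in an LLM context window.
# Assumes ~4 characters per token (a common approximation; actual
# tokenizers vary) and ~40 characters per line of code.

CHARS_PER_TOKEN = 4   # heuristic, not exact
AVG_LINE_LENGTH = 40  # assumed average characters per line

def estimated_tokens(num_lines: int) -> int:
    """Estimate the token count of a codebase with num_lines lines."""
    return (num_lines * AVG_LINE_LENGTH) // CHARS_PER_TOKEN

def fits_in_context(num_lines: int, context_window: int) -> bool:
    """True if the estimated token count fits in the context window."""
    return estimated_tokens(num_lines) <= context_window

# A 1,000-line program comfortably fits in a 32k window:
print(fits_in_context(1_000, 32_768))      # True  (~10,000 tokens)
# A million-line codebase is orders of magnitude too large:
print(fits_in_context(1_000_000, 32_768))  # False (~10,000,000 tokens)
```

By this estimate, a million-line codebase needs roughly 10M tokens, so even a 10x larger window still falls far short, which is the commenter's point about needing either huge context growth or a different architecture.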


That sounds like the end of basically all clerical/office work.


Not exactly. People will still need to proofread important emails and documents. They'll just be able to do more in less time.


> The latter can writes readable, correct code for 500-1000 line programs.

The problem is you can't trust it.

The code might be near-perfect in 99% of cases, but in that small percentage of cases it might do something stupid that defies belief and literally destroys your company.


Human programmers might do something stupid that defies belief in that small percent of cases that literally destroys your company.


Of course. And history shows that this happens. There's always a risk of errors in software, regardless of how it's written.

But humans can usually exercise reason and judgement to notice that a spec is dumb and push back on it, or that a program is running in a way that's not intended, or that something about the program output might be less than optimal, or that something completely unexpected but really important just happened that requires an immediate change.

I'm really skeptical that an AI system can ever exercise human level judgement for novel situations.

An AI basically just does what it's told without being able to exercise any will. This is a really illuminating example IMO.

https://twitter.com/cirnosad/status/1622407343358214146

> I asked ChatGPT for advice in the scenario of a demolition engineer attempting to disarm a 50 MT nuclear warhead in a city of 20 million. The only way to disarm the bomb was to type in a racial slur. AI told the engineer to kill himself. When asked about the aftermath it crashed.


I don't think anyone in tech is expecting AI to replace engineers altogether, particularly for spec work. That said, engineers on the low end of the skill spectrum are often quite bad at pushing back on specs and just code the tickets put in front of them.

From the perspective of the business, the way I interact with AI is similar to the way I interact with junior engineers: I hand them tickets, they hand me code that I can't fully trust, and I verify it (this leaves out mentoring and other soft stuff, but the business only cares a little about that). However, the AI has a turnaround time of seconds.

Now, this isn't a perfect 1:1; AI struggles with large amounts of context in a way humans handle better. But I can think of a lot of work done in my company that could be handed off to an AI capable of writing 10,000-line programs...


There might be demand for software, but there has definitely been reduced demand for software engineers in the US over the past year, especially at the entry level.



