There's no doubt that AI is going to be a huge factor in coding. But we will have to change a lot of things for it to outright replace humans. I'm sure some people are already working on those changes...
For a lot of the companies out there that are just trying to get off the ground - all this talk of supporting the system long term is irrelevant outside of major system crashes or whatnot.
Stop adding features.
Make packages as self-contained as you can, so that if you really want something extra, it's a separate package you add on.
"Every problem is a people problem", and people-solutions are something that AI still doesn't solve.
I'm not saying that AI doesn't have its uses, it certainly does, but I'm trying to highlight that purely technical solutions very often aren't good solutions to a lot of real-world problems.
At the same time I've always wondered why IT problems haven't all been solved comprehensively already because so many problems seem so similar.
I can't really fit those two narratives together.
I think part of the problem is human networking limitations. I can learn to code very well and can become a core contributor on a project. The problem is my understanding doesn't scale well. If I get hit by a bus tomorrow that project could die. I could leave the project and it could flounder. Now, let's say you add 3 people as good as me to that project... the problem is the project will grow 4 times as large, leaving every one of us a critical point of failure. Any time you want to bring a new core contributor in, it's a massive undertaking of time and effort to get them up to speed.
But that all also kind of sounds a bit scary. It has to be built in a way that inspires trust.
Like having AI in the first place. I still haven't seen any example generated by GPT that is not a naive translation (perhaps between languages) of some code already in its training set in one way or another. Which is a cool thing, we get a much more intelligent (though less factually reliable) search engine for code! But we are very far from it replacing any programmer worth their salt.
But the reality is this field has multiple completely different groups of people calling themselves software engineers.
There's those who basically work at consultancies/shops that get paid to push out tons of client projects using a standardized approach and their job is to write a bunch of code. Or those that work on in-house systems that get paid to largely implement pre-scoped work.
Then there's those that are actually designing systems, improving them, iterating on them, etc. You know, actually "engineering".
The fact that coding itself is fairly easy, combined with the proliferation of bootcamps that teach coding (and now ChatGPT), has flooded the market with individuals who know nothing of good software engineering practices, let alone CS and systems fundamentals.
I expect that's the group most exposed to AI risk. Pumping out lots of shitty self-contained code is something LLMs excel at.
No one here (including the author) is under the impression that you can just ask ChatGPT to "write a Unix-like kernel" and get Linux spit out the other side.
That said, I am really curious what it would cost, as a theoretical lower bound, to pay OpenAI to write the Linux Kernel. 10 grand? 100 grand? I have no clue, but I'm glad someone wrote a tool to make it easier to find out!
Billions of dollars.
Eh, what? Why in the living hell would it cost that much!
Because the actual Linux kernel cost that much or more. Of course, most of these costs were volunteer labor or borne by someone testing out a failure on their own company's time. To greenfield an OS kernel as comprehensive as Linux, and with as much hardware support as Linux, would be one of the more expensive human endeavors ever.
But wait, why doesn't Windows cost this much?
Oh, but it did. Of course, Microsoft has pushed a huge amount of the costs off onto users and hardware development companies too.
I really don't see such a thing being far off. GPT-4 is already pretty good at writing small modules ("write a disk IO queue for my custom kernel in C"). With a little more work in allowing GPT-4 to test out code it has written and iteratively make changes, allowing it to use debuggers, benchmarks, and sanitizers, allowing it to write its own tools, and then put modules together, I think we could very soon be asking it to "write me a Unix-like kernel".
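To make that concrete, here is a minimal sketch of what such a test-and-repair loop could look like. Everything in it is an assumption for illustration: `ask_llm()` stands in for whatever model API is used, and `io_queue_test.c` is a hypothetical test harness.

```python
import subprocess

def ask_llm(prompt: str) -> str:
    """Stand-in for a GPT-4-style API call that returns C source code."""
    raise NotImplementedError  # hypothetical; wire up to your model of choice

def build_and_test(source: str) -> tuple[bool, str]:
    """Compile the candidate with sanitizers and run a (hypothetical) test harness."""
    with open("io_queue.c", "w") as f:
        f.write(source)
    for cmd in (["cc", "-fsanitize=address,undefined", "-o", "io_queue_test",
                 "io_queue.c", "io_queue_test.c"],
                ["./io_queue_test"]):
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            return False, result.stdout + result.stderr
    return True, ""

prompt = "Write a disk IO queue for my custom kernel in C."
source = ask_llm(prompt)
for _ in range(10):  # bounded number of repair rounds
    ok, errors = build_and_test(source)
    if ok:
        break
    # Feed the compiler/sanitizer/test output back so the model can fix its own code.
    source = ask_llm(f"{prompt}\nYour last attempt failed with:\n{errors}\nFix it.")
```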
It, of course, readily accepted that I was correct and it had indeed drawn a hexagon, but this time it'd be different.
This time, it'd draw a pentagon.
Certainly not compared to making a Unix kernel from scratch.
I think we already have most of the pieces in place:
* big language models that sometimes get the right answer.
* language models with the ability to write instructions for other language models (i.e. writing a project plan, then completing each item of the plan, then putting the results together; see the sketch after this list).
* language models with the ability to use tools (i.e. 'run valgrind, tell the model what it says, and then the model will modify the code to fix the valgrind error').
* language models with the ability to summarize large things to small.
* language models with the ability to review existing work and see what needs changing to meet a goal, including chucking out work that isn't right/fit for purpose.
With all these pieces, it really seems that with enough compute/budget, we are awfully close...
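A rough sketch of the "plan, then execute, then summarize" piece mentioned above, again with `ask_llm()` as a hypothetical stand-in for a model call rather than any real orchestration framework:

```python
import json

def ask_llm(prompt: str) -> str:
    """Stand-in for a model call returning plain text."""
    raise NotImplementedError  # hypothetical

# 1. One call produces a plan as a JSON array of module-sized tasks.
plan = json.loads(ask_llm(
    "Break 'write a Unix-like kernel' into a JSON array of module-sized tasks."))

# 2. Each task gets its own call; only a running summary of prior work is kept
#    in the prompt so the context window stays small.
summary, modules = "", []
for task in plan:
    modules.append(ask_llm(f"Existing work so far: {summary}\nNow implement: {task}"))
    summary = ask_llm(f"Update this summary with the new module:\n{summary}\nTask done: {task}")

# 3. A final review pass checks the assembled result against the original goal
#    and can throw out pieces that aren't fit for purpose.
review = ask_llm("Review these modules against the goal of a Unix-like kernel:\n"
                 + "\n\n".join(modules))
```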
Many choices are made at design time to make the right tradeoffs between complexity, speed, etc.
But with AI-designed things, complexity is no longer an issue as long as the AI understands it, and you no longer need to think too much about speed - just implement 100 different designs and pick the one which does best on a set of benchmarks also designed by the AI.
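As a sketch of that generate-and-select idea, with `ask_llm()` and `run_benchmark()` as hypothetical stand-ins for the model call and the AI-designed benchmark harness, and the spec purely illustrative:

```python
def ask_llm(prompt: str) -> str:
    """Stand-in for a model call returning source code."""
    raise NotImplementedError  # hypothetical

def run_benchmark(source: str) -> float:
    """Stand-in: build the candidate and run the AI-designed benchmark suite;
    returns a single score where higher is better."""
    raise NotImplementedError  # hypothetical

SPEC = "a concurrent queue with push, pop, and len"  # illustrative spec

# Generate many independent designs, score each one, keep the winner.
candidates = [
    ask_llm(f"Design #{i}: implement {SPEC}. Favor a different complexity/speed "
            "trade-off than earlier designs.")
    for i in range(100)
]
best = max(candidates, key=run_benchmark)
```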
>I really don't see such a thing being far off. GPT-4 is already pretty good at writing small modules ("write a disk IO queue for my custom kernel in C").
is completely delusional.
Rust embedded code for specific hardware seems like a regression, but it would make for a remarkably smaller codebase for specific app deployments.
That's a huuuge if, whether it can meaningfully reason about such things. GPT-4 has improved a lot over 3, but I really wouldn't call its capabilities reasoning at all. We often under-appreciate human intelligence.
Maybe I'm misunderstanding just how good GPT-4 is at making entire repos of code... or exactly how this tool functions?
However, it gets interesting when you realise that if writing the code with an LLM is dirt cheap, then 1000 iterations of writing the same code with the guidance of a skilled software engineer would still be cheap and probably faster. I can imagine a world where whole engineering teams are replaced by just one engineer with a code-generating LLM.
The majority of the cost was the large context windows being proxied back and forth to OpenAI from my local machine, rather than just the additional code with each new message. Also, I had no idea how to do what I was doing, so even if I could tell GPT-4 exactly what to build, I wasn't even sure it was possible!
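For a sense of why re-sent context dominates: if the entire conversation is proxied back on every turn, input tokens grow roughly quadratically with the number of messages. A back-of-envelope sketch, where the prices and sizes are illustrative assumptions rather than the project's actual figures:

```python
# Illustrative assumptions, not the project's real numbers.
PRICE_PER_1K_INPUT_TOKENS = 0.03  # USD, roughly GPT-4-class input pricing
TOKENS_PER_MESSAGE = 2_000        # code-heavy messages are large
N_MESSAGES = 200                  # turns in the conversation

# Each turn re-sends everything so far, so total input is a triangular sum:
# 1 + 2 + ... + N messages' worth of tokens.
total_input_tokens = TOKENS_PER_MESSAGE * N_MESSAGES * (N_MESSAGES + 1) // 2
cost = total_input_tokens / 1000 * PRICE_PER_1K_INPUT_TOKENS
print(f"~${cost:,.0f} just for re-sent context")  # ~$1,206 under these assumptions
```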
pip install git+https://...
I assume this is a major issue, because it tends to be more talky than my ex, which is a real feat.
maybe with GPT-6 or so, we will see, but not with GPT-4
“It is difficult to get a man to understand something, when his salary depends on his not understanding it.” -Upton Sinclair
"That don't make no sense!"
If you believe in the potential of chatbots to replace programmers, then the salary of every programmer depends on them not understanding it.
Here's why they seem like a waste of time to me, even for writing bland emails and comments; what say you?
People tell stories about employees at big companies whose entire job is to send a small report to someone once a week.
Even in cases like that, it's illogical to fear AI, because if it really were so simple to automate, it would've been done already.
Either the storytellers are clueless about the true nature of the job, or else there are defenses against redundancy that aren't obvious.
This hasn't been my experience. There is a TON of low hanging fruit for automating workloads which exist in almost every large company. While not all automations are easy, there's no shortage of easy work in automation. Almost every financial organization I've worked at has had some amount of manual processing of overnight batch jobs.
What you mean by "experience" is unclear to me.
I agree with you insofar as everyone with even a little experience at a large company knows there are a lot of apparently low hanging fruit possibilities for automation.
But do you mean you have seen things that seem simple to automate, or have you tried to do so and found out what happens?
There is a lot going on below the surface.
Extremely basic and low hanging fruit is all over the place.
The most technically trivial change has a litany of obstacles involving people.
Off the top of my head:
1) Finding out a process exists
2) Finding out who knows about a process
3) Being able to communicate with such a person
4) Motivating them or other stakeholders to make a change
5) Making the new process independent of specific employees
How cheap it could be would be calculated from the smallest input required and the smallest code output, while minimizing how many times something needs to be reprompted.
No one knows the minimum prompting required.
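Written out, that lower bound is little more than the formula below; every name and price in it is an illustrative assumption, and, as noted, nobody knows the real reprompt count:

```python
def lower_bound_cost(input_tokens: int, output_tokens: int, reprompts: int,
                     in_price: float = 0.03 / 1000,    # illustrative $/token
                     out_price: float = 0.06 / 1000) -> float:
    """Rough floor on API cost: every reprompt pays for its input and its output."""
    return reprompts * (input_tokens * in_price + output_tokens * out_price)

# Purely to show the shape of the calculation, not an estimate of the real number.
print(lower_bound_cost(input_tokens=5_000_000, output_tokens=40_000_000, reprompts=3))
```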
It's like asking how much it costs to recreate Google (a popular fake Upwork project). However much money you want to pour into it, you're still not getting Google.
and based on kinda conservative assumptions, AI coders start being competitive in 2028
That's true only for the HR department. Because code- and experience-wise, someone who has been programming for 5 years isn't that much more experienced than someone fresh from college. In 5 years you barely have time to get to know your editor inside out, and start to have a grasp on your first tech stack.
I consider someone to be an experienced developer one that has seen a couple frameworks and hype cycles come and go. Let's say 10 years of experience.
Also, AI has 0% productivity. Being able to churn out code without being able to reason about it in the grand scheme of things makes it worse than even a non-programmer in problem solving ability. AI has no problem-solving ability. Ignoring this issue makes the whole thing moot, because it assumes that GPT is actually AGI.
Sure, you draw the line where you feel comfortable, but in the US job market people get the senior title in 3-7 years; it's just a statement of fact rather than opinion.