AI writing code will make software engineers more valuable (davnicwil.com)
43 points by davnicwil on July 23, 2020 | 47 comments



I agree with the sentiment that they only augment our capabilities, but then we end up with another problem, which is that the people who write the software are not fully aware of how it all works. Relevant to my argument is Jonathan Blow's "Preventing the Collapse of Civilization" talk, in which he discusses the disappearance of knowledge as generations pass: https://www.youtube.com/watch?v=pW-SOdj4Kkk&t=3407s


I think that's unavoidable. There is always a level at which your understanding of your tools gets fuzzy and incomplete. Do you understand everything about what your compiler is doing? Your OS? Your cloud provider? The hardware any of them are running on? The majority of us are standing on the shoulders of the few who do understand those things and provide them to us to use.

And if you mean that we would not understand what the software we write is doing to produce the result it does, aren't we already there with machine learning?


On the other hand, the better you understand the compiler/OS/hardware, the better software you'll be able to write. Just like medical doctors could theoretically do their job without a deep knowledge of, say, organic chemistry, I can imagine a near future where software development means tweaking the inputs to GPT-3 (or some other AI) based on a deep knowledge of the layers beneath it: sort of a "computer doctor".


Cargo-cult programming is already happening at a not-insignificant scale due to the vast amount of sample code, tutorials, and Q&A online. There could be lots of nuances in that snippet you just copied from an SO comment, which "just works". GPT-3-based tools could help you generate a working (or barely working) sample fairly easily. But the decision to just ship that code as is, or to understand it and tune it for your specific needs, still mostly depends on the developer.


> the people who write the software are not fully aware of how it all works.

Is this much different from now? I have no idea how most of the libraries I use are implemented.


I've found that I typically don't need to know how it works if my use case is common and the library is well documented, but when either of those doesn't hold it can be very helpful to read through the implementation in the source to understand how best to implement my use of it. But I think the black box would be less of an issue when interfacing with documented third-party libraries than with internally developed services and libraries, particularly in smaller orgs. If Team A 'generated' Service A and Team B needs to integrate it into their 'generated' Service B, it seems that could get messy and would be tough to test or troubleshoot. Possibly an additional AI tool specifically for compatibility and integration could solve that problem.


You don't know how other people's libraries work, and that's fine.

I think in this case the argument is that the author of the library also doesn't know how it works. Which means they can't fix it.


In addition to this, if the source code is available you could potentially take a peek and generally understand what's going on. But I can imagine that we could have a generation of developers who know little about the nuts and bolts that underpin how their software works. Perhaps this is already the case in certain domains. E.g. you could be working in a Jupyter notebook and be effective without being aware of what's happening behind the scenes. I think this is qualitatively different, as in this example you could be working at such a high level of abstraction that the nuts and bolts are not something you'd even be aware of. Whereas if you're writing a Java program and you bring in some third-party libraries, you could potentially look up that library's source. But more importantly, you're still relatively close to the metal.


"since code is just text" . . .

No. It really isn't. I haven't gotten to play around with GPT-3 yet and I am sure it is very advanced, but code is extremely fragile in a way human language is not. I only say this as someone who started a company trying to use AI to generate code and banged my head against the wall until credit limits and negative bank balances forced me to quit.

I estimated that clearing the technical hurdles between us and code good enough for a product someone would pay for would be worth about seven PhD theses.


"code is extremely fragile in a way human language is not"

Very well put. Change a single character in a working complex program and it may start doing something completely different, or, much worse, subtly different.
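A minimal illustration in Python (a made-up example, just to make the point concrete): delete one character from a comparison and you get no error at all, only a quietly wrong answer.

    def sum_to_n(n):
        # Sum the integers 1..n inclusive.
        total, i = 0, 1
        while i <= n:
            total += i
            i += 1
        return total

    def sum_to_n_subtle(n):
        # Identical, except '<=' became '<': one character removed.
        total, i = 0, 1
        while i < n:
            total += i
            i += 1
        return total

    print(sum_to_n(10))         # 55
    print(sum_to_n_subtle(10))  # 45 -- no crash, no warning, just wrong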


Another thing we all learned the hard way when people tried "model-driven development" (that is, code that built other code from UML diagrams and flowcharts) is that writing code the first time is one thing. Modifying it later is something else entirely.


In fairness, this does apply to natural language too.

"Let's eat, grandma!"

"Let's eat grandma!"

The difference is that we have no notion of getting people to blindly and literally follow instructions generated by AI. People in the execution loop create an implicit layer of sanity-checking, plus language is inherently ambiguous and the reader will tend to interpret things in sensible ways even if the writer didn't fully understand.


Isn't that because compilers aren't written to cope with variations, though? That rigour is necessary because humans can't deal with ambiguity. A compiler written using AI could happily understand what 'int', 'itn', 'it', 'integer', 'IntyMcintyFace', and every conceivable variation mean, and still compile them all to the same machine code. Humans don't want that in a language because it makes it hard to use. AI doesn't care.
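For what it's worth, a crude deterministic sketch of that idea in Python (fuzzy keyword matching only; the keyword list and cutoff are invented, and a trained model could presumably go much further):

    import difflib

    KNOWN_KEYWORDS = ["int", "integer", "float", "double", "char", "bool"]

    def normalize_keyword(token):
        # Map a possibly misspelled token to the closest known keyword,
        # falling back to the token unchanged if nothing is close enough.
        matches = difflib.get_close_matches(token, KNOWN_KEYWORDS, n=1, cutoff=0.5)
        return matches[0] if matches else token

    for token in ["int", "itn", "integer", "intejer"]:
        print(token, "->", normalize_keyword(token))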


I disagree with this. I think humans excel at ambiguity (they also excel at getting the intended meaning wrong, of course). Computers on the other hand take instructions literally. You could train them to probabilistically guess what the misspelling means, but whether they'll be better than a human remains to be seen (I personally doubt it but this can be tested).

What irks me about the assertion that "code is text" is that it's false. Code has a textual representation, which some people (not me!) argue is not even the best one; what's clear is that text is just a representation, not the only one, and not directly the code itself. To have an AI learn to "type" code as a string of words and characters seems obtuse if the goal is to have AI-generated software. AI could operate at a different level; why bother with typing characters? It seems to me the wrong level of abstraction, akin to designing a robot hand and driving it with an AI to physically use a keyboard as a way to write code.
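A tiny standard-library example of that point in Python (nothing GPT-specific): the interpreter itself treats the text as just a surface form and immediately turns it into a tree.

    import ast

    source = "total = price * (1 + tax_rate)"
    tree = ast.parse(source)

    # The same program as structured data rather than a string of characters.
    print(ast.dump(tree))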


> To have an AI learn to "type" code as a string of words and characters seems obtuse if the goal is to have AI generated software.

It sure does seem obtuse. An AI-generated computer architecture sounds more reasonable. Why stop at an AST or bytecode?


>AI could operate at a different level, why bother with typing characters?

Because if it does something wrong, you have to be able to find out what it is.


Actually I think this illustrates what is wrong with the idea of AI-generated code.

If you feel uneasy about AI-generated binary code ("I want to be able to debug it if something goes wrong!") you should feel equally uneasy about AI-generated high-level language. The chances that it'll be broken in subtle ways are likely to be very similar, and I don't see good reason to believe that debugging AI-generated Haskell is going to be that much easier than debugging AI-generated executables.


You can generate human-readable data out of what the AI generates. As an input representation, text made of words doesn't seem optimal for an AI.


You can just decompile whatever it is the AI writes.
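Disassembly rather than true decompilation, but a rough Python sketch of the idea (the function here is just a stand-in for whatever the AI emitted):

    import dis

    def generated_function(x):
        # Pretend this body came out of a code-generating model.
        return x * 2 + 1

    # Turn the compiled bytecode back into something a human can inspect.
    dis.dis(generated_function)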


Yeah, keep hearing about how AI will make programmers obsolete. Good luck teaching an AI how to interpret the product managers' super vague requests and understand the context in one sentence.


> Yeah, keep hearing about how AI will make programmers obsolete. Good luck teaching an AI how to interpret the product managers' super vague requests and understand the context in one sentence.

This is the point the article was trying to make! Coding is not the same thing as understanding requests and translating them into software. Someone who is doing that (e.g. a software engineer) will be able to make use of a bot that can "code".


For what it’s worth, people said the same thing about Chess, then Go, then Starcraft. In these battles, AI seems to win with enough time.


These are all games with rules and absolute information. Product managers who can't express themselves are a different matter.

Making an AI that can generate code from requirements is probably difficult, but manageable. Making an AI that asks the right questions, gets stakeholders to agree on something reasonable, and creates solid specifications from that is probably a long way off.


The absolute information requirement got dropped when the AI started playing Starcraft with fog of war. It then has to decide how and when to scout, which is pretty cool.


I only disagree in that 'fragile' isn't a strong enough word to describe what you're talking about.

Plus, correct me if I'm wrong, but AI still isn't at a problem-solving capacity yet. It's still just a fast-acting statistical machine that fits "round enough" square pegs into round holes.


Agree. It may happen soon, but GPT-3 isn't it. One of the biggest problems is that it doesn't have any idea when it is wrong. This is a big problem even in the human domain, but especially with AI.

https://lacker.io/ai/2020/07/06/giving-gpt-3-a-turing-test.h...


It can easily do copy-paste programming in interpreted languages.


Would be curious what it comes up with when writing J :p


This is an interesting nuance, and you make a great point, but I actually just meant this as literally as possible and wasn't trying to get into the details of GPT-3 specifically - I just referenced it as an introduction to the topic.

That's actually kind of the point - I think AI writing code is something that will actually add value to the skills Software Engineers have (TLDR, much more than writing code), and the article is a discussion of why that is, treating it not as a question of if technologies like this will work but of when.

Whether they can be made to work is also definitely an interesting discussion, though, and is somewhat a prerequisite for the stuff I talk about in this article.


I can think of two immediate uses for the stable version of whatever comes after GPT-3.

A lot of code I write is actually solving the problem, and if I can have some tools to make my code better then I will definitely take them. Having a tool that can suggest certain patterns that are statistically likely to be the right approach would be great. Or maybe I'm using a tricky API that, if misused, is a security vulnerability, and a tool can catch that for me because it was trained on many other uses of that API. Or maybe there's a known database of security vulnerabilities with writeups it could be trained on. There's no replacement for domain knowledge, but not everyone is a domain expert and it's better to have tools for this kind of stuff than not. I'd certainly rather have something that _usually_ suggests something correct than have no help at all.

A lot of the code I write is also not actually solving a problem, but writing boilerplate nonsense because in a software system that's complicated enough, I can't always "just" solve the problem. Maybe that means generating some data types a certain way because the library I'm using expects them to be there. Or sometimes there's a common (but verbose) pattern for using an API where I can have a bespoke template generated within the context of the other code, then put markers in my editor where it thinks I should double-check that the code is right. There's probably all kinds of other "smart templating/codegen" applications I haven't thought of.
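As a sketch of what I mean by that last part, in Python (names are purely hypothetical, and a plain template stands in for where the 'smart' model would sit):

    def generate_dataclass(name, fields):
        # Emit the boilerplate data type some library expects, plus a marker
        # reminding me to double-check the generated code before committing.
        lines = ["from dataclasses import dataclass", "", "@dataclass", f"class {name}:"]
        for field_name, field_type in fields.items():
            lines.append(f"    {field_name}: {field_type}")
        lines.append("    # TODO: double-check generated fields")
        return "\n".join(lines)

    print(generate_dataclass("User", {"user_id": "int", "email": "str"}))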

I already have tools to help me out with this in the form of autocompletion, templates, static analysis tools, and refactoring tools. But they operate in a different space. I'm excited for a more probabilistic set of new tools to help me do things better and automate some of the boilerplate I need to write today. They may not always be correct, but tools could be built around them to suggest to me when I need to double-check something before simply committing the code. All of this would make me more productive and probably allow for more developers to work in specific domains where they can't today.


> The hard part is solving the problem appropriately. Implementing that solution in code once you have it is comparatively easy.

I don't think this is true, otherwise we would see a lot more business models where development is largely outsourced for a cheap price and in-house work is mainly solution architects producing technical documents.

The theory and the code are two ends of the same problem, which is why it's hard to separate them. Having a developer who can do it end to end (translate a problem into code) is much better, even if at a premium.

As for AI and code generation, this just seems like the natural progression of high-level languages. Modern languages already do a lot of work to translate our simple human code into complex machine code.


> I don't think this is true, otherwise we would see a lot more business models where development is largely outsourced for a cheap price and in-house work is mainly solution architects producing technical documents.

But we did see a lot of that. It was exactly what all that outsourcing to India was, before it got unfashionable again.

The problem with the model wasn't that it would be too hard to produce code for the design, it was that you'd need a finished design to ship to the sweatshop. That rules out any sort of iterative development models, so you're stuck with a slow waterfall.


I can’t wait to be working on AI-created bugs tracked by AI-created JIRA tickets. /s


My own understanding of the difference between AI and AGI is that the former is just a bag of tools we build to solve problems efficiently, while the latter would be the unifying tool, and perhaps, even though we don't understand it, there's hope that some consciousness would arise from the complexity. So until we get AGI, human software developers will be very much needed to do the plumbing and maintenance of these interconnected AI systems. Once/if AGI comes to fruition, it should be able to reach a meta-understanding beyond the sum of all its parts and would be able to grow and maintain itself without supervision from us (maybe not at the beginning, but eventually). It would still need engineers to maintain the physical aspect of it, but that would no longer be programming as we know it now. And AGI is far, far away, if not an illusion that we can ever pull off, so I'd not worry much for now.


It's going to be impossible to tell if AI will replace humans for a very long time. Demand for software is increasing at a rate that will mean new engineers are needed for decades yet. We're not even close to "peak engineer". AI might chill that demand a little, but it won't stop it. Only when demand for new software starts to slow (maybe never?) will it become obvious what the impact of AI on developer jobs actually is.

This is similar to the effect of robots on the automotive industry. Since they were introduced to car manufacturing in the 80s, vast numbers of robots have been installed in every car factory - and yet the number of people employed making cars is still going up. The reason is that the demand for cars is also increasing at a staggering rate. If it levels out (which won't be for a long time, as we're only just starting with electric cars), only then will we actually see the full effect.


Agree, pretty much every software tooling improvement (e.g. compilers, IDEs, higher-level languages, Docker, full-featured frameworks, AWS, Ansible/Terraform/Salt, package management, etc.) has only led to more software engineers rather than fewer.


That could be because of the ever-increasing use of software in many aspects of our day-to-day lives. Initially, computers took up huge amounts of space in labs scattered around the world. Now billions of people have a computer in their hands in the form of smartphones. A couple of decades ago it was standard to dial a number to order a taxi or takeaway food. Now you use software for those use cases, and many, many more. It could be the case that there's an increasing demand for software engineers because more and more software is being built.


There is a lot of fear around programmers losing their jobs due to this or that innovation. With a 35+ year perspective, a few things have remained consistent: software jobs are not decreasing, though their pay is, and new innovations will always command higher salaries for those who adopt them.

Basically, as a programmer, you will always have a job with a salary above average (even though salaries are decreasing), but to get the huge salaries you need to be on the cutting edge (or become a "Director of _____", which is where the lower-performing engineers with the best bragging skills end up).


My worry is this. Today: GPT-3 (human-fed English) -> code -> interpretation/execution. And with time: GPT-3 (human-fed English) -> executable!

Focus will shift to making the "(human-fed English)" part better to create better executables. Am I too optimistic?


Of course, this may not apply to, say, embedded systems, device drivers, etc.



It will be similar to offshoring. Yeah, if you can make a precise spec, or if your programmers are not willing or able to think for themselves, then it fits.

Let me propose quonn's law:

Any software job that cannot be offshored currently cannot be replaced by AI in the future.


It might be true TDD, where your job is to write good tests, make good assumptions, and pick the right AIs to pass the tests and generate the code. You'd pass the test as an argument into a function: the AI.
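A very rough Python sketch of that interface, with a dumb brute-force search over canned candidates standing in for the AI (all names here are made up):

    def synthesize(test_fn):
        # Where a model would go: try candidate implementations until one
        # passes the supplied test, then hand it back.
        candidates = [
            lambda xs: sorted(xs),
            lambda xs: list(reversed(xs)),
            lambda xs: xs,
        ]
        for candidate in candidates:
            try:
                test_fn(candidate)
                return candidate
            except AssertionError:
                continue
        raise RuntimeError("no candidate passed the test")

    def test_sorts_ascending(impl):
        assert impl([3, 1, 2]) == [1, 2, 3]

    sort_impl = synthesize(test_sorts_ascending)
    print(sort_impl([5, 4, 9]))  # [4, 5, 9]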


This post is jumping the GPT-3 gun a bit. The current demos are more proof-of-concept and don't imply that engineers would be replaced regardless.


I didn't want to focus too much on GPT-3, actually I tried to avoid discussing it beyond using it as an introduction to the topic. It's more of a general discussion of why I think AI writing code is a good thing for Software Engineers.

In a way that's kind of the point - the implementation details and the capabilities of GPT-3 or any other model aren't that relevant, because whether AI writing code will make Software Engineers obsolete isn't a question of whether it will work. It's more a question of the extent to which the job of a Software Engineer is to write code per se, which in my view it's not.


I think it will filter out copy-paste programmers, and that's for the good.


Because there is always a market for even better code.



