
AI writing code will make software engineers more valuable - davnicwil
https://davnicwil.com/ai-writing-code-makes-software-engineers-more-valuable/
======
chansiky
I agree with the sentiment that they only augment our capabilities, but then
we end up with another problem, which is that the people who write the
software are not fully aware of how it all works. Relevant to my argument is
Jonathan Blow's `Preventing the Collapse of Civilization` talk, in which he
discusses the disappearance of knowledge as generations pass:
[https://www.youtube.com/watch?v=pW-SOdj4Kkk&t=3407s](https://www.youtube.com/watch?v=pW-SOdj4Kkk&t=3407s)

~~~
MattGaiser
> the people who write the software are not fully aware of how it all works.

Is this much different from now? I have no idea how most of the libraries I
use are implemented.

~~~
mrkeen
You don't know how other people's libraries work, and that's fine.

I think in this case the argument is that the author of the library also
doesn't know how it works. Which means they can't fix it.

~~~
chosenbreed37
In addition to this, if the source code is available you could potentially
take a peek and generally understand what's going on. But I can imagine that
we could have a generation of developers who know little about the nuts and
bolts that underpin how their software works. Perhaps this is already the case
in certain domains. E.g. you could be working in a Jupyter notebook and be
effective without being aware of what's happening behind the scenes. I think
this is qualitatively different, as in this example you could be working at
such a high level of abstraction that the nuts and bolts are not something
you'd even be aware of. Whereas if you're writing a Java program and you bring
in some third-party libraries, you could potentially look up that library's
source. But more importantly, you're still relatively close to the metal.

------
jboggan
"since code is just text" . . .

No. It really isn't. I haven't gotten to play around with GPT-3 yet and I am
sure it is very advanced, but code is extremely fragile in a way human
language is not. I only say this as someone who started a company trying to
use AI to generate code and banged my head against the wall until credit
limits and negative bank balances forced me to quit.

I estimated we could write 7 PhD theses if we solved the technical hurdles
that would get us to code good enough for a product someone would pay for.

~~~
emiliobumachar
"code is extremely fragile in a way human language is not"

Very well put. Change _a single character_ in a working complex program and it
may start doing something completely different, or, much worse, subtly
different.

~~~
onion2k
Isn't that just because compilers aren't written to cope with variations,
though? That rigour is necessary because _humans_ can't deal with ambiguity. A
compiler written using AI could happily understand what 'int', 'itn', 'it',
'integer', 'IntyMcintyFace', and every conceivable variation mean, and still
compile them all to the same machine code. Humans don't want that in a
language because it makes it hard to use. AI doesn't care.
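The core of this idea can be sketched without any AI at all, using plain fuzzy
string matching. A toy Python sketch of a "forgiving" tokenizer front-end (the
keyword list and similarity cutoff are arbitrary illustrations, not any real
compiler's behaviour):

```python
import difflib

# Canonical keywords a hypothetical "forgiving" compiler front-end knows about.
KEYWORDS = ["int", "integer", "float", "return", "while"]

def normalize_token(token, keywords=KEYWORDS, cutoff=0.6):
    """Map a possibly-misspelled token to its closest known keyword.

    Returns the token unchanged if nothing is similar enough.
    """
    matches = difflib.get_close_matches(token.lower(), keywords, n=1, cutoff=cutoff)
    return matches[0] if matches else token

print(normalize_token("itn"))    # -> int
print(normalize_token("it"))     # -> int
print(normalize_token("xyzzy"))  # -> xyzzy (no close match, left alone)
```

Of course, a real language would then face exactly the objection below: silently
guessing what the programmer meant trades one kind of fragility for another.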

~~~
the_af
I disagree with this. I think humans excel at ambiguity (they also excel at
getting the intended meaning wrong, of course). Computers on the other hand
take instructions literally. You could train them to probabilistically guess
what the misspelling means, but whether they'll be better than a human remains
to be seen (I personally doubt it but this can be tested).

What irks me about the assertion that "code is text" is that it's false. Code
has a textual representation, which some people (not me!) argue is not even
the best one; what's clear is that text is just a representation, not the only
one and it's not _directly_ code. To have an AI learn to "type" code as a
string of words and characters seems obtuse if the goal is to have AI
generated software. AI could operate at a different level, why bother with
typing characters? It seems to me the wrong level of abstraction, akin to
designing a robot hand and driving it with an AI to physically use a keyboard
as a way to write code.
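The point that text is just one representation can be illustrated with Python's
`ast` module: the same program can be parsed from a string or built directly as
a syntax tree, without ever "typing" a character of source:

```python
import ast

source = "x = 1 + 2"

# Parse the textual representation into a syntax tree -- one of several
# possible representations of the same program.
tree = ast.parse(source)

# Build the equivalent tree directly, never touching a string of code.
built = ast.Module(
    body=[ast.Assign(
        targets=[ast.Name(id="x", ctx=ast.Store())],
        value=ast.BinOp(left=ast.Constant(1), op=ast.Add(), right=ast.Constant(2)),
    )],
    type_ignores=[],
)
ast.fix_missing_locations(built)

# Both representations compile to the same behaviour.
ns = {}
exec(compile(built, "<built>", "exec"), ns)
print(ns["x"])  # 3
```

An AI generating code could, in principle, emit something like the second form
directly, which is roughly what "operating at a different level" would mean.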

~~~
carlmr
>AI could operate at a different level, why bother with typing characters?

Because if it does something wrong, you have to be able to find out what it
is.

~~~
virgilp
Actually I think this illustrates what is wrong in the idea of AI-generated
code.

If you feel uneasy about AI-generated binary code ("I want to be able to debug
it if something goes wrong!") you should feel equally uneasy about
AI-generated high-level language. The chances that it'll be broken in subtle
ways are likely to be very similar, and I don't see a good reason to believe
that debugging AI-generated Haskell is going to be much easier than debugging
AI-generated executables.

------
phillipcarter
I can think of two immediate uses for the stable version of whatever comes
after GPT-3.

A lot of code I write is actually solving the problem, and if I can have some
tools to make my code better then I will definitely take them. Having a tool
that can suggest certain patterns that are statistically likely to be the
right approach would be great. Or maybe I'm using a tricky API that, if
misused, is a security vulnerability, and a tool can catch that for me because
it was trained on many other uses of that API. Or maybe there's a known
database of security vulnerabilities with writeups it could be trained on.
There's no replacement for domain knowledge, but not everyone is a domain
expert, and it's better to have tools for this kind of stuff than not. I'd
certainly be happier to have something that _usually_ suggests something
correct than to have no help at all.

A lot of the code I write is also not actually solving a problem, but writing
boilerplate nonsense because in a software system that's complicated enough, I
can't always "just" solve the problem. Maybe that means generating some data
types a certain way because the library I'm using expects them to be there. Or
sometimes there's a common (but verbose) pattern for using an API where I can
have a bespoke template generated within the context of the other code, then
put markers in my editor where it thinks I should double-check that the code
is right. There's probably all kinds of other "smart templating/codegen"
applications I haven't thought of.

I already have tools to help me out with this in the form of autocompletion,
templates, static analysis tools, and refactoring tools. But they operate in a
different space. I'm excited for a more probabilistic set of new tools to help
me do things better and automate some of the boilerplate I need to write
today. They may not always be correct, but tools could be built around them to
suggest to me when I need to double-check something before simply committing
the code. All of this would make me more productive and probably allow for
more developers to work in specific domains where they can't today.

------
disambiguation
> The hard part is solving the problem appropriately. Implementing that
> solution in code once you have it is comparatively easy.

I don't think this is true, otherwise we would see a lot more business models
where development is largely outsourced for a cheap price and in-house work is
mainly solution architects producing technical documents.

the theory and the code are two ends of the same problem, which is why it's
hard to separate them. having a developer that can do it end to end (translate
a problem into code) is much better, even if at a premium.

as for AI and code generation, this just seems like the natural progression of
high level languages. modern languages already do a lot of work to translate
our simple human code into complex machine code.

~~~
jusssi
> I don't think this is true, otherwise we would see a lot more business
> models where development is largely outsourced for a cheap price and in-
> house work is mainly solution architects producing technical documents.

But we did see a lot of that. It was exactly what all that outsourcing to
India was, before it got unfashionable again.

The problem with the model wasn't that it would be too hard to produce code
for the design, it was that you'd need a finished design to ship to the
sweatshop. That rules out any sort of iterative development models, so you're
stuck with a slow waterfall.

------
pmiller2
I can’t wait to be working on AI-created bugs tracked by AI-created JIRA
tickets. /s

------
grugagag
My own understanding of the difference between AI and AGI is that the former
is just a bag of tools we build to solve problems efficiently and the latter
would be the unifying tool and that perhaps even though we don't understand it
there's hope that some consciousness would arise from the complexity. So until
we get on with AGI human software developers would be very much needed to do
the plumbing and maintenance of these interconnected AI systems. Once/if AGI
comes to fruition it should be able to get a meta understanding beyond the sum
of all its parts and would be able to grow and maintain itself without
supervision from us (maybe not at the beginning, but eventually). It would
still need engineers to maintain the physical aspect of it, but that would no
longer be programming as we know it now. And AGI is far, far away, if not an
illusion we'll never pull off, so I'd not worry much for now.

------
onion2k
It's going to be impossible to tell whether AI will replace humans for a very
long time. Demand for software is increasing at a rate that means new
engineers will be needed for decades yet. We're not even close to "peak
engineer". AI might chill that demand a little but it won't stop it. Only when
demand for new software starts to slow (maybe never?) will it become obvious
what the impact of AI on developer jobs actually is.

This is similar to the effect of robots on the automotive industry. Since they
were introduced to car manufacturing in the 80s there have been vast numbers
of robots implemented in every car factory - and yet the number of people
employed making cars is still going up. The reason is that the demand for
cars is also increasing at a staggering rate. If it levels out (which won't be
for a long time as we're only just starting with electric cars) only then will
we actually see the full effect.

------
randtrain34
Agree, pretty much every software tooling improvement (eg. compilers, IDEs,
higher level languages, docker, full-featured frameworks, AWS,
Ansible/Terraform/Salt, package management, etc.) has only led to more
software engineers instead of less.

~~~
chosenbreed37
That could be because of the ever increasing use of software in many aspects
of our day-to-day lives. Initially computers took up huge amounts of space in
labs scattered around the world. Now billions of people have a computer in
their hands in the form of smartphones. A couple of decades ago it was
standard to dial a number to order a taxi or takeaway food. Now you use
software for those use cases. And many, many more. It could be the case that
there's an increasing demand for software engineers because more and more
software is being built.

------
staycoolboy
There is a lot of fear around programmers losing their jobs due to this or
that innovation. With a 35+ year perspective, a few things have remained
consistent: software jobs are not decreasing, though their pay is; and new
innovations will always command higher salaries for those who adopt them.

Basically, as a programmer, you will always have a job with a salary above
average (even though salaries are decreasing), but to get the huge salaries
you need to be on the cutting edge (or become a "Director of _____", which is
where the lower-performing engineers with the best bragging skills end up).

------
shekade
My worry is this. Today: GPT-3 (human-fed English) -> code -> interpretation /
execution. And with time: GPT-3 (human-fed English) -> executable!

Focus will shift to making the "(human-fed English)" better to create better
executables. Am I too optimistic?

~~~
shekade
of course, this may not apply to, say, embedded systems / device drivers etc.

------
agustif
[https://ai.facebook.com/blog/deep-learning-to-translate-between-programming-languages](https://ai.facebook.com/blog/deep-learning-to-translate-between-programming-languages) related on homepage now

------
quonn
It will be similar to offshoring. Yeah, if you can make a precise spec, or if
your programmers are not willing or capable of thinking for themselves, then
it fits.

Let me propose quonn's law:

Any software job that cannot be offshored currently cannot be replaced by AI
in the future.

------
deltron3030
It might be true TDD, where your job is to write good tests or make good
assumptions, and pick the right AIs to pass the tests and generate the code.
You'd pass the test as an argument into a function, the AI.

------
minimaxir
This post is jumping the GPT-3 gun a bit. The current demos are more
proof-of-concept and don't imply that engineers will be replaced.

~~~
davnicwil
I didn't want to focus too much on GPT-3, actually I tried to avoid discussing
it beyond using it as an introduction to the topic. It's more of a general
discussion of why I think AI writing code is a good thing for Software
Engineers.

In a way that's kind of the point - the implementation details and the
capabilities of GPT-3 or any other model aren't that relevant because whether
AI writing code will make Software Engineers obsolete isn't a question of
whether it will _work_; it's more a question of the extent to which the job
of a Software Engineer is to write code per se, which in my view it's not.

------
master_yoda_1
I think it will filter out copy-paste programmers, and that's a good thing.

------
m3kw9
Because there is always a market for even better code.

