Up to 90% of my code is now generated by AI (techsistence.com)
28 points by gregrog 10 months ago | 44 comments



I don't understand the influx of these kinds of posts. I use ChatGPT and Claude daily, but I wouldn't say 90, 80, or even 50% of my code comes from them. Not because I don't want it to, but because it just can't do it.

LLMs are perfect for those at a beginner level with some language, for rather simple code that is not very business-specific, for solving or implementing tidbits that are isolated from the larger surface area of a product, for writing utility functions that do something well and simply defined, or for the boilerplate of almost anything.

However, most of the time spent in programming is never spent on that stuff. It might constitute the largest percentage of the lines of code written, but 90% of the time is spent on the other 10% of lines.

Give it a CSS problem like centering an object within a somewhat complex hierarchy, and it will go round in circles suggesting almost every solution that can be tried, only to loop back with the same exact confidence. I'd say that, in certain cases, LLMs can be a time drain if you don't hit the brakes.


Add CoPilot to the mix, and your percentage will climb higher.


AI is like XML, which is like violence?


[flagged]


> AI hater identified

OMG! PLEASE DON'T SHOOT! :hands-up:

You suggested adding more AI in order to make it good. I made a joke on top of that old XML joke.

https://chrisdone.com/posts/reasoning-violently/


The analogy does not apply to the use of CoPilot.


[flagged]


The "joke" was of context and it does not belong. In fact, you attempted to express your hate AI for no valid reason. Jokes are not excuses to qualify insults which you do repeatedly. A joke does not give you a free license to post hateful comments. Also, do not expect me or anyone to click on your random link.


I don't hate AI. In fact, I use it and I'm very curious about it. I am skeptical of all the promises but I see some utility.

> Jokes are not excuses to qualify insults which you do repeatedly

Excuse me? Can you prove this?

> to post hateful comments

There is absolutely no hate in my comments.

> do not expect me or anyone to click on your random link.

I don't expect anything. You're a random person on the Internet. You do what you want. Click if you want; it will give you a little more context. Don't click if you don't want to.


GitHub CoPilot just works with you instead of getting in the way. As for achieving 90%, that could happen only in an extremely verbose language with nausea-inducing amounts of boilerplate code, e.g. enterprise Java, but CoPilot will chew through it.

Using CoPilot actually makes one a better programmer because you learn to always use clear variable names, without which CoPilot cannot do its job. In Python, in-line type annotations also help when defining a variable, again allowing CoPilot to provide better completions.
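
For illustration, a minimal Python sketch (made-up names, purely hypothetical) of the kind of annotated code that tends to get good completions:

    from decimal import Decimal

    def apply_discount(base_price: Decimal, discount_rate: Decimal) -> Decimal:
        """Return the price after applying a fractional discount rate."""
        # Descriptive names plus type annotations give CoPilot enough
        # context to suggest a plausible body from the signature alone.
        return base_price * (Decimal(1) - discount_rate)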

If you think about it, Microsoft is losing substantial amounts of money providing CoPilot. It's quite the public service.

I looked through an archive.org mirror of the link.


I use CoPilot and agree that 50% is a very good day, and also only for fairly simple code.


I do think that achieving 90% with CoPilot could only ever happen in an extremely verbose language with nausea-inducing amounts of boilerplate and repetitive code, e.g. enterprise Java with lots of non-DRY code.


I have a lot of software engineering experience, and I'm working on something for which I decided to use Rails, despite zero prior experience in either Ruby or Rails. I've been using Claude for help.

Here's my personal experience: it's been great at helping me understand things and convert stuff, which helps both with learning Rails and with making progress that would otherwise have been hard. It did much better at explaining than the Rails documentation, which I found lacking.

For example, I gave it large Go structs, and it produced rails generate commands for the schema and XML serialization code. There was a little back and forth regarding foreign key relationships, but "we" were able to figure it out.

I was even able to ask for its opinion on some table design; I asked it to play the role of an experienced DBA, and it did great.

In short, it's great if you know what you want to do at a granular level, especially for new stuff. But if I didn't know what I know, I don't think it would have worked.

Think of it like a calculator: it can calculate what I tell it to faster than I can, but that's it. And that in itself is huge.


That's what I've been saying: AI is just a tool that can't yet replace programmers—maybe one day.

I use it for the boring work like generating comments, basic algorithms, API endpoints, and naming stuff. Even with the need to double-check the output, it still takes a load off my brain.


This entire article lacks substance. It just feels like I'm reading a lot of vague nonsense.

> I have a habit of reaching for AI as the primary source of information, and I'm using Perplexity, Google, or StackOverflow less and less frequently.

In my experience, LLMs simplify and overgeneralize too much, lacking much of the context and insight found on websites like Stack Overflow. I've been doing a lot of database work recently, something I'm not an expert in, and I've learned a lot by actually reading the source, not just blindly trusting the output of the AI. If I trusted AI as much as the author seems to, my database code would be much worse.

I look forward to the day when AI actually is good enough to generate 90% of my code. But as of today, it's just not.


> This entire article lacks substance. It just feels like I'm reading a lot of vague nonsense.

If the guy generates 90% of his code with AI, do you think he's doing anything else without it? His writing is probably AI slop too. His "hero image" certainly is.


I'm interested but skeptical. I have not dived as deeply as you. Mostly I'm using ChatGPT. There is just no way it could generate 90% of my code. It is great at generating boilerplate for simple cases, and I find it most useful for getting started on things I know nothing about. For example, I was working with SVG recently, which I knew nothing about, and I have to say ChatGPT was helpful, but not, in the long run, useful. Too many errors, and its ability to refine answers is terrible: too many attempted corrections are met with cheerful fixes that have the same bugs.

Is anyone else actually getting good results for code generation using LLMs?


Are you using the paid GPT-4? It is a world of difference over the free tier.

Like the author, I am now writing 80%+ of my code with ChatGPT. Every now and then something pops up that it doesn't quite understand and I have to pick up my shovel and head back into the mines, but mostly, with good prompting in ChatGPT and preceding everything I write in my IDE with a comment explaining what I'm doing, Copilot can do the rest.
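
For example, a minimal sketch of that pattern (hypothetical function, assuming Python): the leading comment plus the signature is usually all Copilot needs to draft the body.

    from datetime import datetime, timezone

    # Parse an ISO 8601 timestamp string and return it converted to UTC.
    def to_utc(timestamp: str) -> datetime:
        # From the comment and the signature alone, Copilot will
        # typically suggest a body along these lines.
        return datetime.fromisoformat(timestamp).astimezone(timezone.utc)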

It's a great tool in the way that Google search once was, and programming IDEs are. But it takes some time to feel it out and see where it's useful and where it isn't, similar to learning how to search Google well or feeling out the opaque functionality of an IDE.

At an AWS event last week there was a quote: 'Jobs aren't going to be replaced by AI. But people doing jobs without AI will be replaced by people with AI.'


Are you at least testing the code? Or are you delegating testing to AI as well?


I describe the test in natural language and AI writes the boilerplate. A typical workflow for adding a simple new backend endpoint to my company's web application will be:

1) Send a ChatGPT request with the existing router code and describe in natural language the name of the new endpoint, what I need it to do, and any functions that I want it to use.
2) Read through it and check that the logic is OK.
3) Send a ChatGPT request with the existing router tests and describe in natural language what endpoint I want to hit and what I want the test to verify.
4) Check that the response makes sense and run the test.
5) For any errors, either debug on my own or iterate back and forth again with ChatGPT.
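
To make that concrete, here is a rough sketch of the shape of output steps 1 and 3 produce. Flask and the /health endpoint are stand-ins here, not my company's actual stack:

    from flask import Flask, jsonify

    app = Flask(__name__)

    # Step 1: a simple endpoint the model might draft from the prompt.
    @app.get("/health")
    def health():
        return jsonify(status="ok"), 200

    # Step 3: a test the model might draft against that endpoint.
    def test_health():
        client = app.test_client()
        response = client.get("/health")
        assert response.status_code == 200
        assert response.get_json() == {"status": "ok"}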

We are still in the early stages of what it means to have an intelligent natural language system at our fingertips. For me, it means no longer really needing to bother with repetitive boilerplate code and test harnessing. This is a huge speedup for my professional workflow and a significantly more enjoyable coding experience.


I have not had good results and stopped trying. I have had some usable results, but on careful inspection there were subtle problems or needless convolutions that implied a different solution was being used than was actually the case: the sort of thing that works but is prone to misinterpretation by the next person working in the code.

Based on this, I'm very against using it for things the user doesn't have significant knowledge of. Some coworkers seem to be having better success but I definitely get the sense they are reading and editing the results carefully. I don't find it much, if any, of a productivity gain, so I stopped trying for now.


> Some coworkers seem to be having better success but I definitely get the sense they are reading and editing the results carefully.

Yes, you need to consider the AI as if it were a junior programmer that sometimes makes mistakes. I use it for boring work that can be quickly checked. For example, the other day I asked for a 'give me next workday' algorithm based on the code structure I had, and it worked fine.
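
For reference, a 'give me next workday' helper is about this much code (a sketch in Python that ignores holidays, which didn't matter for my case):

    from datetime import date, timedelta

    def next_workday(day: date) -> date:
        """Return the next Monday-to-Friday date after the given day."""
        candidate = day + timedelta(days=1)
        # weekday() is 0 for Monday through 6 for Sunday,
        # so skip Saturday (5) and Sunday (6).
        while candidate.weekday() >= 5:
            candidate += timedelta(days=1)
        return candidate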

It's just one more tool in the toolbox.


If it's that straightforward I'd rather just write it. Like I said it hasn't been an overall time saver with the extra scrutiny I need to put it through. I'll try again in six months.

Also, idk, kind of a tangent, but you brought it up. I don't feel like my junior devs make easily found algorithmic mistakes like that. They're more likely to misjudge the scope of the problem or not be aware of a technical consideration or known solution. For that kind of work I'd rather... mentor a junior dev through it so they have the experience.


> Mostly I'm using ChatGPT. There is just no way it could generate 90% of my code (...) Is anyone else actually getting good results for code generation using LLMs?

Try to switch perspective from "write component ‘X’ and paste code without reading it" to "describe the problem, break it into smaller steps, generate code for each step, and iteratively work towards the final solution".

In the first case, LLMs can't do 90% of the work alone. In the latter case, it's different.

You could ask, "Okay, so that's still a lot of work generating code with LLMs," and you'd be right. But it's like having another programmer sitting next to you, helping tackle problems or time-consuming tasks, giving you more space to think about the actual problem.

So, “Up to 90% of my code is now generated by AI” doesn’t mean that only 10% of the entire software development process is left for humans. Writing code is just (obviously) one aspect of software development.


The second you get into a topic it doesn’t know, ChatGPT starts flailing around, unfortunately. I’ve had this happen in virtually anything of substance. Even if I paste in a whole tutorial and say “do this but in X language or with modification Y”.

Copilot on the other hand works great and is a huge improvement in productivity, probably because it’s only writing short snippets while I’m doing the algorithmic thinking. It reduces the brain -> keyboard lag substantially.


Claude 3.5 Sonnet is pretty good at working with small, isolated pieces of code (think a single file). But it's not fast. I did manage to get it to make a full feature by itself, but it took an entire evening of copy-pasting code and error messages. The final code quality was pretty good.

Most of the time I just use it for getting started on features, small functions, and debugging.


CoPilot gets the percentage higher.


I'd like to see some of that code.


I don't think AI could generate 90% or even 40% of my code, but by god does it generate me a lot of boilerplate and comments. My work just gave us all Copilot, and it's really good at creating useful comments (even for pydocs), and at writing out simple but mildly tedious things like the outline of a loop, simple functions, etc. No real risk from either, since they're all short enough for me to sanity-check as the AI is writing them, but a very, very useful little time saver IMO.


> but by god does it generate me a lot of boilerplate and comments.

Why would you have it write comments? The core of most useful comments is understanding that's not directly reflected in the code (hence the comment), and things that capture your understanding of what you're doing.

Boilerplate comments are noise.


My fault for poor wording. What I mean is that it understands the code it's commenting on, writes useful comments that I would normally write myself, and can also write simple boilerplate code for me.

So for example if I start typing 'while' it will let me tab complete out the whole skeleton of that loop. For simpler things it even seems to be able to guess my intentions and actually populate the loop and its conditions.
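
Something like this, say (a hypothetical Python example, not my actual code): type the `while` line and Copilot proposes the condition and body:

    import queue

    def drain(work_queue: queue.Queue) -> list:
        items = []
        # Typing `while` here is often enough for Copilot to
        # propose the loop condition and body that follow.
        while not work_queue.empty():
            items.append(work_queue.get())
        return items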


> What I mean is that it understands the code it's commenting on

As in the business case for the code existing? Like linking it to a specific requirement?


In that very specific scenario, no. Fortunately there are many more use cases for comments other than just linking to design docs and jira tickets.


I think about 90% of my code is “generated”.

Thing is, that's only the actual, written code. There's still a bunch of hard work that goes into figuring out what to generate and verifying that it's correct.


Exactly! This is what I wrote in this article.

Using LLMs to support code writing gives more space to deal with everything else, and that's the point.


More than 90% of my C++ code is generated by a code generator from declarative specs. No need for AI. And I trust the output.
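
The general pattern is something like this (a simplified Python sketch with made-up spec fields, not the actual generator):

    # Hypothetical declarative spec: field name -> C++ type.
    USER_SPEC = {"id": "int", "name": "std::string"}

    def generate_struct(name: str, fields: dict) -> str:
        """Emit a C++ struct definition from a declarative field spec."""
        lines = [f"struct {name} {{"]
        for field_name, cpp_type in fields.items():
            lines.append(f"    {cpp_type} {field_name};")
        lines.append("};")
        return "\n".join(lines)

    print(generate_struct("User", USER_SPEC))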


Up to... "Up to." If you are writing a bash wrapper to move some files or something, yeah... great.


Do we have any court cases or any other such thing out there that's decided whether or not it's safe for developers to trust general / common output coming from LLMs? I would probably be more efficient using various AI systems to write my code; but I'm afraid of a lawsuit around licensing.

Microsoft and Jetbrains are both introducing this tooling into their IDEs with Copilot and AI Assistant, but I still worry (I'm a naturally over-cautious person).

edit: to be clear, I'll ask it questions, just like everyone and their dog; but any sort of direct line/code completion, or "write me a method in Java that will do X, Y and Z" followed by copy-pasting that 10+ line thing directly, is not something I do.


I've personally found it to be akin to an exceptionally technical junior developer fresh out of college. It can generate some really good niche code, but it can also generate the exact same code that's right above it.

And so you need to check every single line it creates, even when doing the most mundane tasks. Useful, and probably the source of 90% of my work in the "rough draft" stage, but I also have to read and grok all of that 90%, and fix the 80% that's just barely (or very blatantly) not right for the final draft.


The point is basically

> just do your own thing, but explore paths you've never walked before.

Yeah, that's how people grow, no joke. It's kind of independent of the rest of the article shilling LLMs.


Up to 90% of my code is generated by Jetbrains software.


[flagged]


Depends what you are coding


If 90% of it can be (correctly) ChatGPT'd, it's not gonna be that complicated.


Or it is complicated, but the programmer ought to be using a library for it instead of bot-mediated copy-paste.


Damn that’s a horrifying vision. Code by a bot that has no consistency in library preferences or the absence thereof. Peek under the hood and it’s made dozens and dozens of helper libraries that all do the same thing and get used one time. All with enormous volumes of mostly working tests.


"This way we don't have the process overhead and security problems of external dependencies." /s



