Hacker News | lkjdsklf's comments

> I think it would typically have taken you longer.

I actually find that highly doubtful.

There are tons of studies and writing about how reading and debugging code is wildly more time-consuming than writing it. That time goes up even more when you're not the one who wrote the code in the first place. It's why we've spent decades working out how to write readable/maintainable code.

So either all this shit about reading/maintaining code being difficult was lies and we've spent decades wasting our time, or AIs can only improve productivity if you stop verifying/debugging the code.

So I find it very unlikely that it would have taken more than a couple hours to just write it the first time.


The same way mediocre men have been elevated for thousands of years.

A combination of being in the right place at the right time and having connections to people with money.


Templates don’t have to be complicated.

Even very basic type substitution is one of the most valuable applications of templates, and it's useful in pretty much all software.

They’re also useful when you can’t use virtual dispatch. Concepts help a lot in making that tolerable.

Sure, they can get stupid complicated and ugly as hell, but you don't have to do that. Even their basic form is very useful.

That said, RAII is probably the most useful thing.


> assuming they do the research and know what they are doing.

This is the assumption that has almost always failed, and it has thus led to the banning of AI code altogether in a lot of projects.


They do not.

At my company, I use them all the time with the fancy models and everything. Preplanning does not solve the problem they're describing.

When Claude is doing a complex task, it will regularly lose track of the rules (in either the .rules stuff or CLAUDE.md) and break conventions.

It follows them most of the time, but not all of the time.


But not all work is done by LLMs at the moment, and we can't be sure that it ever will be, so the question is ridiculous.

Maybe one day it will be, and people can reevaluate their stance then. Until that time, it's entirely reasonable to hold the position that you just don't accept it.

This is especially true with how LLM generated code may affect licensing and other things. There's a lot of unknowns there and it's entirely reasonable to not want to risk your projects license over some contributions.

I use them all the time at work because, rightly or wrongly, my company has decided that's the direction they want to go.

For open source, I'm not going to make that choice for them. If they explicitly allow for LLM generated code, then I'll use it, but if not I'm not going to assume that the project maintainers are willing to deal with the potential issues it creates.

For my own open source projects, I'm not interested in using LLM generated code. I mostly work on open source projects that I enjoy or in a specific area that I want to learn more about. The fact that it's functional software is great, but is only one of many goals of the project. AI generated code runs counter to all the other goals I have.


Basically all of my actual programming work has been done by LLMs since January. My team actually demoed a PoC last week to hook up Codex to our Slack channel to become our first-level on-call, and in the case of a defect (e.g. a PagerDuty alert, or a question that suggests something is broken), go debug, push a fix for review, and suggest any mitigations. Prior to that, I basically pushed for my team to do the same with copy/paste to a prompt so we could iterate on building its debugging skills.

People might still code by hand as a hobby, but I'd be surprised if nearly all professional coding isn't being done by LLMs within the next year or two. It's clear that doing it by hand would mostly be because you enjoy the process. I expect people that are more focused on the output will adopt LLMs for hobby work as well.


Sounds like a company on the verge of creating a mess that will require a rewrite in a year or so. Maybe an LLM can do it.

I suspect this is more true than most people think. Today's bad code will be cleaned up by tomorrow's agents.

The other factor that gets glossed over is that LLMs create a financial incentive to create cleaner code, with tests, because the agent that you pay for will be more efficient when the code is easier to understand, and has clear patterns for extensibility. When I code with LLMs, a big part of it is demonstration, i.e. pseudocoding a pattern/structure, asking the model if it understands, and then having it complete the pattern. I've had a lot of success with this approach.


> llms create a financial incentive to create cleaner code, with tests, because the agent that you pay for will be more efficient when the code is easier to understand, and has clear patterns for extensibility

Right, this is the kind of discussion we're having on my team: suddenly all of the already good engineering practices like good observability, clear tests with high coverage, clean design, etc. act as a massive force multiplier and are that much more important. They're also easier to do if you prioritize it. We should be seeing quality go up. It's trivial to explore the solution space with throwaway PoCs, collect real data to drive your design, do all of those "nice to have" cleanups, etc. The people who assume LLM = slop are participating in a bizarre form of cope. Garbage in, garbage out; quality in, quality out. Just accept that coding per se is not going to be a profession for long. Leverage new tools to learn more, do more, etc. This should be an exciting time for programmers.


> It's clear that doing it by hand would mostly be because you enjoy the process.

This will not happen until companies decide to care about quality again. They don't want employees spending time on anything "extra" unless it also makes them significantly more money.


> It's clear that doing it by hand would mostly be because you enjoy the process.

This is gaslighting. We're only a few years into coding agents being a thing. Look at the history of human innovation and tell me that I'm unreasonable for suspecting that there is an iceberg worth of unmitigated externalities lurking beneath the surface that haven't yet been brought to light. In time they might. Like PFAS, ozone holes, global warming.




Ultimately you always have to trust people to be judicious, but that's why it doesn't make any changes itself. It only suggests mitigations (and my team knows what actions are safe, has context for recent changes, etc.). It's not entirely a black box, though; e.g. I've prompted it to collect and provide a concrete evidence chain (relevant commands + output, code paths) along with competing hypotheses as it works. Same as humans should do as they debug (e.g. don't just say "it's this"; paste your evidence as you go and be precise about what you know vs. what you believe).

That sounds like the perfect recipe for turning a small problem into a much larger one. "On call" is where you want your quality people, not your silicon slop generator.

> There already are LLMs with open weights that are better at code than state of the art closed source models from a year ago.

A year ago, the "state of the art" models were total turds. So this isn't exactly good news

Not to mention the performance of local LLMs makes them utterly unusable unless you have tens of thousands of dollars to invest in hardware (and that was before the recent price spike). If you're running on commodity hardware, they're just awful to use.


I have similar types of bindings. I just found a keyboard that can use ZMK. There's quite a few out there.

ZMK (or its free-software cousin QMK) is super flexible, and you can create lots of custom behaviors for keys (tap/hold behaviors, double press, layering, etc.). It takes some time and effort to learn how to set it all up. Some of the more complicated behaviors require using its DSL for mapping the keys instead of the GUI editor. Considering the ridiculous number of hours I spend at my computer using a keyboard, I felt it was worth the investment in learning.
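For a flavor of what that DSL looks like (a hypothetical three-key snippet, not a complete keymap), ZMK keymaps are devicetree fragments where behaviors like `&kp` (key press), `&mt` (mod-tap, a hold/tap behavior), and `&lt` (layer-tap) are bound per key:

```dts
/ {
    keymap {
        compatible = "zmk,keymap";

        default_layer {
            bindings = <
                /* tap A      hold Shift / tap B    hold layer 1 / tap Esc */
                   &kp A      &mt LSHFT B           &lt 1 ESC
            >;
        };
    };
};
```

The GUI editors cover plain key assignments, but combos and custom hold/tap tuning generally mean editing a file like this by hand.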


It’s still some time in the 2000s and will be for the next 974 years

Not according to common usage: https://en.wikipedia.org/wiki/2000s

tmux/screen is literally less work to use than this thing.

You need to learn to type less than a dozen total characters including the command.

Not to mention a lot of terminals automatically integrate with tmux so you don’t have to do anything but open the terminal.

Sure, different tools for different people. And if you want to use a new fangled triangular wheel they just invented, no one’s going to stop you

It’s still a triangular wheel at the end of the day

