
Wow, this is a new low I did not expect to see. "Just get high and you'll be able to code with an LLM." Preceded by, "I know it's terrible."

I'm wholly unwilling to relinquish my health and my morals to "AI" so I can "ship faster." What a pathetic existence that would be.


I appreciate your honesty. For what it's worth, this is not a commercial endeavor; it's motivated entirely by scratching my own personal itches. I'm not being paid to do this.

One of the best programmers I know personally is constantly under the influence of marijuana. As "immature" as it may sound, she's still extremely aware of what she's doing and is able to work in an environment I would give up on after two weeks. The kind of environment that denies one day of PTO for your birthday because of a deadline (hint: every week is a deadline).

I do not smoke myself, but it made me realize how little I know regarding THC and CBD.


> I do not smoke myself, but it made me realize how little I know regarding THC and CBD

With long-term use, the psychedelic component of THC's effects diminishes. At some point, only a mild depressant effect remains, somewhat similar to chamomile. It does have some effect on intelligence and short-term memory, but if the alternative is being too stressed to think at all, it might be better to just smoke.

Obviously, if possible, psychotherapy or a prescription from a psychiatrist (or better yet, a change of environment) would be better (with a prescription, it depends on the drug, of course), but THC is not that bad an alternative where it's legal.


Natural result when incentives are on "shipping" and not "quality". People find ways to ship stuff faster. And perhaps drown their apprehensions about quality.

Welcome to Taylorism. Not just for assembly line workers anymore.

> wait six months.

I mourn having to repeatedly hear this never-quite-true promise that an amazing future of perfect code from agentic whatevers will come to fruition, and it's still just six months away. "Oh yes, we know we said it was coming six, twelve, and eighteen months ago, but this time we pinky swear it's just six months away!"

I remember when I first got access to the internet. It was revolutionary. I wanted to be online all the time, playing games, chatting with friends, and discovering new things. It shaped my desire to study computer science and learn to develop software! I could see and experience the value of the internet immediately. Its utility was never "six months away," and I didn't have to be compelled to use it—I was eager to use it of my own volition as often as possible.

LLM coding doesn't feel revolutionary or exciting like this. It's a mandate from the top. It's my know-nothing boss telling me to "find ways to use AI so we can move faster." It's my boss's know-nothing boss conducting Culture Amp surveys about AI usage, but ignoring the feedback that 95% of Copilot's PR comments are useless noise: "The name of this unit test could be improved." It's waiting for code to be slopped onto my screen, so I can go over it with a fine-toothed comb and find all the bugs—and there are always bugs.

Here's what I hope is six months away: The death of AI hype.


This feels right when you're looking forward. The perfect AI bot is definitely not 6 months away. It'll take a lot longer than that to get something that doesn't get things wrong a lot of the time. That's not especially interesting or challenging, though. It's obvious.

What's much more interesting is looking back 6, 12, 18, or 24 months. 6 months ago was ChatGPT 5, 12 months ago was GPT 4.5, 18 months ago was 4o, and 24 months ago ChatGPT 3.5 was released (the first one). If you've been following closely you'll have seen incredible changes between each of them. Not to get to perfect, because that's not really a reasonable goal, but definite big leaps forward each time. A couple of years ago one-shotting a basic tic tac toe wasn't really possible. Now though, you can one-shot a fairly complex web app. It won't be perfect, or even good by a lot of measures compared to human written software, but it will work.

I think the comparison to the internet is a good one. I wrote my first website in 1997, and saw the rapid iteration of websites and browsers back then. It felt amazing, and fast. AI feels the same to me. But given the fact that browsers still aren't good in a lot of ways I think it's fair to say AI will take a similarly long time. That doesn't mean the innovations along the way aren't freaking cool though.


ChatGPT 3.5 was almost 40 months ago, not 24. GPT 4.5 was supposed to be 5 but was not noticeably better than 4o. GPT 5 was a flop. Remember the hype around Gemini 3? What happened to that? Go back and read the blog posts from November when Opus 4.5 came out; even the biggest boosters weren't hyping it up as much as they are now.

It's pretty obvious the pace of change is slowing, and there isn't a lot of evidence that shipping a better harness, and post-training on using said harness, is going to get us to the magical place all these CEOs have promised, where all SWE is automated.


Wait, you're completely skipping the emergence of reasoning models, though? 4.5 was slower and moderately better than 4o, o3 was dramatically stronger than 4o, and GPT 5 was basically a light iteration on that.

What's happening now is training models for long-running tasks that use tools, taking hours at a time. The latest models like 4.6 and 5.3 are starting to make good on this. If you're not using models that are wired into tools and allowed to iterate for a while, then you're not getting to see the current frontier of abilities.

(E.g., if you're just using models to do general knowledge Q&A, then sure, there's only so much better you can get at that, and models tapered off there long ago. But the vision is to use agents to perform a substantial fraction of white-collar work, there are well-defined research programmes to get there, and there is steady progress.)


> Wait, you're completely skipping the emergence of reasoning models, though?

o1 was something like 16-18 months ago. o3 was kinda better, and GPT 5 was considered a flop because it was basically just o3 again.

I’ve used all the latest models in tools like Claude Code and Codex, and I guess I’m just not seeing the improvement? I’m not even working on anything particularly technically complex, but I still have to constantly babysit these things.

Where are the long-running tasks? Cursor’s browser that didn’t even compile? Claude’s C compiler that had gcc as an oracle and still performs worse than gcc without any optimizations? Yeah I’m completely unimpressed at this point given the promises these people have been making for years now. I’m not surprised that given enough constraints they can kinda sorta dump out some code that resembles something else in their training data.


Fair enough, I guess I'm misremembering the timeline, but saying "It's taken 3 years, not 2!" doesn't really change the point I'm making very much. The road from what ChatGPT 3.5 could do to what Codex 5.3 can do represents an amazing pace of change.

I am not claiming it's perfect, or even particularly good at some tasks (pelicans on bicycles for example), but anyone claiming it isn't a mind-blowing achievement in a staggeringly short time is just kidding themselves. It is.


Yeah, but humans still had to work to create those websites; the internet increased jobs rather than replacing them (replacement is what's happening now). This will devalue all labor that has anything to do with I/O on computers, if not outright replace a lot of it. Who cares if it can't write perfect code; the owners of the software companies never cared about good code, they care about making money. They make plenty of money off slop, and they'll make even more if they don't have to have humans create the slop.

The job market will get flooded with the unemployed (it already is), with fewer jobs to replace the ones that were automated, and those remaining jobs will get reduced to minimum wage whenever and wherever possible. 25% of new college grads cannot find employment. Soon young people will be so poor that they'll beg to fight in a war. Give it 5-10 years.

This isn't a hard future to game out; it's not pretty if we maintain this fast pace of progress in ML that minimally requires humans. Notice how the ruling class has increased salaries for certain types of ML engineers; they know what's at stake. These businessmen make decisions based on expected value calculated from complex models; they aren't giving billion-dollar pay packages to engineers because it's trendy. We should use our own mental models to predict where this is going, and prevent it from happening however possible.

The word "Luddite" continues to be applied with contempt to anyone with doubts about technology, especially the nuclear kind. Luddites today are no longer faced with human factory owners and vulnerable machines. As well-known President and unintentional Luddite D. D. Eisenhower prophesied when he left office, there is now a permanent power establishment of admirals, generals and corporate CEO's, up against whom us average poor bastards are completely outclassed, although Ike didn't put it quite that way. We are all supposed to keep tranquil and allow it to go on, even though, because of the data revolution, it becomes every day less possible to fool any of the people any of the time. If our world survives, the next great challenge to watch out for will come - you heard it here first - when the curves of research and development in artificial intelligence, molecular biology and robotics all converge. Oboy. It will be amazing and unpredictable, and even the biggest of brass, let us devoutly hope, are going to be caught flat-footed. It is certainly something for all good Luddites to look forward to if, God willing, we should live so long. Meantime, as Americans, we can take comfort, however minimal and cold, from Lord Byron's mischievously improvised song, in which he, like other observers of the time, saw clear identification between the first Luddites and our own revolutionary origins. It begins:[0]

https://archive.nytimes.com/www.nytimes.com/books/97/05/18/r...


Something I'm finding odd is this seemingly perpetually repeating claim that the latest thing that came out actually works, unlike the last thing that obviously didn't quite work.

Then next month, of course, latest thing becomes last thing, and suddenly it's again obvious that actually it didn't quite work.

It's like running on a treadmill towards a dangling carrot or something. It's simultaneously always here in front of our faces but also not here in actual hand, obviously.

The tools are good and improving. They work for certain things, some of the time, with various need for manual stewarding in the hands of people who really know what they're doing. This is real.

But it remains an absolutely epic leap from here to the idea that writing code per se is a skill nobody needs any more.

More broadly, I don't even really understand what that could possibly mean on a practical level, as code is just instructions for what the software should do. You can express instructions at a higher level, and tooling keeps making that more and more possible (AI and otherwise), but in the end what does it mean to abstract fully away from the detail of the instructions? It seems really clear that this will never result in software that does what you want in a precise way, rather than some probabilistic approximation that must be continually corrected.

I think the real craft of software, such as there is one, is constructing systems of deterministic logic flows to make things happen in precisely the way we want them to. Whatever happens to the tooling, or whatever exactly we call code, that won't change.


that's a good take

> getting software that does what you want

so then we become PMs?


> an amazing future of perfect code from agentic whatevers will come to fruition...

Nobody credible is promising you a perfect future. But a better future, yes! If you do not see it, then know this: you have your head firmly planted in the sand and are intentionally refusing to see what is coming. You may not like it. You may not want it. But it is coming and you will either have to adapt or become irrelevant.

Does Copilot spit out useless PR comments? 100% yes! Are there tools that are better than Copilot? 100% yes! These tools are not perfect. But even with their imperfections, they are very useful. You have to learn to harness them for their strengths and build processes to address their weaknesses. And yes, all of this requires learning and experimentation. Without that, you will not get good results, and you will complain about these tools not being good.


> But it is coming and you will either have to adapt or become irrelevant.

I heard it will be here in six months. I guess I don't have much time to adapt! :)


> Its utility was never "six months away,"

6 months ago is when my coding became 100% done by AI. The utility already has been there for a while.

>I didn't have to be compelled to use it—I was eager to use it of my own volition as often as possible.

The difference is that you were a kid then, with an open mind; now your worldview has hardened into a fixed idea of how the world works and how things should be done.


> your worldview has hardened into a fixed idea of how the world works

Yeah, it's weird. I'm fixated on not having bugs in my code. :)


AI can help with that too by automatically fixing bugs.

I find it such a strange cycle to tell AI to write some code, then tell it to fix the bugs in that code. Why didn't the AI just not include those bugs the first time it wrote the code?!

We do the same with humans, so it isn't strange to me. It requires superhuman ability to always get it right on the first try.

Can you point to the most optimistic six month projections that you have seen?

I have encountered a lot of people saying it will be better in six months, and every six months it has been.

I have also seen a few predictions that say 'in a year or two they will be able to do a job completely'. I am sceptical, but I would say such claims are rare. Dario Amodei has been about the only prominent voice I have encountered that puts such abilities on a very short timeframe, and he still points to more than a year.

The practical use of AI has certainly increased a lot in the last six months.

So I guess what I'm asking is more specifics on what you feel was claimed, by whom, and how much did they fall short?

Without that supporting evidence, you may just be annoyed by the failure of claims that exist only in your imagination.


If you've only experienced MS Copilot I invite you to try the latest models through Codex (free deals ongoing), Claude Code, or Opencode. You may be surprised, for better or worse. What kind of software do you do?

> LLM coding doesn't feel revolutionary or exciting like this.

Maybe you’re just older.


Older than whom?

Than your younger self that got all excited about the Internet.

Truer words were never spoken. You have imparted wisdom upon me today. :)

If that’s your best response, a snarky and unfunny comment that would make a Gen Z guy blush, I’m not surprised you can’t fathom that age is a factor.

  > it's still just six months away
Reminds me of another "just around the corner" promise...[0]

I think it is one thing for the average person to buy into the promises, but I've yet to understand why it happens here, within our community of programmers. It is one thing for non-experts to fall for obtuse speculative claims, but it is another for experts. I'm excited for autonomous vehicles, but in 2016 it was laughable to think they were around the corner, and only now, 10 years later, does such a feat actually seem a few years away.

Why do we only evaluate people and claims on their hits and not their misses? It just encourages people to say anything and everything, because eventually one claim will be right. It's 6 months away because eventually it will actually be 6 months away. But is it 6 months away because it is actually 6 months away, or because we want it to be? I thought the vibe coder's motto is "I just care that it works." Honestly, I think that's the problem: everyone cares about whether it works, and that's the primary concern of all sides of the conversation here. So is it 6 months away because it is 6 months away, or because you've convinced yourself it is? You've got good reasons for believing that, you've got the evidence, but evidence for a claim is meaningless without weighing it against evidence that counters the claim.

[0] https://en.wikipedia.org/wiki/List_of_predictions_for_autono...


you’re probably not doing it right.

I’ve been programming since 1984.

OP basically described my current role with scary precision.

I mostly review the AI’s code, fix the plan before it starts, and nudge it in the right direction.

Each new model version needs less nudging — planning, architecture, security, all of it.

There’s an upside.

There’s something addictive about thinking of something and having it materialize within an hour.

I can run faster and farther than I ever could before.

I’ve rediscovered that I just like building things — imagining them and watching them come alive — even if I’m not laying every brick myself anymore.

But the pace is brutal.

My gut tells me this window, where we still get to meaningfully participate in the process, is short.

That part is sad, and I do mourn it quite a bit.

If you think this is just hype, you’re doing it wrong.


The state of the art is moving so rapidly that, yeah, Copilot by Microsoft using gpt-5-mini:low is not going to be very good. And there are many places where AI has been implemented poorly, generally by people who have the distribution to force it upon many people. There are also plenty of people who use vibe coding tools and produce utterly atrocious codebases. That doesn't preclude the existence of effective AI tools, and people who are good at using them.

Well said!

I wish there was a way to straighten myself out after I drag with the mouse or use the A/D keys to move left or right. I was expecting to be able to turn left or right, and weave through the servers (or whatever they are), as opposed to drifting left/right and eventually getting stuck.

What a great idea; good luck! Also, it's nice to read a hardware story on HN (we need more breaks from AI this and AI that).

How can I listen to your music?

Comments about Elon on HN have become exhaustingly cringe: dripping with devout derision, reeking of righteous reproach, and smacking of sanctimonious seething.

Apparently we must all gnash our teeth at the mere mention of "that man" or anything associated with him. It's as plebeian as it is predictable.

I'm sure this will now be downvoted into oblivion and I'll be accused of "defending an avowed racist" or some other such nonsense.


I want AI that I can command with the least possible effort, in the simplest terms, and it flawlessly does exactly what I said.

I want AI that responds instantaneously, and in a manner perfectly suited to my particular learning style.

I want AI so elegant in its form and function that I completely take it for granted.

What I'm getting instead is something clunky, slow, and flawed. So excuse me while I remain firmly in the anti-AI crowd.


Funny that he mentions people not pivoting away from COBOL. My neighbors work for a bank, programming in COBOL every day. When I moved in and met them 14 years ago, I wondered how much longer they would be able to keep that up.

They're still doing it.


The market can stay irrational longer than you can stay solvent.


It sounds like these people are staying solvent as long as the market stays irrational.


To be fair, that COBOL program has been working for probably 30 years (maybe even longer than that); that's unusually reliable and long-lived for a software project.

The only real contender in this regard is the Win32 API, and that actually did get used in enterprise for a long time too, before the major shift to cloud and Linux in the mid-2010s.

Ultimately the proof is in the real-world use, even if it's ugly to look at. I'd say, even as someone who is a big fan of Linux, that if I were given a 30-year-old obscure software stack that did nothing but work, I would be very hesitant to touch it too!


> the only real contender in this regard

I would like to add the business core functions of SAP R/3 (1992). Much of the code created for it in the early 90s still lives in the current SAP S/4HANA software.


It still needs continual maintenance, though. The developers still making their money in COBOL make it because the code doesn't just keep working untouched. (Just about no software does.)


There is a great deal of death in a language.


I really like this idea, but I get distracted when the vertical bars don't line up!

  +--------+
  |        |
  | ASCII! |
   |       |
  +--------+


The very first step says...

"Click Rub with Silk to strip the fluid from the rod."

...but the button text is GENERATE FRICTION, not Rub with Silk.


There are a few small editing issues, but rest assured that the concepts are all correct.

