Hacker News | callc's comments

Is this sarcasm?

Dismissing concerns about issues that affect people today, on the promise of some future solution that may or may not materialize, and may or may not address the real problems of today, is really not cool.

On another note, Chicago winters mean icy driving. It may be harder to convince Chicagoans to join the “revolution” than sunny-day Californians (or anyone else in a non-icy climate).


“It is a crime to commit crimes using our product or service”

Yeah… crimes are crimes.


Specifically identifying road signs, traffic lights, and dead traffic lights is a narrow problem that has feasible solutions. To the point where we can reasonably say “yeah, this sub-component basically works perfectly.”

Compared to the overall self-driving problem, which is very much not an easy problem.


Job replacement and AI slop are both legitimate reasons that people have negative opinions of AI.

Amongst many other legitimate reasons.


> Then there's library and framework churn. AI models aren't good with this (as evidenced by the hours I wasted trying to get any model to help me through a webpack4 to webpack5 upgrade. There was no retained context and no understanding, so it kept telling me to do webpack4 things that don't work in 5).

I experienced this too when asking an LLM to help with a problem on a particular Arduino board. Even though it's a very popular microcontroller, the model is probably giving blended answers drawn from the 15 other types of Arduino boards, not the one I have.



Those charts are horribly misleading. There are articles you can find [0], or just use SteamDB [1] to see that (now that Steam requires games to disclose AI usage) there are 10k+ games that use AI. You can also see a jump in the number of games released from 2023 to 2024.

Companies like Lovable are reporting millions of projects that are basically slop apps. They're just not released as real products by an independent company.

The data is misleading. It's like saying high-quality phone cameras had no impact on the video industry because network TV isn't filmed on iPhones: at best you might find some ads and minor projects using them, but nothing big. That completely ignores that YouTube and TikTok are built on people's phone cameras, and their revenue rivals the major networks.

I am sorry, I just don't want to have this conversation about AI and its impact for the millionth time, because it just devolves into semantics, word games, etc. It's just so tiring.

[0] https://www.gamesradar.com/platforms/pc-gaming/steams-slop-p...

[1] https://steamdb.info/stats/releases/


Any metric that measures the amount of software delivered.

The link at the bottom of the post (https://mikelovesrobots.substack.com/p/wheres-the-shovelware...) goes over this exactly.

> Businesses, on the other hand, announce headcount reductions due to AI and of course nobody believes them.

It’s an excuse. It’s the dream peddled by AI companies: automate intelligence so you can fire your human workers.

Look at the graphs in the post, then revisit claims about AI productivity.

The data doesn’t lie. AI peddlers do.


Given the amount of progress in AI coding in the last 3 years, are you seriously confident that AI won't increase programming productivity in the next three?

This reminds me of the people who said that we shouldn't raise the alarm when only a few hundred people in this country (the UK) had Covid. What's a few hundred people? A few weeks later, everyone knew somebody who had it.


Okay, so if and when that happens, get excited about it _then_?

Re the Covid metaphor: that only works because Covid was the pandemic that did break out. It is arguably the first one in a century to do so. Most putative pandemics actually come to very little (see SARS-1, various candidate pandemic flus, the mpox outbreak, various Ebola outbreaks, and so on). Not to say we shouldn't be alarmed by them, of course, but “one thing really blew up, therefore all things will blow up” isn't a reasonable thought process.


AI codegen isn't comparable to a highly-infectious disease: it's been a lot more than a few weeks. I don't think your analogy is apt: it reads more like rhetoric to me. (Unless I've missed the point entirely.)

https://metr.org/blog/2025-03-19-measuring-ai-ability-to-com...

From my perspective, it's not the worst analogy. In both cases, some people were forecasting an exponential trend into the future and sounding an alarm, while most people seemed to be discounting the exponential effect. Covid's doubling time was ~3 days, whereas the AI capabilities doubling time seems to be about 7 months.
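
(To make the comparison concrete, here's a back-of-envelope sketch of the exponential arithmetic. The 3-day and 7-month doubling times are the figures above; the elapsed times are just illustrative.)

    # growth after elapsed time t with a fixed doubling time: 2 ** (t / doubling_time)
    def growth_factor(elapsed, doubling_time):
        return 2 ** (elapsed / doubling_time)

    print(growth_factor(42, 3))  # Covid-style: 42 days at a 3-day doubling time -> ~16,000x
    print(growth_factor(36, 7))  # METR-style: 36 months at a 7-month doubling time -> ~35x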

I think disagreement in threads like this can often be traced back to a miscommunication about the state today (or historically) versus the trajectory. Skeptics are usually saying: capabilities are not good _today_ (or worse: capabilities were not good six months ago when I last tested them. See: this OP, which is pre-Opus 4.5). Capabilities forecasters are saying: given the trend, what will things be like in 2026-2027?


The "COVID-19's doubling time was ≈3 days" figure was the output of an epidemiological model, based on solid and empirically-validated theory, based on hundreds of years of observations of diseases. "AI capabilities' doubling time seems to be about 7 months" is based on meaningless benchmarks, corporate marketing copy, and subjective reports contradicted by observational evidence of the same events. There's no compelling reason to believe that any of this is real, and plenty of reason to believe it's largely fraudulent. (Models from 2, 3, 4 years ago based on the "it's fraud" concept are still showing high predictive power today, whereas the models of the "capabilities forecasters" have been repeatedly adjusted.)

But what?

Give some concrete examples of why current LLM/AI is disruptive technology like digital cameras.

That’s the whole point of the article. Show the obvious gains.


falcor's point is that we will see this in 5 to 10 years.

Exactly. I'm arguing that what we should be focused on at this relatively early stage is not the amount of output but the rate of innovation.

It's important to note that we're now arguing about the level of quality of something that was a "ha, ha, interesting" in a sidenote by Andrej Karpathy 10 years ago [0], and then became a "ha, ha, useful for weekend projects" in his tweet from a year ago [1]. I'm looking forward to reading what he'll be saying in the next few years.

[0] https://karpathy.github.io/2015/05/21/rnn-effectiveness/

[1] https://x.com/karpathy/status/1886192184808149383?s=20


Why so long?

If AI had such obvious gains, why not accelerate that timeline to 6 months?

Take the average time to make a simple app, divide by the supposed productivity speed-up, and that should be roughly how long before we see a wave of AI-coded apps.
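
(A toy version of that arithmetic, with made-up numbers for both the baseline and the claimed speed-up:)

    # Hypothetical figures: a simple app takes ~12 weeks to build, vendors claim a 5x speed-up.
    avg_weeks_per_app = 12
    claimed_speedup = 5
    print(avg_weeks_per_app / claimed_speedup)  # ~2.4 weeks before AI-coded apps should start appearing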

As time goes on, the only conclusion we can reach (especially looking at the data) is that the productivity gains are not substantial.


> Why so long?

Because in the beginning of a new technology, the advantages of the technology benefit only the direct users of the technology (the programmers in this case).

However, after a while, the corporations see the benefit and will force their employees into an efficiency battle, until the benefit has shifted mostly away from the employees and towards their bosses.

After this efficiency battle, the benefits will become observable from a macro perspective.


GPT-3 was released in May 2020. It's been nearly 5 years.

The first digital camera was released in around 1975? Digital cameras overtook film camera sales in 2005, 30 years later.

Why is GPT-3 relevant? I can't recall anyone using GPT-3 directly to generate code. The closest would probably be Tabnine's autocompletion, which I think first used GPT-2, but I can't recall any robust generation of full functions (let alone programs) before late 2022 with the original GitHub Copilot.

This gives me hope that we will finally see some competition to the Android/iOS duopoly.

Same here.

It’s amazing how intense the Scrooge McDuck vibes coming off the MBA executive class are.

Crank the screws, tighten the belt, offshore, increase profits at all costs. The next generations are going to have it rough, since these elites have intentionally hoarded prosperity at the expense of their countrymen.


This CLAUDE.md dance feels like herding cats. Except we’re herding a really good autocorrect: an encyclopedic parrot, sans intelligence.

Relating to or personifying an LLM as an engineer doesn’t work out.

Maybe the best mental model currently is just “a good way to automate trivial text modifications” plus “encyclopedic ramblings.”


Unfair characterization.

Think about how this thing interacts with your codebase: it can read one file at a time, or sections of files.

In that UX, is it ergonomic to go hunting for patterns and conventions? If you have to linearly process every single thing you look at, every time you do something, how are you supposed to have “peripheral vision”? If you have amnesia, how do you continue to do good work in a codebase, even given that you’re a skilled engineer?

It is different from you. That is OK. It doesn’t mean it’s stupid; it means it needs different accommodations to perform as well as you do. Accommodations IRL exist for a reason: different people work differently and have different strengths and weaknesses. Just like with humans, you get the most out of them if you meet them where they are and work with them from there.
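
(For what it's worth, those accommodations usually boil down to a short conventions file the agent reads up front. A hypothetical CLAUDE.md sketch, with made-up paths and commands, might look like this:)

    # CLAUDE.md (hypothetical example)
    - Layout: services/ (Go APIs), web/ (TypeScript UI), shared/ (protobuf schemas)
    - Run `make test` before declaring a change done; never edit generated code under gen/
    - Error handling: wrap errors with context, don't swallow them
    - New endpoints should copy the pattern in services/billing/handler.go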

