
I think most are looking for both.

AI/LLM knowledge without programming knowledge can make a mess.

Programming knowledge without AI/LLM knowledge can also make a mess.


> AI/LLM knowledge without programming knowledge can make a mess.

That makes sense.

> Programming knowledge without AI/LLM knowledge can also make a mess.

How? I'd imagine that most typically means continuing to program by hand. But even someone like that would probably know enough to not mindlessly let an LLM agent go to town.


> How? I'd imagine that most typically means continuing to program by hand.

I think the use of LLMs is assumed by that statement. The point is that even experienced programmers can get poor results if they're not aware of the tech's limitations and best practices. It doesn't mean you get poor results by default.

There is a lot of hype around the tech right now; plenty of it overblown, but a lot of it also perfectly warranted. It's not going to make you "ten times more productive" outside of maybe laying the very first building blocks on a green field; the infamous first 80% that only take 20% of the time anyway. But it does allow you to spend a lot more time designing and drafting, and a lot less time actually implementing, which, if you were spec-driven to begin with, has always been little more than a formality in the first place.

For me, the actual mental work never happened while writing code; it happened well in advance. My workflow hasn't changed that much; I'm just not the one who writes the code anymore, but I'm still very much the one who designs it.


Yes, I've seen many people become _too_ hands-off after an initial success with LLMs, and get bitten by not understanding the system.

Hirers, above, are more focused on the opposite side, though: engineers who try AI once, see a mess or hallucinations, and decide it's useless. There is some learning to figure out how to wield it.


"How?" <- It shows a lack of curiosity?

"probably know enough" <- that's exactly the point of the question, is the candidate clueless about AI/LLM.


> "How?" <- It shows a lack of curiosity?

We're talking about a codebase here. How does "lack of curiosity" about LLMs "make a mess"?

> "probably know enough" <- that's exactly the point of the question, is the candidate clueless about AI/LLM.

Probably knows enough about what's a good vs. bad change. If you're "clueless about AI/LLM" but know a bad change when you see one, how do you "make a mess"?

It's 2026; even a developer who's never touched an LLM has heard about LLM hallucinations. If you've got programming knowledge, you should know how to make changes (e.g. you're not going to commit 200 files for a tiny change, because you know that doesn't smell right), which should guard against "making a mess."

My point is that it doesn't seem reasonable to assume symmetry here, i.e. that if you don't know both things, you'll make a mess. That also implies everything built before 2022 was a mess, because those developers knew programming but not LLMs, which is an unreasonable claim to make.


I was too cute in trying to be terse, but I meant a mess while using AI:

> [Employers], above, are more focused on the opposite side, though: engineers who try AI once, see a mess or hallucinations, and decide it's useless. There is some learning to figure out how to wield it.


HN often avoids politics, but these were some of the most upvoted stories recently:

https://news.ycombinator.com/item?id=47188697

https://news.ycombinator.com/item?id=47189650


Motorola phones are Chinese, aren't they? They mention being a Lenovo company in the article.

Yes, Motorola is a Chinese company.

This is a shame for the old American mobile phone industry. There was always potential in the brand and the phones they produced that deserved to be saved.

Now hopefully Lenovo does it justice, unlike ThinkPad, which they have milked and diluted everything out of.


How have they been hostile to open weight models and research? Just because they don't release models themselves?

Note that they are still releasing interesting research.


Why? What has their PR department done? Most people are quite critical of a lot of their messaging; it's their actions that seem worth encouraging.

Feel free to judge them by their actions rather than intentions. This situation being an example.

This effectively is cancelling, isn't it?

You're implying cancelling quietly would be better. But the department would just use a different supplier. This seems like the action someone would take if they cared about the issue.


I think he's more pragmatic than that.

So you think we should never support them doing something "positive"? What incentive does that give?

His actions seem pretty consistent with a belief that AI will be significant and societally-changing in the future. You can disagree with that belief but it's different to him being a liar.

The $200m is not the risk here. They threatened labelling Anthropic as a supply chain risk, which would be genuinely damaging.

> The DoW is the largest employer in America, and a staggering number of companies have random subsidiaries that do work for it.

> All of those companies would now have faced this compliance nightmare. [to not use Anthropic in any of their business or suppliers]

... which would impact Anthropic's primary customer base (businesses). Even for those not directly affected, it adds uncertainty around the brand.

