Hacker News | new | past | comments | ask | show | jobs | submit | blharr's comments

I don't get the point of this argument. Yes, the people who buy pinhole cameras are creeps too

why is someone protecting their property/children a creep?

Pinhole cameras ain't for that

AI is a big driver of literal, direct corruption. Language and knowledge are forever tainted by an outpouring of AI-generated spam. Evaluating resumes is more difficult now because you can't tell real impact from fabricated hallucinations. Open source projects are overwhelmed with AI-generated PRs...

Any corruption is emboldened by AI; it's a catalyst for the problem, and doesn't seem anywhere close to being a fix.


The breeze is more like a 2 ton harvester expertly engineered to knock your tree down.

What's stopping the price from being extremely low? Plenty might pay $1 to take a bundle of 1000 items of clothing, pick through it and find 20 items they like, then destroy the 980 other items.

Isn't that still better than it all getting destroyed?

Also if someone is buying new goods for pennies on the dollar, I'd expect them to find some value in more than 2% of the stock.


Except being destroyed locally in a controlled or regulated process and being shipped to be destroyed overseas under more relaxed regulations are not equal, footprint-wise.

sounds like an improvement on the current system, no?

980 items being shipped and then destroyed versus 1000 items being destroyed locally.

It's 980(x + y) instead of 1000x, where x is the footprint of destroying an item and y is the footprint of shipping it overseas.

I have no knowledge of the tradeoffs, but I might also imagine that the method of destruction could be worse: incineration vs landfill vs thrown in the ocean, etc
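That comparison can be sketched numerically. This is a toy model only: x and y are hypothetical per-item footprint values, not real data, and the 1000/20 split comes from the example upthread.

```python
# Toy footprint comparison: destroy all 1000 items locally, versus ship
# everything overseas, salvage 20 items, and destroy the remaining 980.
# x: footprint of destroying one item (hypothetical units)
# y: extra footprint of shipping one item overseas (hypothetical units)

def footprint_local(x, n=1000):
    """All n items destroyed locally: n * x."""
    return n * x

def footprint_shipped(x, y, n=1000, salvaged=20):
    """All n items shipped; `salvaged` reused, the rest destroyed: (n - s) * (x + y)."""
    return (n - salvaged) * (x + y)

# Shipping only wins when y is small relative to x:
#   980*(x + y) < 1000*x   =>   y < 20*x/980, roughly 2% of x.
x = 1.0
print(footprint_shipped(x, y=0.01) < footprint_local(x))  # tiny shipping cost
print(footprint_shipped(x, y=0.5) > footprint_local(x))   # large shipping cost
```

So under this toy model the "isn't salvaging some better?" intuition only holds if shipping adds less than about 2% to each item's footprint, which seems optimistic for overseas freight.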


Not quite when you are burning a ton of fuel to ship them overseas to arrive in the burn pit.

Maybe, but it could also just be self-promotion by the owner of this 'agent'. They've set it up to contribute to a bunch of big open source projects. They probably want to be able to say "I've contributed XX PRs to large open source projects."

I think most of it is them not having a way to know if the contribution was completely autonomous or directed by a human.

So they're defaulting to being as explanatory as possible, because they don't want to give a rudely abrupt reply even if the poster is abusing AI.


That post also looks suspiciously like AI slop


Yep. Two latest comments are full of LLM tells, plus an LLM-generated Show HN.

As usual with modern Claudes and GPT-5s, the output repeats and overemphasizes jargon from the input tokens without clarifying or switching up the wording.


Sure, but the ID solution is an "if everyone just gives up their privacy / anonymity / sensitive data" solution, and its enforcement mechanism is denial of service.

In fact it's worse: every site must also implement this security check, or everyone must agree to use only sites and services that follow this policy. Otherwise anyone can just use another, often 'less safe', website.


I'm not advocating for that either, I'm only pointing out that "if everyone just" is a collective action problem that is a non-solution because it doesn't describe the mechanism by which everyone does something.

Your example confuses the locus of control. The platform is making the choice and relies on user inaction rather than action. Users as a whole basically always descend gradients, and if they like / are addicted to the service, they'll descend with enough momentum to carry them over one-time friction like an ID check. The null hypothesis is they continue using the service. For it to be an "if everyone just" answer, it would be "if everyone just decided to stop using these extremely sticky services" because that is the de facto choice they are presented with. And it similarly suffers from an "if everyone just" lack of plausible mechanism.

The point of calling out non-solutions masquerading as solutions is to keep people's energy focused on possible but unstated solutions, rather than spending time blaming people for behavior largely determined by myriad immovable circumstances.


The above post is an example of the LLM providing a bad description of the code. "Local first" with its default support being for OpenAI and Anthropic models... that makes it local... third?

Can you provide examples in the wild of LLMs creating good descriptions of code?

