AI is a big driver of literal, direct corruption. Language and knowledge are forever tainted by the outpouring of AI-generated spam. Evaluating resumes is harder now because you can't tell real impact from fabricated hallucination. Open source projects are overwhelmed with AI-generated PRs...
Any corruption is emboldened by AI; it's a catalyst for the problem and doesn't seem anywhere close to being a fix.
What's stopping the price from being extremely low? Plenty of people might pay $1 for a bundle of 1000 items of clothing, pick through it, find 20 items they like, then destroy the other 980.
Except being destroyed locally in a controlled or regulated process and being shipped overseas to be destroyed under more relaxed regulations are not equal footprint-wise.
980 items being shipped then destroyed versus 1000 items being destroyed.
It's 980(x+y) instead of 1000(x), where x is the per-item footprint of destruction and y the per-item footprint of shipping. The shipped route is worse whenever 980(x+y) > 1000x, i.e. whenever y > x/49.
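A quick sketch of that comparison in Python. The function names and the per-item footprint values are made up for illustration; only the 980 vs. 1000 item counts come from the thread:

```python
def footprint_destroy_locally(n_items, destroy_cost):
    # All items destroyed in the local, regulated process: n * x
    return n_items * destroy_cost

def footprint_ship_then_destroy(n_items, destroy_cost, ship_cost):
    # Items are shipped overseas first, then destroyed there: n * (x + y)
    return n_items * (destroy_cost + ship_cost)

# Illustrative, made-up per-item footprints (arbitrary units).
x = 1.0  # destruction footprint per item
y = 0.5  # shipping footprint per item

local = footprint_destroy_locally(1000, x)        # 1000x = 1000.0
shipped = footprint_ship_then_destroy(980, x, y)  # 980(x+y) = 1470.0

# Shipping-then-destroying is worse whenever 980(x+y) > 1000x,
# which simplifies to y > x/49.
print(local, shipped)
```

With these assumed numbers the shipped route's footprint is about 47% higher, even though 20 fewer items end up destroyed.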
I have no knowledge of the tradeoffs, but I might also imagine that the method of destruction could be worse: incineration vs. landfill vs. dumping in the ocean, etc.
Maybe, but it could also just be self-promotion by the owner of this 'agent'. They've set it up to contribute to a bunch of big open source projects. They probably want the ability to say "I've contributed XX PRs to large open source projects."
Yep. The two latest comments are full of LLM tells, plus an LLM-generated Show HN.
As usual with modern Claudes and GPT-5s, the output repeats and overemphasizes jargon from the input tokens without clarifying or switching up the wording.
Sure, but the ID solution is an "if everyone just gives up their privacy / anonymity / sensitive data" answer, and the enforcement mechanism is denial of service.
In fact it's worse. Every site must also implement this security check, or everyone must agree to only use sites and services that follow this policy. Otherwise anyone can just use another, often 'less safe', website.
I'm not advocating for that either. I'm only pointing out that "if everyone just" is a collective action problem, and a non-solution, because it doesn't describe the mechanism by which everyone does something.
Your example confuses the locus of control. The platform is making the choice and relies on user inaction rather than action. Users as a whole basically always descend gradients, and if they like / are addicted to the service, they'll descend with enough momentum to carry them over one-time friction like an ID check. The null hypothesis is they continue using the service. For it to be an "if everyone just" answer, it would be "if everyone just decided to stop using these extremely sticky services" because that is the de facto choice they are presented with. And it similarly suffers from an "if everyone just" lack of plausible mechanism.
The point of calling out non-solutions masquerading as solutions is to keep people's energy focused on possible but unstated solutions, rather than spending time blaming people for behavior largely determined by myriad immovable circumstances.
The above post is an example of the LLM providing a bad description of the code. "Local first" with its default support being for OpenAI and Anthropic models... that makes it local... third?
Can you provide examples in the wild of LLMs creating good descriptions of code?