
> I don't like the idea of it being trained on anything I've written or created[1].

I'm more curious about this part - because I'm the opposite. I'm ecstatic that having open source code on GitHub over the last ~12 years means I've contributed to machine intelligence. What a wild thought, and how super cool that everyone who contributed to the internet also gets to contribute to AI.

I can't imagine being upset that your output into the world is accelerated and incorporated into what eventually will be AGI. It's an honor, really.

And obviously I disagree with the rest of the post. But trying to add something useful to the discussion - why are so many people offended that AI is learning from them? Do they see it some other way? As exploitation? But you are the one who put the content out there! If you didn't want others to see and use and learn from your content, it has always been rule #1 that you shouldn't put it online. Is this not drilled into everyone by this point in time?

> These tools will improve ... I don't think this is a good thing.

First time I've heard this one. Who doesn't want the tools to improve? You don't want to have an option to be excluded from the AI training that you were so upset about? You don't want to have an option to get paid for being included in training? You don't want models developed that don't use copyrighted content? You don't want anything to be better?

You want ChatGPT to continue hallucinating forever? You don't want OpenAI to reduce the harm that comes from it hallucinating?

I find it hard to believe that people who are so viciously against AI really mean everything they say. To state so confidently that you want the world to halt progress on one of the most innovative and useful inventions of our time - one that can measurably improve people's lives - is a hot take indeed.



>Do they see it some other way? As exploitation? But you are the one who put the content out there! If you didn't want others to see and use and learn from your content,

The point is that "others" learning from things is fine. The problem starts when it's a machine acting on behalf of a faceless mega-corporation. It's like the tragedy of the commons, except instead of originating from an invisible mass of takers, it mostly comes from a small number of known actors like Microsoft and Google. No shit that people will rally against those.

> First time I've heard this one. Who doesn't want the tools to improve?

The reasoning is also given: improving machine learning has an environmental cost. It's also used to deskill jobs (which I don't think is a great argument, although it's not implausible for inequality to massively increase due to deployment of ML). The author also says AI is used to muddy public discourse, which is an understandable fear given that text generation can be used to fake e.g. some grassroots movement.



