Hacker News new | past | comments | ask | show | jobs | submit login

OpenAI should be compelled to release their models under (e.g.) GPLv3. That's it. They can keep their services/profits/deals/etc. to fund research, but all products of that research must be openly available.

No escape-hatch excuse of "because safety!" We already have a safety mechanism -- it's called government. It's a well-established, representative body with powers, laws, policies, practices, agencies/institutions, etc., whose express purpose is to protect and serve via democratically elected officials.

We the people decide how to regulate our society's technology & safety, not OpenAI, and sure as hell not Microsoft. So OpenAI needs a reality check, I say!




Should there also be some enforcement of sticking to non-profit charter, and avoiding self-dealing and other conflict-of-interest behavior?

If so, how do you enforce that against what might be demonstrably misaligned/colluding/rogue leadership?


Yes, regulators should enforce our regulations, if that's your question. Force the nonprofit not to profit; prevent frauds from defrauding.

In this case, a nonprofit took donations to create open AI for all of humanity. Instead, they "opened" their AI exclusively to themselves wearing a mustache, and enriched themselves. Then they had the balls to rationalize their actions by telling everyone that "it's for your own good." Their behavior is so shockingly brazen that it's almost admirable. So yeah, we should throw the book at them. Hard.



