Stated intent and actual intent are usually different. I expect they intended the opposite all along and were just riding the open-source ethical-AI wave to profit.
I've talked to a few people who left OpenAI a few years ago. The deviation from the original mission was a "boiling frog" process, best demarcated by the departure of the Anthropic founders. The core team doing the actual science and engineering work did legitimately believe in the open research mission. But when funding was hard to find, things kind of broke.
I strongly doubt it. Sam was pretty damn rich before OpenAI. He's written a lot about AI and his worry that someone else would build it first. The original plan was roughly: build the basis for ethical AI and have someone else build on top of it. That somehow pivoted into everyone using their API.
If they just wanted money, they could have cut out a lot of the ethical filters and choked out competitors. Google didn't stop NSFW. Twitter, Reddit, Tumblr, etc. didn't. AI is bound to be used for NSFW among other things, but they've set standards to keep it ethical.
I think eventually they did let loose to try to keep ahead of competitors. This probably pissed off the board and led to the recent drama? Just speculation. But the new models are nowhere near as anal as the initial release.
> If they just wanted money, they could have cut out a lot of the ethical filters and choke out competitors.
I think you are wrong here. The safety filters are an important part of the PR and regulatory lobbying effort, which is key to OpenAI's long-term moneymaking plan, given the absence of a durable moat for commercial AI. ("Safety" and "ethics" in AI are labels for the boundaries and concerns of different ideological factions; Altman and OpenAI are deep in the "safety" camp, which has the most money and power behind it, so "safety" is also becoming the generic term for AI boundaries.)
> If they just wanted money, they could have cut out a lot of the ethical filters and choke out competitors. Google didn't stop NSFW. Twitter, Reddit, Tumblr, etc didn't.
Neither, in practice, has OpenAI: there are whole communities built around using OpenAI's models for very, very NSFW purposes. They've prevented casual NSFW to present the image they want to the government and interest groups whose support they need as they lobby to shape regulation of AI, and to avoid becoming a target for things like the 404 Media campaign against CivitAI, where the NSFW content is more readily visible.