For a lot of AI researchers, OpenAI has been a huge disappointment. We had hoped that OpenAI would be the company to democratize AI with good open-source work, transparency, no PR bullshit (looking at you, DeepMind), and evangelism. That they would develop in the open, and perhaps even do research in the open. You know, kind of like the name says.
It all started out okay with their release of OpenAI Gym, along with tutorials, leaderboards, and competitions around it. That was when Karpathy was still there. Over time, many projects have been abandoned, become poorly maintained, or just disappeared. And many projects they promised never happened. OpenAI became just another research lab obsessed with publishing papers in closed (!) venues, indistinguishable from Google AI, DeepMind, FAIR, MSR, and the many others.
There is nothing open or different about them. Most paper code is not published, and even when it is, it's just the typical poorly written and unmaintained research code that you see from other labs. None of their infrastructure is open source either, because it's needed to maintain their competitive advantage in training models and publishing research papers. GPT-3 being offered as a paid API to a select number of people is the latest joke in a long series of jokes. All of this would be fine if it weren't for the name and the branding of being a transparent, good-willed nonprofit. It is just misleading, and that rubs many people the wrong way, as if the whole "open" thing was just a PR stunt.
HuggingFace these days is pretty much what OpenAI should have been, but only time will tell what happens.
A nuanced perspective would look at the arguments as to why OpenAI is doing the things they are doing. For example:
* OpenAI publishes in closed journals (actually conference proceedings) because that is where all the cutting-edge research is published and reviewed. I cannot recall an OpenAI paper that wasn't available either via arXiv or their website, despite being published in a closed venue. What is the alternative here? Where should they go for quality peer review? Yes, you can argue the peer review at top conferences is not high quality, but is it worse than no peer review at all, or peer review from open-access no-name journals?
* How does OpenAI make money? How much are they bringing in? How much does it cost to support things like the OpenAI Gym, etc.? How much does it cost OpenAI in terms of bandwidth to host pre-trained versions of GPT-3? At some point a company needs to make money and prioritize resources - they can't give everything away for free in perpetuity.
I don't think these questions have obvious answers - there is give and take.
Organizations are constantly making decisions that trade off certain values for others, e.g. openness vs. safety/expediency/funding. But if they use the word "open" in their name, signalling to people that it is one of their foundational values, people will expect them to pick openness even when it's not the easiest, safest, most expedient, or most profitable choice. They expect them to pick openness when it's hard.
OpenAI started as a non-profit.
I didn't know this was even possible/legal. Start as a non-profit for all the tax advantages and convert to for-profit once you've got a saleable product? Maybe startups should start doing this.
Ok, so what's the point of OpenAI then?
Open source has very specific definitions, and the OSI has a good process to determine whether a project meets them. I doubt OpenAI can meet them.
The sad thing is: OpenAI might just foreshadow the power step function of the future, disenfranchising those who are on the wrong side of the API far more than we see today (where it's somewhat limited to the gig economy).