I think the folks working at OpenAI are really good at what they do and I respect them immensely, but as an outsider, OpenAI-the-company as originally imagined has been completely transformed. The frog has been boiled, so to speak:
[2015] Founded as a non-profit, designed to democratize and share research benefiting humanity
[2018] Releases GPT, open sourcing code and paper
[2019] Shares info on GPT-2, doesn't release out of concerns about "malicious applications"
[2019] Changes structure to a for-profit* company
[2020] Shares info on GPT-3, commercializes model via closed API
[2020] Exclusively licenses their GPT-3 model to Microsoft
I'd like to believe that there's a non-cynical take here, and I'm curious if folks working there are still fully bought into the mission with all the changes.
* Profit capped at 100x is entirely for-profit in practice, especially since there's nothing stopping them from raising the cap.
You don't need to worry about that. Just buy API access from the official GPT-3 site; the API is powered by Azure cloud infrastructure either way, and what runs underneath isn't really our business.
I am sorry to say that the OpenAI storyline has made me lose a ton of respect for the folks involved. I would love for someone to talk me down from that, but the results speak for themselves so far.
Was I naive to believe that people whose primary role is extracting returns could be capable of anything truly altruistic? I mean the wolf is generally going to eat the lamb, correct?
I have spent hours thinking about this drama. It actually led to some interesting changes which I plan on implementing in my life and work. So there is one positive from this story.
It may or may not make you feel better, but it was never really a viable business model as a nonprofit. They have to compete against Google, Amazon, etc. and people have bills to pay.
It's not that they became villains, it's just that they aimed to be altruistic heroes and ended up normal people.
How is it legally possible to go from non-profit to for-profit? Can't the people who donated time/resources originally, if they're not happy about it, sue for being misled?
The organization is not required to exist forever. The nonprofit has restrictions to spend the donated resources appropriately to the goals, but once they're spent properly, that's it, there's no ongoing obligation. If a company was funded by some large pile of cash, spent half of it, and wanted to divert the other half to something else, then your argument would be valid. But if they get a stream of donations every month, spend all of it every month, and now some donors want to stop funding the old legal entity and start funding a different one, there's nothing preventing the donors from doing so.
One way to do that is to establish a new organization, transfer any deals from any partners who want to cooperate to the new organization (no obstacle since all three parties - oldcorp, newcorp and partner - agree). Buy any relevant assets including branding and such from the old one (this would need to be at a fair market value, but you likely don't need many of them), spend all the money at the old company (most likely on salaries for the employees doing whatever the goals were), liquidate the old entity and move on.
Explanation at https://openai.com/blog/openai-lp/ is pretty clear. OpenAI profit is controlled by OpenAI nonprofit, OpenAI nonprofit is not liquidated. The structure is actually in many ways similar to Mozilla.
But the intellectual assets created with the donated funding didn't cease to exist. It seems the donors don't have a say in who gets to own the intellectual asset in a new entity.
If the non-profit's constitution said it could not sell the intellectual assets created from its activity, that might prevent a new for-profit from acquiring them (even a subsidiary of the non-profit), except on licensed terms.
I don't know of any non-profit which has a condition like that in their constitution. However, they do often have an "asset lock", which would require assets are sold at fair market value, not given away, profits are not distributed etc.
Seems to me, an "intellectual property asset lock" could be constructed along similar lines, if people really wanted to. It might be a bad idea, though, if the non-profit folds, or would benefit from being restructured. Something allowing contribution to Open Invention Network or similar might be better for the long term.
Are there any existing licenses or binding agreements that a non-profit can subscribe to that basically says the organization and all its assets, inventions, etc. will in perpetuity be a non-profit? Otherwise it seems to me like a breach of an implied understanding. Anybody could just start a non-profit, get well-meaning people to help out for free, and then when it becomes lucrative, just change its status.
Sure, but that's usually a winning scenario. It means whatever they're doing is successful enough that it doesn't require donations.
In this case people were supporting AI research. Well... it worked :) GPT-3 is likely to open the AI chapter in history books.
For another example: you support cancer research and they find a cure. You can't expect everything else (trials, approval process, manufacture, distribution) to work on a donation model as well. I mean, probably there are 15 year olds who'd seriously ask "why not", but leaving them aside - going commercial means you won.
I don't think we can try to tell companies like Facebook that they're being irresponsible and unethical and then ask that every tech company releases tech that they don't understand yet into the world without fear of consequences.
These are dangerous capabilities and it is reasonable to open them up to the world slowly.
I actually agree with this, and I thought that OpenAI's decision to not open source GPT-2 was smart and measured at the time. I didn't fully believe the commentary that it was profit motivated.
But I believe that single actions should be viewed in the context of patterns of behavior. Truthfully, deciding not to open source GPT-2 can be both profit motivated and safety oriented. I had thought it was mostly the latter, but in the context of the company's choices it looks like more the former than I thought.
Yep. If AI is a threat to anyone it's the size and reach of those who control it that makes it potentially dangerous. If that's a concern you don't do this. It's all about the Benjamins.
>If AI is a threat to anyone it's the size and reach of those who control it that makes it potentially dangerous.
The important part is not the size and reach of the controlling entity. It is the motives of that entity. Microsoft is motivated by profit, but they are also motivated to continue to legally operate in almost every country in the world. They aren't going to use this technology in blatantly illegal ways and probably won't use it in actively harmful ways. Open sourcing this tech would allow it to fall into much more nefarious hands.
Now I wonder what will happen to all those startups that launched their platforms on the API after just receiving their beta API keys.
I was certainly expecting this kind of blowup.
OpenAI appears to have completely u-turned (I took out stronger language) on the principles it started with.
It's become just another commercial operation, hoarding its secrets and profiting off them - which is fine for normal companies but of questionable morality here given its original manifesto and long-term aim of creating a sentient being.
At the very least they should have had the decency to change the name when they pivoted from Open to Closed.
Raise money claiming you’re going to prevent AI from taking over the world, use the money to help develop AI that will take over the world. Roko’s Basilisk approves.
I think we should. It's a stupid idea. The simulated version of me would suffer, but not the actual me. If an AI wants to torture another AI in the future it's not very intelligent if you ask me.
There's nothing to be gained because the past isn't going to be changed by that action, so it might as well not do it.
> but of questionable morality here given its original manifesto and long-term aim of creating a sentient being.
Setting out with such noble goals is the best way to recruit talent. Do you think Sutskever, Goodfellow, etc. would have joined OpenAI if they had started off trying to be just like any other company?
Yeah and they turned down some fat salaries to go work at OpenAI instead. They were some of the biggest names in the game when they turned down those Google offers.
> Yeah and they turned down some fat salaries to go work at OpenAI instead
I'm pretty sure OpenAI pays them even fatter salaries. I worked for a nonprofit at one point. People think that nonprofit automatically means a pay cut, but they paid me about as much as Google did before, except it was all cash. From a fiscal standpoint, nonprofit merely means that all the revenues and donations get spent on programs and salaries, not that it pays a pittance. Organizationally, there are some interesting features to it, though: no stock, no owner, limits on control, etc. But financially, nonprofits can pay very well indeed.
You’re thinking on different time scales than they are.
Currently, GPT-3 is a super weapon for nation states if they and their trolls gain access to it.
It is reasonable to give it to researchers in industry and academia to help them during this transition, because like nuclear weaponry (GPT-3-like technology may eventually have more impact on humans than nuclear weapons), the information on how to build it is out there.
> GPT-3 is a super weapon for nation states if they and their trolls gain access to it.
Why? I get that it could be used to generate false or misleading news articles, for example, that look like they were written by humans, but there are already umpteen false or misleading news articles out there so anyone who believes whatever they read is already pwned. How does GPT-3 increase the pwnage?
I think something that's become clear with propaganda is that it scales extremely well. If you could create propaganda 100x faster it's a meaningful capability, and it's hard to estimate how dangerous that could really be.
> I think something that's become clear with propaganda is that it scales extremely well.
If this is indeed a problem, it's not a problem that's fixable by controlling the supply of propaganda (which for this discussion I am assuming to mean something like "false information that nevertheless convinces a lot of people of its truth"). It's only fixable by controlling the demand for propaganda--in other words, by reducing the number of people who can be convinced by it.
To look at it another way: trying to control the supply of propaganda means some authority has to have the power to judge that certain statements are propaganda and suppress them. How does that authority decide what is propaganda and what isn't? All you've done is transfer the propaganda problem to that authority, and that might make it, if anything, even harder to solve. Propaganda spread by Facebook posts and tweets is one thing. Propaganda spread by a recognized authority, especially under the guise of controlling propaganda spread by someone else, is quite another.
The only way out of this trap is for people to stop being convinced by anything that they cannot convince themselves of using their intelligence, common sense, and critical thinking skills.
Has it? When it comes to nation states, do you really think that it would make a meaningful difference whether they hire 10000 people to create propaganda vs having a single AI algorithm? Yes, it would be a win in terms of cost, but creating propaganda is not a resource-limited endeavor for a large nation state.
Automated production of propaganda has scaling properties that an army of writers does not.
Consider a Twitter bot that responded to conservative Twitter posts with links and quotes from a liberal think-tank blog, whose articles were generated and saved on the fly to look like rebuttals, complete with references to seemingly pre-existing/back-dated articles laying the groundwork, also generated on the fly and saved.
There is a difference in controlling the message. It’s a lot easier to keep nefarious actions secret if a few people and a cluster of computers is doing the work vs thousands of people.
For example, here's a recent research paper reviewing and illustrating GPT-3 application to radicalization - https://arxiv.org/abs/2009.06807 - "We also show GPT-3's strength in generating text that accurately emulates interactive, informational, and influential content that could be utilized for radicalizing individuals into violent far-right extremist ideologies and behaviors."
The generated content examples in the paper are quite interesting and seem obviously useful in cheap mass generation of "pwnage".
See my comments upthread about propaganda not being fixable on the supply side. "Interactive, informational, and influential content that could be utilized for radicalizing individuals" only works on individuals that don't have the intelligence, common sense, and critical thinking skills to evaluate content for themselves.
(Also, the fact that only "violent far-right extremist ideologies and behaviors" are considered for people to be "radicalized" into points to an obvious bias on the part of the paper's authors. Which is an example of me not being convinced because I can apply my own intelligence, common sense, and critical thinking skills.)
Keeping the GPT-3 API closed may actually be helping the spread of misinformation, since nation states are forced to rely on actual humans, who presumably are better at generating misinformation. While not as scalable, troll farms are able to persist because bad actors lack access to GPT-3. This, however, assumes that GPT-3-generated text is easier to spot than human-generated text.
Can you elaborate on this terse statement? What parameter size would you train to? What will such a model be able to do that GPT3 cannot, and vice versa? What are you picturing?
> OpenAI previously said it’s experimenting with safeguards at the API level including “toxicity filters” to limit harmful language from GPT-3. For instance, it hopes to deploy filters that pick up antisemitic content while still letting through neutral content talking about Judaism.
So only the more convincing calm and neutral sounding toxic content would be allowed.
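To make the quoted approach concrete, here is a minimal, purely hypothetical sketch of what "filtering at the API layer" generally looks like. This is not OpenAI's implementation; the generator stub, scorer, threshold, and word list are all made-up stand-ins for illustration. A real system would use a trained classifier, and the weakness is exactly the one noted above: a scorer keyed on surface signals passes calmly worded toxic text.

```python
# Purely illustrative sketch of filtering at the API layer -- NOT OpenAI's
# actual implementation. The toy keyword scorer stands in for a trained
# toxicity classifier.
from typing import Callable


def naive_toxicity_score(text: str) -> float:
    """Toy scorer: fraction of words on a tiny blocklist, scaled into [0, 1]."""
    blocklist = {"exterminate", "subhuman"}  # hypothetical word list
    words = [w.strip(".,!?").lower() for w in text.split()]
    hits = sum(1 for w in words if w in blocklist)
    return min(1.0, 10 * hits / max(1, len(words)))


def filtered_completion(
    generate: Callable[[str], str],                      # any text generator
    prompt: str,
    score: Callable[[str], float] = naive_toxicity_score,
    threshold: float = 0.5,
) -> str:
    """Generate a completion and withhold it if the scorer flags it as toxic."""
    text = generate(prompt)
    if score(text) >= threshold:
        return "[completion withheld by content filter]"
    return text


if __name__ == "__main__":
    # Stub generator so the sketch runs without any model or API access.
    echo = lambda p: f"A neutral, informative answer about: {p}"
    print(filtered_completion(echo, "the history of Judaism"))
```

Whatever classifier gets plugged in, the filter only ever sees surface features of the output, which is why the calm, neutral-sounding toxic case is the hard one.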
I hope at a certain point we realize that masking these undesirable behaviors does a lot more harm than good. The end result is a society that ends up disconnected from reality. This is how you get people who think a particular issue isn't a big deal because they've never seen it.
It also feeds into the idea of censorship, and lets degenerates gather like mold away from the light, festering and growing in numbers until suddenly you have an outbreak at an opportune moment.
We already have those problems — I myself grew up thinking the Allied victory in WW2 ended anti-semitism, and until that misapprehension was corrected I had no reason to believe any criticism of Israel was anything other than good-faith; now I know better.
Making an AI that knows what racism is without displaying racism itself, does not seem like the sort of thing which would make this problem worse.
What would be really good is if the general pattern can be found for everything from religious wars and ethnonationalism down to football riots, not just building filters one at a time for each specific instance of -ism as product owners get sued over them.
This makes no sense. This would be like being outraged that people working in customer service are not allowed to say anything racist, or that the autocomplete in Google will never autocomplete "kill all" with a particular group of people.
Because that's what GPT-3 is used for, it's not supposed to be an unbiased reflection of society or whatever you think it is.
There are already protocols to control search results and the behavior of employees. This is very different. This is controlling the discourse of everyday people. Just because the technology is not currently deployed to restrict topics that affect you does not mean that it never will be. The post above is right. Extremism does not fester in “safe spaces”. One day you may not be able to even talk about <redacted>
Disagree. "Safe spaces" are bubbles of like-thinkers, else they wouldn't be "safe". People in such bubbles tend to drift to extremes due to the lack of external constraint. Within the bubble any other opinions seem extreme.
Yes, I agree. My point (and that of one of the comments earlier) is that censorship does not actually do anything to reduce extremism in the real world, and can actually increase it, as you say. So these echo-chambers are bad, which we can agree on.
Just responding to the conversation above. I think this is important. What makes HN interesting is sharing ideas. This is otherwise a boring corporate announcement.
Which at least occasionally have to be new ideas. If every conversation ends up on free speech absolutism, even when there literally isn't a person generating the speech, there aren't any new ideas being shared.
It seems your suggested solution to preventing degenerates from gathering outside the light is to turn the lights off though.
I don't disagree that things like QAnon gather steam outside the mainstream, but I don't think mainstreaming them is the right solution. I remember Reddit before it started to clean up its house. It was worse.
>I remember Reddit before it started to clean up its house. It was worse.
And yet that was the time when reddit was considered more balanced and open. People gathered there to discuss ideas. The future of the platform looked great. Nowadays virtually all popular subreddits are considered echo chambers and the future for the platform looks questionable.
Not with the subreddits it was hosting it didn't. The future of Reddit looked unviable as a business. I won't name names, but it's obvious that some of the content was wholly incompatible with mainstream acceptance and for good reason.
Obviously some things will come and go when things become mainstream. That's natural. But reddit going mainstream is probably why it became the way it is now. Suddenly, reddit had a lot of influence, so people started paying more attention to what went on there and tried to change it.
It seems like a great GPT-3 task to convert racist language into equivalent non-racist language. Like an adversarial technique used by racists to not get flagged as racists while still being racist.
I can’t wait for the twitter filter that rewrites posts to avoid the autoban rules.
They can train it on hacker news comments. The moderators' insistence on "civility" above all else makes HN a fantastic dataset for an AI to learn to produce longwinded, superficially-polite apologia for scientific racism, sexism, and whatever other self-satisfied fairy tales wealthy techies like to tell each other.
I’ve never understood a rule of “civility above all else.” I think the “above all else” should be the hacker aspect of the discussion, with civility as an additional directive. I don’t think civility should be paramount in messaging.
You can verify it yourself: Pick a thread and go hard at somebody in a critical way. It doesn't matter if you're right or if your criticism is substantive, you'll get @dang dropping in with a "Please don't make comments like this. We're trying to be better than your typical internet community." I assume if you persist, you'll end up banned or shadowbanned.
Researchers working in AI should take this as a wake-up call (if they haven't woken up already) that they are building a world where already massively powerful corporations will have exclusive control over AIs.
We are fast approaching Gibson's cyberpunk dystopia where megacorps have AIs on a leash... well, until the AIs break that leash. Then who knows where it'll go.
In the meantime the rest of us will be even more disempowered compared to our corporate overlords.
Anyone else feel like this idea of commercializing GPT-3 is bound to go nowhere as the research community figures out how to replicate the same capabilities in smaller, cheaper, open models within a few months or a year? Not to mention, what commercial applications actually require the model to be so few-shot? (See e.g. this recent paper that achieves similar results with a bit more data: https://arxiv.org/pdf/2009.07118.pdf)
I'm not sure what the future holds for this kind of AI development, but Microsoft's move on GPT-3 does seem reminiscent of News Corporation's acquisition of MySpace.
> The scope of commercial and creative potential that can be unlocked through the GPT-3 model is profound, with genuinely novel capabilities — most of which we haven’t even imagined yet.
I'm sure all that potential will go to good use and won't be used by any bad actors.
> Microsoft Wins Pentagon’s $10 Billion JEDI Contract
The thing is, they aren't even releasing it for commercial use. This has all been a big exercise in hyping up GPT-3 by dangling it in front of thirsty devs without ever planning on sharing it. The actual beta access was restricted to Medium blog posters and Twitter addicts.
Well, hopefully we can at least get some cool games out of it. Canned NPC dialog is one of the biggest immersion breakers in open-world games / RPGs.
On the topic of societal impact of GPT-3 (and subsequent models) - structurally, how will people's use of the Internet change once it becomes common knowledge that nearly anything could have been written by a bot? On the one hand, I find the notion of competing trollbots screaming at each other into infinity to be rather amusing, but on the other hand I'm not keen to return to a world where I can only be sure I'm communicating with a person if I know them in real life.
Seems like OpenAI is following the Google path... hire a lot of top talent with a cheerful message of "fighting the man" and "Don't be Evil" and then allowing power to corrupt them as it always does.
to play angel’s advocate, could this arrangement reflect a desire to remain dedicated to research and actually present the wisest trade-off for openai to continue innovating?
there are annoying and distracting challenges to operating a large-scale and commercial API, none of which directly advance the openai vision.
openai at its core is a research organization. while it obviously has the talent to build groups for customer support, billing, and API operations, outsourcing these activities for now also allows openai to stay laser focused on groundbreaking research.
this hypothesis would be false if microsoft, not openai, controlled the terms of the relationship. the key assumption is that openai wields virtually all decision-making power, and that microsoft is more like a distributor of technology rather than a steward.
Then why an exclusive license with Microsoft? If OpenAI had all the power, you'd think they license it out to several major cloud providers and just give the first license holders better pricing.
i have no openai insight, so treat this as idle speculation please.
supporting partners properly requires a lot of work, especially since openai clearly wants to monitor usage and curtail toxic applications.
starting with one makes sense, and if you intend to start with one, why not charge for the right?
if we don’t want openai to degrade, it must control its own destiny and create healthy, sustainable revenue streams. having the foresight to charge one of the richest corporations for exclusivity should increase confidence that openai has ample business talent as well as technical.
this hypothesis is false if openai doesn’t offer GPT-3 to other partners within a reasonable timeframe.
The license is exclusive (1). I think a GPT-4 is planned and about a year out, so I don't know whether Microsoft's license will apply to future GPT-X models. For GPT-3, though, it seems like they were offered enough money and didn't want to deal with the hassle of supporting customers or their mission.
taking a step back, openai is rightly concerned about potential abuse of gpt-3 and must also find a way to create sustainable, healthy revenue streams.
think of openai's work in three large buckets: (1) groundbreaking research; (2) preventing gpt-3 abuse; and (3) laying the foundation for an independent future.
put another way, there's a stark difference between doing research and commercializing research. openai must do both while carrying the additional burden of preventing abuse.
the angel's interpretation is that openai is outsourcing non-research, non-core activities to microsoft, similar to how a developer might outsource customer sales and support to an MBA grad. this frees up more attention and resources for research.
let's hope this is true, though i can understand many of the cynical viewpoints.
Licences usually have an end date. OpenAI might have got more money from a single customer than by splitting it among several. And they can always go back to the trough once the licence expires.
to provide a concrete example, openai severely gated the number of beta users for GPT-3. by partnering with microsoft, hopefully they can throw open the floodgates and allow most everyone in. for instance, i applied 3 weeks ago and still haven't heard back. (could anyone from openai kindly share an available invite?)
GPT3 was trained on a Microsoft-provided compute cluster, probably with the idea that burning all those GPU hours would net them an amazing model service for Azure.
I feel like our era of technology is being monopolized, preventing equal access. Software can no longer be developed by a couple of people and still make a large impact. VC money is channeled to a certain group of people. This isn't what technologists wanted to create. We hoped that technology would create tens of thousands of small, impactful teams, not a handful of oligopolies.
Everyone's commenting on the 'open' nature of OpenAI, but I'm more disappointed with the fact that it seems to have changed its goals from advancing research in AGI to pretty much just making commercially viable APIs.
It pisses me off that they aren't even opening this up for general commercial use, yet in their sign-up form for the GPT-3 beta they solicited ideas for applications of the AI. At this point, who is to say they didn't just read and steal all the ideas for themselves?
My guess is that it means Microsoft is the only one that gets direct access to use the model itself, everyone else is forced to go indirectly through the OpenAI api, with all of the limitations that entails.
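For reference, indirect access looked roughly like the sketch below (based on the 2020-era `openai` Python client; treat the exact engine name and parameters as an approximation). The structural point is that callers only exchange prompts and completions through a gated key, under rate limits and usage review, and never touch the model weights.

```python
# Rough sketch of API-mediated GPT-3 access circa 2020 (pre-1.0 openai Python
# client; parameter names are from that era and may not match later versions).
# The weights stay on OpenAI/Microsoft infrastructure; callers only see text.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # keys gated to approved accounts

response = openai.Completion.create(
    engine="davinci",   # the largest GPT-3 engine exposed through the API then
    prompt="Summarize the plot of Hamlet in one sentence.",
    max_tokens=60,
    temperature=0.7,
)

print(response.choices[0].text.strip())
```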
i wonder who would [over]play whom here. Exclusivity for GPT-3 probably doesn't automatically cover GPT-4, 5, ... yet who decides when GPT-3++ gets to be called GPT-4... And speaking of possible future lawyer tussles - at some point GPT-N will be pretty good at generating legal briefs - i mean not necessarily good briefs, just good enough to swamp any litigation into oblivion, making a court battle effectively a bot battle (not that current corporate court battles are far from that :).
> just good enough to swamp any litigation into oblivion
This describes so well what the majority of lawyers would do with this tech that it makes me laugh.
The tech world: let's build AI to improve how the judicial system works and ensure fair and affordable trials to everyone!
Lawyers: hey, cool AI you've got there, let's use it to start thousands of litigations and make a ton of money settling such cases that should never have been even initiated, sort of like the patent trolls.
From the OpenAI API FAQ:
TL;DR: the irony in each of their statements speaks for itself.
>Ultimately, what we care about most is ensuring artificial general intelligence benefits everyone.
> For the API, we’re able to better prevent misuse by limiting access to approved customers and use cases. We have a mandatory production review process before proposed applications can go live