OpenAI should now change their name to ClosedAI (reddit.com)
503 points by gitgud on July 20, 2020 | 108 comments



Someone asked for a more nuanced perspective, so here we go.

For a lot of AI researchers, OpenAI has been a huge disappointment. We had hoped that OpenAI would be the company to democratize AI with good open source work, transparency, no PR bullshit (unlike DeepMind), and evangelism. That they would develop in the open, and perhaps even do research in the open. You know, kind of like the name says.

It all started out okay with their release of OpenAI Gym, tutorials, leaderboards, and competitions around that. That was when Karpathy was still there. Over time, many projects have been abandoned, become poorly maintained, or just disappeared [1]. And many projects they promised never happened [2]. OpenAI became just another research lab obsessed with publishing papers in closed (!) journals, indistinguishable from Google AI, DeepMind, FAIR, MSR, and the many others.

There is nothing open or different about them. Most paper code is not published, and even when it is, it's just the typical poorly written and unmaintained research code that you see from other labs. None of their infrastructure is open source either, because it's needed to maintain their competitive advantage to train models and publish research papers. GPT-3 being offered as a paid API to a select number of people is the latest joke in a long series of jokes. All of this would be fine, if it were not for the name and branding of being a transparent and good-willed nonprofit company. It is just misleading, and that rubs many people the wrong way, as if the whole "open" thing was just a PR stunt.

HuggingFace [0] these days is pretty much what OpenAI should have been, but only time will tell what happens.

[0] https://huggingface.co/

[1] https://www.reddit.com/r/MachineLearning/comments/aqwcyx/dis...

[2] https://github.com/openai/roboschool/issues/159


I don't see how this is a nuanced perspective - it seems to restate the same complaints/arguments just about every comment makes in these discussions.

A nuanced perspective would look at the arguments as to why OpenAI is doing the things they are doing. For example:

* OpenAI publishes in closed journals (actually conference proceedings) because that is where all the cutting edge research is published and reviewed. I cannot recall an OpenAI paper that wasn't available either via arXiv or their website, despite being published in a closed journal. What is the alternative here? Where should they go for quality peer review? Yes, you can argue the peer review at top conferences is not quality, but is it worse quality than no peer review, or peer review from open-access no-name journals?

* How does OpenAI make money? How much are they bringing in? How much does it cost to support things like the OpenAI Gym, etc.? How much does it cost OpenAI in terms of bandwidth to host pre-trained versions of GPT-3? At some point a company needs to make money and prioritize resources - they can't give everything away for free in perpetuity.

I don't think these questions have obvious answers - there is give and take.


It seems like there are a lot of good reasons for every choice they made.

Organizations are constantly making decisions that are trading off certain values for others, i.e. openness vs safety/expediency/funding. But if they use the word open in their name, signalling to people that is one of their foundational values, people will expect them to pick openness even when it's not necessarily the easiest, safest, most expedient, or most profitable choice. They expect them to pick openness when it's hard.


> At some point a company needs to make money

:-/

OpenAI started as a non-profit.


Non-profit does not mean “spends money in perpetuity with no revenue.”


that's not what happened with OpenAI though. They're not a non-profit anymore, they changed to a "controlled profit" (lol) model.

I didn't know this was even possible/legal. Start as a non-profit for all the tax advantages and convert to for-profit once you've got a saleable product? Maybe startups should start doing this.


What's the point? If your business doesn't turn a profit then you don't owe business income taxes anyways. Most businesses take several years to reach profitability.


I thought it was capped profit, not controlled.


"OpenAI is governed by the board of OpenAI Nonprofit, which consists of OpenAI LP employees Greg Brockman (Chairman & CTO), Ilya Sutskever (Chief Scientist), and Sam Altman (CEO), and non-employees Adam D’Angelo, Holden Karnofsky, Reid Hoffman, Shivon Zilis, and Tasha McCauley."

https://openai.com/blog/openai-lp/


The NFL is also managed by a non-profit. Just because you're managed by a non-profit doesn't mean your company is also a non-profit.


The NFL is a (nonprofit) trade association for NFL team companies.


Yes, but the computing they are trying to do is expensive, so it makes sense to then try and get some self-sustaining revenue by leveraging their research into a software service. I must admit I don't know about their current status of funding from large companies etc, but I do think it makes sense to try and make a bit of their own money to be more independent.


Non-profit doesn't mean zero revenue.


The Girl Scouts are a non-profit yet they don't give their cookies away for free.


Girl Scouts USA has been a textbook example for decades now of an institution that misuses non-profit status for financial gain.


Good point, but their name does not imply that their cookies are free.


> OpenAI publishes in closed journals (actually conference proceedings) because that is where all the cutting edge research is published and reviewed.

Ok. So what's the point of OpenAI then?


A bunch of fellow researchers and I started Manifold Computing (https://manifoldcomputing.com), where we're hoping to do live, open source research and build open source tools. As you said, time will tell, but I hope we can do good work this way.


Hugging Face and the spaCy team have done some incredible work. Huge fan of both for my NLP projects.


I think this illustrates just how complex the topic is, though. Hugging Face is awesome, and Transformers has done so much to democratize NLP, but does it exist without labs like OpenAI releasing models like GPT-2? The ecosystem is still young and fluid, and as a result, it's super complex. I completely understand the critique of what OpenAI is now vs. what they positioned themselves as early on, but I think it is also a symptom of all the open questions around how ML research is to be done in a way that maximizes community benefit while remaining sustainable.


Hugging Face is also a for-profit startup. I wonder how betrayed this guy is going to feel when they launch a paid service...


Open doesn’t mean free. Plenty of for-profit open source companies are making good money: Redis, GitLab, Elastic, Sentry, Docker, Mongo, etc.

Open source has very specific definitions; the OSI has a good process to determine whether a project meets them. I doubt OpenAI can meet them.


Are they doing any notable research at all?


As soon as I learned that Sam Altman is involved, I could infer the direction. I cannot recall Sam being an activist for anything open - but I do remember him as the "head of the startup world", i.e. creating exclusive opportunities for venture capitalists by using up and tearing through bright-eyed talent. I think he was pretty successful at the latter.

The sad thing is: OpenAI might be just a foreshadowing of the power step-functions of the future, disenfranchising those who are on the wrong side of the API much more than we see today (something currently limited mostly to the gig economy).


What is the business model of HuggingFace?


I believe, having worked a little bit with GPT-2, that OpenAI is intentionally sabotaging access to their AI algorithms and data. They started this sabotage with GPT-2, and with GPT-3 they simply didn't open source it at all.

For GPT-2, their repository (https://github.com/openai/gpt-2) is archived. Which is as good as saying that the project is abandoned and will receive no updates. The project already doesn't compile/run properly. This issue (https://github.com/openai/gpt-2/issues/178) could be solved relatively easily by either a one-line code fix or merging a pull request (https://github.com/openai/gpt-2/pull/244/files). This is not happening, and I have a hard time believing it is done with good intentions.

Oh, and by the way, I believe saying "This AI stuff is dangerous to the world" is the same as politicians saying "We need to check your web history for pedophilia stuff". It's funny how some people don't see the irony in how they oppose one thing and support another that is practically the same.


> "This AI stuff is dangerous to world"

It's not exactly that; their rhetoric is: "This AI stuff is dangerous to the world if YOU use it. But when we use it, it is good."

All entities think that they hold the key to good values, while the others are always suspected of hiding vile intentions.


That's a stronger claim about their views than is necessarily true.

Based on their actions, OpenAI seems to believe (rightly, in my opinion) that some people would misuse GPT-3/etc, even if the majority would not.


> [...] that some people would misuse [Insert any tool here]

Following this logic, one would forbid schooling to kids on the suspicion that they might "misuse" what they learn to do some evil. One can use the alphabet to write threats to others. One can use a knife to stab others. So how do you police the right to access knowledge? Should access to knowledge be restricted to a few elites, with some authority deciding who they are?

IMHO knowledge should be accessible to all. Humanity's accumulated knowledge should not be controlled by an Orwellian entity. <s> Otherwise, let's start a movement demanding to wipe online university courses from the Internet </s>


They said the same about GPT-2 though, and then open sourced it anyway, just without their trained models. Others did the work themselves, and the world didn't end, apparently.


The irony is we're talking about Good vs Evil of potential AI systems. It could be a nuclear arms race but I suspect we would be better for it. When information has been free it has nearly always been of benefit to humanity.


Define misuse.


One obvious example would be to fully automate social engineering email scams. Imagine how much disruption it would cause if spearphishing became as common as robocalls have become post-automation.


If it became that common it would quickly cease to be that effective. Spearphishing works because it's rare, so it doesn't automatically set off your bullshit detector. Most people don't fall for "cheap vi@gra" emails anymore.

Social engineering in general is effective because it's rare enough that people don't feel the need to develop policies and strategies for preventing it.


Sure, but there's no reason to expect the required strategies to be non-disruptive. It's now impossible for anyone not on my contact list to call me, because I won't pick up or listen to their messages - it'd be a tragedy if email became similarly locked down.


Wouldn't the same AI be able to detect that the text was generated by itself ?


For example, convincing and hard to detect fake content generation on Facebook/reddit/etc


> Oh, and by the way, I believe saying "This AI stuff is dangerous to the world" is the same as politicians saying "We need to check your web history for pedophilia stuff". It's funny how some people don't see the irony in how they oppose one thing and support another that is practically the same.

Some AI/ML applications are clearly dangerous to the world. See other HN comment sections when facial recognition comes up.


So, let me start by saying, facial recognition is absolutely dangerous.

But I'm not seeing anything meaningful being done about this. Some companies are refusing to sell AI to the government, but some, such as ClearView, are openly selling to any government including ones that will use it to hunt down gays or protestors. This cat is out of the bag. Even if comprehensive legislation is passed in the US which limits facial recognition use to law enforcement with a warrant, US law enforcement has repeatedly shown themselves to be above the law, with numerous loopholes to get around warrants. And that only affects the US. China, for example, will have no such compunctions.

Having facial recognition closed source doesn't do anything to prevent bad actors from using it to do harm. It simply means that only those with enough money to buy licenses get to use it--governments and corporations who have repeatedly shown themselves to be the bad guys when it comes to privacy.

The only difference if this is open source is that it puts this power in the hands of people with average incomes, and there are a lot of cases where this could be a good thing. We have seen, for example, pictures taken of police officers hiding their badge numbers illegally at protests in the past few weeks--facial recognition could help unmask these bad apples.


> Some AI/ML applications are clearly dangerous to the world. See other HN comment sections when facial recognition comes up.

I'm pretty ignorant on the background of what's being discussed (what did OpenAI change/do to become more closed?). But if OpenAI really believed this, the right thing to do would be to shut down and spend the money on advocacy. As it is, it seems that they're still releasing machine learning code/models, just not to everyone.


I am not remotely in this field, and have not been following this closely at all. With that being said, what obligation do they have in maintaining GPT-2? Did they have some stated commitment that they walked back, or am I missing something else?


Their charter is here: https://openai.com/charter/

>"We are committed to providing public goods that help society navigate the path to AGI. Today this includes publishing most of our AI research, but we expect that safety and security concerns will reduce our traditional publishing in the future, while increasing the importance of sharing safety, policy, and standards research."

So, in a perfect world you would not only publish your research but also the code that is fundamental to it. Maintaining or abandoning research code on the other hand is an entirely different (costly) story that's simply an artifact of research focused software development. It is typically abandoned.

Personally, I see a huge flaw in the underlying philosophy. This presumption that this specific organization is or can somehow be benevolent flies in the face of all history. With nuclear weapons, most of the scientists regretted supporting their countries regardless of how benevolent they thought they were.

In general, any sort of concentrated power tends to corrupt. It takes a very special mindset to understand power and refuse to abuse it. I'm not sure this is something that can be easily learned or trained, or something you could expect a large group with access to that power to consistently live up to.


GPT-2 up until a few days ago was the top of the line language model, and as such I think people expect them to keep it functional for a while.


Well, do they want to be taken seriously as an open institution and a charity?


> For GPT-2, their repository is archived

Fun fact: the GPT-3 repo (https://github.com/openai/gpt-3 ) is archived too, but it does use the GitHub archive feature, unlike the GPT-2 repo.


> "This AI stuff is dangerous to world"

Replace "AI" with "fire" and it sounds even more ridiculous as a reason.


Fire is dangerous though. Fire gut-punched California three summers running. The power company cut power to whole regions out of respect for fire.


Not because the wrong people made fire the wrong way. Because fire is unpreventable and people didn't do fireproofing properly.


The problem is that OpenAI's motto is: "Discovering and enacting the path to safe artificial general intelligence."

This does not mix well with very common AI tasks, such as facial recognition, deepfakes and deepnudes. With GPT-3 we are seeing levels of text comprehension and response that we've never seen in ML before.

How can we abuse this? Well, we can conduct text-based gaslighting and manipulation on an unprecedented scale. Imagine choosing a Twitter profile to target, having the AI read every single tweet that person has made, and then DMing them insults or slurs that would most effectively apply to their personality. Imagine those Nigerian prince emails but each with a scam personally tailored to its audience, with any follow up questions answered with auto-generated but believable lies. Imagine GPT-3 being used to message young children on Instagram by the millions and tricking them into giving up personal information or images.

I think, ultimately, there will be no such thing as "safe" AGI, because the intelligence we already have as humans can be and has been used to hurt others. OpenAI faces an existential crisis in this regard, and their best answer so far is simply to control access to the model so they can revoke it when bad actors are caught. This is something they can't do if the model is free and open source, as we all want it to be.


I thought the point of OpenAI was to democratize AI access. By giving access to everyone, you are leveling the playing field, making it available to small, big, good, and bad actors alike.

Right now it's only accessible to some actors. And good and evil are relative by definition.


This is the exact point. Rarely has controlling information and science benefited humanity.


An alternative explanation is that, like Google (and every other company out there), the motto doesn't reflect their actual objectives.


Yeah, OpenAI is about as open as the PRC is "of the people", or as the GDR was democratic. Words are cheap.


> With GPT-3 we are seeing levels of text comprehension and response that we've never seen in ML before.

Is GPT-3 doing text comprehension, or just generation? I mean, can it extract facts/automatically tag text etc? Or "just" (I know how hard that is!) generate text with some similar characteristics/topics?


Yes. If you few-shot prime it with prompts of the form "<paragraph> tag:<tags>" and then just feed it "<novel paragraph> tag:", it'll generate <tags> automatically, just from completion. It does this shockingly well; the paper has specific examples. It can also do fun tricks like condensing a complicated Wikipedia article down to a second grade level, including built-in metaphors. An example of this is on the API beta website IIRC.
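
To make that prompt format concrete, here's a minimal sketch in Python. The paragraphs and tags below are made up for illustration, and the actual call to the beta completion API is left out (only the prompt construction is shown):

    # Build a few-shot tagging prompt of the form "<paragraph> tag:<tags>".
    # The example paragraphs and tags are invented for illustration.
    examples = [
        ("The team released a 175B parameter language model.", "machine learning, nlp"),
        ("The senate passed the budget bill after a long debate.", "politics"),
    ]
    new_paragraph = "A robot hand was trained to solve a Rubik's cube."

    prompt = ""
    for paragraph, tags in examples:
        prompt += f"{paragraph} tag:{tags}\n"
    prompt += f"{new_paragraph} tag:"

    # Send `prompt` to the completion endpoint with a newline stop sequence;
    # whatever the model generates after "tag:" is its predicted tag list.
    print(prompt)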


That doesn't demonstrate comprehension in any interesting sense. You don't necessarily have to comprehend a text in order to generate a plausible nonsense continuation of it.


Sure you do. If I hand you a few sentences in a language you can't comprehend at all, you won't be able to create a plausible continuation.


Never mind that - I wouldn't even be able to take on the role of the Emacs doctor in German (as I don't speak any German). It doesn't follow from that that the Emacs doctor does understand German.


The point is that it's not nonsense, but a useful and contextual continuation which encodes some level of comprehension (unless you mean a different thing by comprehension than I do). Look at this demo for an example: https://beta.openai.com/?demo=2.


Sometimes the continuation makes sense, but this is just by chance. It's just nonsense text in the style of the original. There's nothing that prevents internal contradictions. It's just that often these will be absent by chance, or because the output contains regurgitated chunks of coherent text written by humans.

OpenAI are, like most AI researchers, utterly shameless in encouraging overinterpretation and wishful thinking.


What would your "interesting sense" of comprehension look like?


Something like the usual sense of the word.


One thing that frustrated me about this conversation at the time was that OpenAI explicitly stated right in the initial release announcement that part of their rationale for the staged release process for GPT-2 was to force the conversation around releasing a powerful capability to the world. Basically a fire drill for the real thing. This was rarely acknowledged or discussed; it was almost entirely a combination of dunking on OpenAI for overhyping the product and excoriating them for interfering with open research.

I don’t know that GPT-3 is at the level of material damage, but it's clearly moving in that direction. Watching someone who has a good sense for composing the right kind of prompt interact with it is a spine-tingling experience.

Given how poorly the PULSE demo that hallucinated fuzzy black folks into tan white people was received, it also makes PR sense not to release pretrained models that are obviously going to be easily tricked into saying terrible things. Exposing that as an API gives them the ability to police it a bit and study how the model behaves in the real world.

Or it could just be a giant sellout and cash grab.


> OpenAI explicitly stated right in the initial release announcement that part of their rationale for the staged release process for GPT-2 was to force the conversation around releasing a powerful capability to the world.

At founding, OpenAI was a non-profit and it said it would share any innovations openly. Now it is a for-profit and only releases teaser data/code.


To be fair, with a multi-terabyte model, the number of people who have the money to make use of the model is probably only a few hundred...


Rather a few thousand at least. Most larger companies have that amount of data (whether they can use the model is another question).


Yep, but running inference on it at any reasonable performance requires you to have all of it in GPU RAM - i.e. you need a cluster of ~100 high-performance GPUs.


The largest version of GPT-3 is 175B parameters, which is ~350GB. I frequently use 8x and 10x RTX8000 boxes (and can access a 16x JBOG system as well), and the 8x system would have 384GB of VRAM. These sorts of systems start at only ~$60k.
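
For anyone wanting to sanity-check those numbers, here's the back-of-the-envelope arithmetic in Python, assuming half-precision weights (2 bytes per parameter), which is where the ~350GB figure comes from:

    # Rough memory estimate for GPT-3's weights, assuming fp16 storage.
    params = 175e9              # 175 billion parameters
    bytes_per_param = 2         # half precision (fp16)
    weights_gb = params * bytes_per_param / 1e9
    print(f"weights: ~{weights_gb:.0f} GB")   # ~350 GB

    # VRAM on an 8x RTX 8000 box (48 GB per card).
    vram_gb = 8 * 48
    print(f"8x RTX 8000: {vram_gb} GB VRAM")  # 384 GB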


Can anyone point to a more nuanced perspective on this? Ideally not only with regards to OpenAI; it is still quite common that research published in some of the "top" venues does not disclose the underlying source code.

Anecdote: I once pushed researchers to publish their code when I was peer-reviewing and it turned out that the code was super slow in comparison to the benchmark algorithms they compared with (comparison only included accuracy etc, not speed), something that I am sure the authors were aware of, but chose to not report in the paper.


> it is still quite common that research published in some of the "top" venues does not disclose the underlying source code.

The whole point of "OpenAI" was to share their innovations openly. They were a non-profit. Now they're a "capped profit" and not sharing what's needed to reproduce their results (full code and training data).

They acquired talent on a bunch of goodwill and then shifted into a for-profit.


Here's my alternative view:

If OpenAI had open sourced GPT-3, there would be an equivalently angry thread about how they were endangering democracy/social order/etc, and not being responsible with the powerful tool they had created (see other threads on HN regarding those working in facial recognition).

Both that group, and the one posting here have valid points, and there would be strong, valid critique of their decision no matter which way they chose.


But we do realize that sooner or later a model even more powerful will be released to the public, right?

And as far as protecting democracy, I assure you that the geopolitical enemies of your nation either have similar models, or will have them very soon. There is very significant investment on ML models for text manipulation going on behind closed doors, funded by States.


> But we do realize that sooner or later a model even more powerful will be released to the public, right?

And at that time, we can equally criticize whomever releases it.

> And as far as protecting democracy, I assure you that the geopolitical enemies of your nation either have similar models, or will have them very soon. There is very significant investment on ML models for text manipulation going on behind closed doors, funded by States.

True, but it's a good idea to keep a high bar for developing and using it. There's a large difference between the resources of a nation state and various criminal enterprises.

Per [0], GPT-3 took $12 million to train. That does not include the people with the relevant skills needed to train it, nor access to the compute hardware.

[0]: https://venturebeat.com/2020/06/01/ai-machine-learning-opena...


I mean, I don't really see the benefit of limiting such a model to such famously reliable and disinterested actors as nation-states and multinational corporations.

The kind of criminals that are going to threaten "democracy/social order/etc..." aren't going to be stopped by a price tag of 12 million dollars, or 120 million dollars for that matter. Just a single Mexican cartel would have the resources to pay for that, to say nothing of criminal groups that work in lockstep with states. The proceeds of crime are 900 billion dollars per year. Training GPT-3 is well within the means of dozens of criminal enterprises.

But sure, we can praise OpenAI for protecting us from small enterprises and individuals, those known threats to democracy, and instead reserving it for the use of megacorps and states, which are known never to threaten democracy or social order.


Said Mexican cartel would also have to find the appropriately skilled engineers and compute. There's a gap between having the money available, and actually executing the project.

This discussion hinges on what level of unpleasantness comes from groups who don't have $12 million in compute resources and the skills to run large scale distributed ML training, but could `git pull` a publicly available model, and hook it up to some Twitter bots.


In this world, if you have money you have access. If a cartel wanted access to compute, they could just pay for it. If they want experts, trust me, they can pay for those too.


I don't necessarily agree with the following with regards to OpenAI, but it is what I have been told by a few professors and other academics in bioinformatics:

Money is terribly tight in academia, and even at hot think tanks like OpenAI. The difference between a working but not very pretty prototype and a polished product you are willing to be associated with may seem like just a bunch of trivial boilerplate such as documentation or tests.

It is indeed boring work (at least for scientists), but a lot of it. And if the intention is to publish and support something on an ongoing basis, and for widespread use, you probably need to invest 5x to 10x as much time and money.

So in the end most code in academia is hidden because it's hideous, not to hide how they hardcoded all the good jokes GPT-3 comes up with.


Such an odd attitude. There’s no way it takes 5-10x the engineering effort to write some tests for components of a one-shot Python script, as most scientific code is. It’s maybe the easiest type of software to test. Would this type of attitude fly for anything else? Can you imagine if chemists worked like this?

“Yea most chemists don’t like to release their procedures, since taking the trouble of accurately measuring out reagents could be 5-10x the effort and grant money is tight. Plus sometimes they don’t want to go all the way to the cabinet to grab a beaker so they just do the reaction in a coffee mug. The truth is that sometimes the procedure is just too ugly to release. And if they come out of the lab with a new chemical, what does it matter?”


I mean, a lot of biology papers are loose on protocol details, to be honest. I dunno about chemistry, but for bio it can often be difficult to exactly replicate a method because of lack of detail alone.

That said I think this should change given how easy it is these days to share supplemental info online.

As far as ugly code I wouldn't be surprised if a lot of academics won't take the time to fix it up, but that doesn't mean it shouldn't be released IMO. Maybe an open source community would pop up that could help clean this code for them.


It's inevitable that AI research will become more and more closed. AI is a weapon which is potentially much more dangerous than nuclear weapons. It's naive to think that the most dangerous weapon will be accessible to the general public.


AI is dangerous when it is closed and inaccessible to the majority of people.

Let's take deepfakes. If they were closed and inaccessible to the majority of people, they could be used as a powerful weapon (influencing politics, etc.). Now that everyone has access to them and is aware that this kind of fake is not hard to make, they are not much of a weapon anymore. Of course you can still use them as a weapon, especially in less developed countries, but even there they will become useless as a weapon after some time. I think the person who made deepfakes public saved many lives, and that is the best approach to dealing with the so-called "danger of AI".


I don't see how deepfakes are less dangerous through their availability. Because everyone is now aware? That doesn't mean this tool has to be open. Also, no one does proper fact-checking nowadays, and a deepfake attack can do a lot of harm anyway. Or maybe because you can retaliate with a counter-deepfake? Well, that's just ridiculous.

What I see is that deepfakes are used for fake porn videos of known actresses, or of ordinary people, to harass and blackmail them.

I don't see how its availability saves lives.


Because people are now aware that this kind of fake exists, it is much harder to use fakes for more serious things like politics.


There's always OpenCog: https://github.com/opencog


Now is an especially good time to join the OpenCog project. Ben and the folks behind it are just now starting a big push to re-architect big chunks of it, including the AtomSpace. And OpenCogCon was just held as a two-day virtual conference, and the entire thing is available on YouTube.

I'd encourage anyone who's interested to join the Opencog Slack and/or Google group and check out what's going on.



I'm not surprised. Ever since I saw S. Altman talking in one of his videos about "things that you (the employer) can't tell your employees", I completely distrust that guy.


OP refers to GPT-2. OpenAI has since then published both the code and the trained model.


Well, this message didn't age well once GPT-3 was made available only as an API.

It's getting harder to define 'OpenAI'. It is just another DeepMind, where they release the paper but never the model.


They did, but now there is GPT-3, which is closed, API-access only, and I'm not aware of any stated plans to publish that model.

On the plus side, if their claims about how GPT-3 was constructed are truthful, it should be possible for outside parties to reproduce it -- though at considerable cost.


From the gpt2 code release[1]:

> We are still considering release of the larger models.

Is there something I'm missing?

1. https://github.com/openai/gpt-2


That part of the README seems to be out of date; they released the largest GPT-2 model last year: https://www.openai.com/blog/gpt-2-1-5b-release/



Look at the time stamps.


One of the early stated goals of OpenAI was to advance AI but do so safely. Can you imagine if scammers and "bad actors" got access to GPT-3? You could create such sophisticated and targeted scams. Things could get extremely ugly fast. The choice to limit access to this seems to be completely in line with their original values.


(2019)


OpenAI should never have been open, named "OpenAI", or advertised itself as being open. At the time of OpenAI's inception, much of the AI risk community deemed it harmful, although that wasn't said out loud much, because it is a delicate affair to criticize misguided effort on AI safety when the status quo was almost no effort being spent on AI safety at all. It is a minor consolation that OpenAI turned out to be less open than initially advertised.

https://web.archive.org/web/20200518081726/https://slatestar...


So "open, like a gate" not "open, like a public resource".


Somewhat random but this debate about the "Open" in OpenAI's name reminds me of OpenVMS which is not open source, but supports "open systems" like POSIX.


Funny how there's so much emotional processing here from the open-source community.

What started as a techno-utopian non-profit project has transformed into a well-branded very much for profit corporation, right under their noses!

I say good for them. Look forward to seeing how the tech plays out.


One good thing about this is: An actually free and open ecosystem will emerge - it will take time and countless people-hours but it will happen and it will prevail.


OpenAI is open as businesses are open.


How are people getting gpt-3 api keys? Might I have one?


Agree


GPT-3 is (the process of) general AI being born in front of our eyes, catching us unprepared, making us realize that we may be a bit rigid in our understanding of what intelligence really means. If primed correctly, can GPT-3 produce worthwhile improvements to its own paper or implementation? Can GPT-3 be made to prime and query itself in a loop, by asking it to produce a better query for itself? Can its knowledge already be used to disrupt existing frontiers of human thought? The simple and safe answer here is that we don't know yet.

In a world full of ideological adversity and cold-blooded corporate competition, slowing down the release of technology which may or may not be capable of a runaway effect (a.k.a. technological singularity) in the hands of an adversary is an intelligent and prudent thing to do.


Nope. The simple and safe answer to all those questions is certainly “no”. GPT-3 is not intelligent in the way most of us would describe intelligence.


I don't think you can make such definitive statements.

GPT-3 is certainly intelligent the way a lot of us would describe intelligence. It can produce content in a way that is indistinguishable from humans.

We don't know what else it can do. We don't know the pace of improvements happening here. There are a lot of open questions.


I have long suspected that they are already being tightly controlled by the U.S. military. They obviously know what is at stake, and they logically need to maintain AI supremacy. We are in the midst of an AI arms race, and it won't stop until we reach the singularity or people start nuking each other to thwart a major cleansing operation with slaughter nanobots.



