
I believe, having worked a little with GPT-2, that OpenAI is intentionally sabotaging access to their AI algorithms and data. This sabotage started with GPT-2, and with GPT-3 they simply didn't open-source it at all.

For GPT-2, their repository (https://github.com/openai/gpt-2) is archived, which is as good as saying that the project is abandoned and will receive no updates. The project already doesn't compile/run properly. This issue (https://github.com/openai/gpt-2/issues/178) could be solved relatively easily, either by a one-line code fix or by merging a pull request (https://github.com/openai/gpt-2/pull/244/files). That is not happening, and I have a hard time believing it is out of good intentions.
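
I haven't studied the diff in that PR closely, but if the breakage is the usual TensorFlow 1.x code failing under a TensorFlow 2.x install, the kind of one-line fix involved would look something like this (a hypothetical sketch, not the actual patch):

    # Hypothetical illustration, assuming the failure is TF 1.x graph-mode
    # code running under a TF 2.x install (not necessarily the actual fix):
    import tensorflow.compat.v1 as tf  # instead of: import tensorflow as tf
    tf.disable_v2_behavior()           # restores TF 1.x graph-mode semantics

Either that, or simply pinning the TensorFlow version in requirements.txt to an older 1.x release.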

Oh, and by the way, I believe saying "This AI stuff is dangerous to the world" is the same as politicians saying "We need to check your web history for pedophilia stuff". It's funny how some people don't see the irony in opposing one thing while supporting another that is practically the same.




> "This AI stuff is dangerous to world"

It's not exactly that; their rhetoric is: "This AI stuff is dangerous to the world if YOU use it. But when we use it, it is good."

Every entity thinks it holds the key to good values, while everyone else is suspected of hiding vile intentions.


That's a stronger claim about their views than is necessarily true.

Based on their actions, OpenAI seems to believe (rightly, in my opinion) that some people would misuse GPT-3/etc, even if the majority would not.


> [...] that some people would misuse [Insert any tool here]

Following this logic, one would forbid schooling to kids on the suspicion that they might "misuse" what they learn to do some evil. One could use the alphabet to write threats to others; one could use a knife to stab others. So how do you police the right to access knowledge? Should we restrict access to knowledge to a few elites, with some authority deciding who they are?

IMHO knowledge should be accessible to all. The accumulated knowledge of humanity should not be controlled by an Orwellian entity. <s>Otherwise, let's start a movement demanding that online university courses be wiped from the Internet.</s>


They said the same about GPT-2 though, and then open-sourced it anyway, just without their trained models. Others did the work themselves, and the world didn't end, apparently.


The irony is that we're talking about the Good vs. Evil of potential AI systems. It could become a nuclear-style arms race, but I suspect we would be better off for it. When information has been free, it has nearly always been of benefit to humanity.


Define misuse.


One obvious example would be to fully automate social engineering email scams. Imagine how much disruption it would cause if spearphishing became as common as robocalls have become post-automation.


If it became that common, it would quickly cease to be that effective. Spearphishing works because it's rare, so it doesn't automatically set off your bullshit detector. Most people don't fall for "cheap vi@gra" emails anymore.

Social engineering in general is effective because it's rare enough that people don't feel the need to develop policies and strategies for preventing it.


Sure, but there's no reason to expect the required strategies to be non-disruptive. It's now impossible for anyone not on my contact list to call me, because I won't pick up or listen to their messages - it'd be a tragedy if email became similarly locked down.


Wouldn't the same AI be able to detect that the text was generated by itself?
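
Something along these lines is what I have in mind, as a very rough sketch (this assumes the HuggingFace transformers library, and the cutoff value is made up, not tuned):

    # Very rough sketch of perplexity-based detection. Assumes the
    # HuggingFace transformers library; the cutoff below is made up.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

    def perplexity(text):
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(ids, labels=ids).loss  # mean per-token cross-entropy
        return torch.exp(loss).item()

    def looks_generated(text, cutoff=20.0):
        # Text the model finds unusually "predictable" (low perplexity) is
        # more likely to have been generated by it, or by something similar.
        return perplexity(text) < cutoff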


For example, convincing and hard-to-detect fake content generation on Facebook/Reddit/etc.


> Oh, and by the way, I believe saying "This AI stuff is dangerous to the world" is the same as politicians saying "We need to check your web history for pedophilia stuff". It's funny how some people don't see the irony in opposing one thing while supporting another that is practically the same.

Some AI/ML applications are clearly dangerous to the world. See other HN comment sections when facial recognition comes up.


So, let me start by saying, facial recognition is absolutely dangerous.

But I'm not seeing anything meaningful being done about this. Some companies are refusing to sell AI to the government, but others, such as Clearview, are openly selling to any government, including ones that will use it to hunt down gays or protesters. This cat is out of the bag. Even if comprehensive legislation is passed in the US limiting facial recognition to law enforcement with a warrant, US law enforcement has repeatedly shown itself to be above the law, with numerous loopholes to get around warrants. And that only affects the US. China, for example, will have no such compunctions.

Having facial recognition closed source doesn't do anything to prevent bad actors from using it to do harm. It simply means that only those with enough money to buy licenses get to use it--governments and corporations who have repeatedly shown themselves to be the bad guys when it comes to privacy.

The only difference if this is open source is that it puts this power in the hands of people with average incomes, and there are a lot of cases where this could be a good thing. We have seen, for example, pictures taken of police officers hiding their badge numbers illegally at protests in the past few weeks--facial recognition could help unmask these bad apples.


> Some AI/ML applications are clearly dangerous to the world. See other HN comment sections when facial recognition comes up.

I'm pretty ignorant on the background of what's being discussed (what did OpenAI change/do to become more closed?). But if OpenAI really believed this, the right thing to do would be to shut down and spend the money on advocacy. As it is, it seems that they're still releasing machine learning code/models, just not to everyone.


I am not remotely in this field, and have not been following this closely at all. With that being said, what obligation do they have to maintain GPT-2? Did they have some stated commitment that they walked back, or am I missing something else?


Their charter is here: https://openai.com/charter/

>"We are committed to providing public goods that help society navigate the path to AGI. Today this includes publishing most of our AI research, but we expect that safety and security concerns will reduce our traditional publishing in the future, while increasing the importance of sharing safety, policy, and standards research."

So, in a perfect world you would publish not only your research but also the code that is fundamental to it. Maintaining or abandoning research code, on the other hand, is an entirely different (costly) story that's simply an artifact of research-focused software development: it is typically abandoned.

Personally, I see a huge flaw in the underlying philosophy. The presumption that this specific organization is, or can somehow be, benevolent flies in the face of all history. With nuclear weapons, most of the scientists regretted supporting their countries, regardless of how benevolent they thought they were.

In general, any sort of concentrated power tends to corrupt. It takes a very special mindset to understand power and refuse to abuse it. I'm not sure that's something that can easily be learned or trained, or that you could expect everyone in a large group with access to that power to adhere to.


Up until a few days ago, GPT-2 was the top-of-the-line language model, and as such I think people expect them to keep it functional for a while.


Well, do they want to be taken seriously as an open institution and a charity?


> For GPT-2, their repository is archived

Fun fact: the GPT-3 repo (https://github.com/openai/gpt-3) is archived too, but it does use the GitHub archive feature, unlike the GPT-2 repo.


> "This AI stuff is dangerous to world"

Replace "AI" with "fire" and it sounds even more ridiculous as a reason.


Fire is dangerous though. Fire gut-punched California three summers running. The power company cut power to whole regions out of respect for fire.


Not because the wrong people made fire the wrong way. Because fire is unpreventable and people didn't do fireproofing properly.



