
What Went Wrong at OpenAI? [audio] - Akababa
https://slate.com/podcasts/what-next-tbd/2020/02/what-went-wrong-at-openai
======
jamestimmins
So it seems that OpenAI realized they needed more compute power than they
could afford, so they started a for-profit arm that could take outside
investment from Microsoft to cover those costs.

This piece suggests that they have since focused (at least partially) on
creating profitable products/services, because they need to show Microsoft
that this investment was worthwhile.

Does anyone with more context know if this is accurate, and if so, why they
changed their approach/focus? What are they working on, and is AGI still a
goal?

~~~
andreyk
Person doing a PhD in AI here (I've seen all of OpenAI's research, been to
their office a couple of times, know some people there) - tbh the piece was a
pretty good summary of a lot of quite common, somewhat negative takes on
OpenAI within the research community: that they largely do research based on
scaling up known ideas, have at times hyped up their work beyond its merit,
changed their tune to be for-profit (which is weird given they want to work in
the public interest), and that despite calling themselves OpenAI they publish
and open-source code much less frequently than most labs -- and with a profit
incentive they will likely publish and open-source even less. The original
article also presented the positive side (OpenAI is a pretty daring endeavor
to try to get to AGI by scaling up known techniques as they are, and people
there do seem to have their heart in the right place).

~~~
sytelus
I feel OpenAI has stayed fairly close to its mission and principles. I'm not
sure why so much secrecy; they could have just had the reporter sign an NDA
and let her take a look at things, which is common practice even at Apple.
Apart from that, though, they are the only ones explicitly sensitive to the
safety and economic issues surrounding AGI. Specifically, AGI is certain to
generate the first trillionaire or even deca-trillionaire and throw the wealth
gap between countries completely out of control. Unlike guns and nukes, it
would be a far more difficult technology for less fortunate groups to clone. I
think OpenAI is the only group actively working against that outcome (i.e.,
the max 100X return provision), while most others wish to race to AGI first,
patent the hell out of it, and become the most powerful, richest entity the
world has ever seen. Also, until recently, with only 200 folks and limited pay
(relatively speaking), they amassed extraordinary talent that has given teams
10X larger a run for their money. When people say that they just try to copy
and scale others' innovations, they grossly underestimate very impressive
contributions that have touched virtually everything from supervised and
unsupervised learning to RL, NLP, and robotics.

~~~
nradov
There is nothing at all "certain" about AGI or the economic impacts thereof.
This is completely speculative and based on zero hard data.

------
sdan
The original piece itself was flawed. As at many if not most tech companies,
reporters and other strangers don't get unfiltered access to the company.
There are reasons why (unpublished work that may be reported in a bad light).

OpenAI has made huge initiatives in bringing in diverse people and really
opening up their work (DeepMind rarely ever does), and I think that profiting
in the way they set it up only ensures they can do bigger and better research.

~~~
andreyk
Can you elaborate? Do you think it was flawed because the reporter didn't get
unfiltered access? The piece is still based on tons of interviews, and it
seems to me that the reporter is pretty aware of the conversations going on
within AI research (she attends our conferences and the like), so I thought it
was a pretty decent summary of the common criticisms and positive aspects of
OpenAI (as per my other comment on here).

~~~
sdan
Just wanted to point out: she was saying that it was ironic that at a place
like OpenAI, she doesn't get unfiltered access to their 2nd/3rd floors or get
to eat lunch with them.

I understand that it's a bit ironic, but I think the reporter needs to
understand that strangers (esp. reporters) don't usually get this type of
access regardless of whether it's "OpenAI", and since the reporter is probably
not under an NDA, any company would put tight restrictions on what strangers
see and do.

I agree that OpenAI sometimes overhypes stuff (I think the GPT-2 case was
unintentional) and with some of the criticism she points out, but overall the
piece in my mind was more of a hit piece than an acknowledgment that OpenAI
does try to have a diverse group of people (fellows/scholars) and was probably
the only nonacademic nonprofit in the space before it needed large amounts of
compute.

~~~
Akababa
I can see your point, but that raises the question: what niche does a
nonacademic nonprofit fill? It seems to me that it's not filling a particular
niche at all, but rather sitting somewhere between a corporate lab and a
traditional academic lab. It's for those who are okay with trading off some of
the freedom of academia for some of the resources afforded by industry.

------
mindgam3
Transcript here:
[https://slate.com/transcripts/cE5Ia2t1d3k2d3hhNlV3dG1xWlMzNX...](https://slate.com/transcripts/cE5Ia2t1d3k2d3hhNlV3dG1xWlMzNXh6cStmditxNDYvSmt2cFFrR3VVVT0=)

~~~
jonas21
Amusingly, the transcript seems to have been generated by an "AI" tool
([https://www.snackable.ai](https://www.snackable.ai)) and gets things wrong
just enough to make it very annoying to read.

~~~
Akababa
It labels almost every paragraph as a different speaker, which makes you
wonder why they bother to try! Does their software really think the podcast
has 20 different participants?

~~~
sdan
That's what I was thinking... I just read it like a dialogue between two
people... made sense to me.

------
bogomipz
I had a question regarding the following passage:

>"So there were two main theories that came out of this initial founding of
the field. One theory was humans are intelligent because we can learn. So if
we can replicate the ability to learn in machines, then we can create machines
that have human intelligence. And the other theory was humans are intelligent
because we have a lot of knowledge. So if we can encode all of our knowledge
into machines, then it will have human intelligence. And so these two
different directions have kind of defined the entire trajectory of the field.
Almost everything that we hear today is actually from this learning branch and
it’s called machine learning or deep learning more recently."

Is there still development on the other branch, the "encode all of our
knowledge into machines, then it will have human intelligence" branch? If so,
what is that branch of AI called?

~~~
YeGoblynQueenne
I think the "two theories" are meant to be machine learning and knowledge
representation and reasoning (KRR).

Knowledge representation and reasoning is one of the main fields of AI
research, on the same broad level as machine learning or robotics and with its
own journals and conferences (KR 2020 will be held in Rhodes, Greece in
September). It enjoys much less recognition than machine learning in software
development circles because it doesn't receive such broad coverage in the lay
press as machine learning does, but it's an active area of research. Google's
Knowledge Graph is probably the best-known example of applications of the
techniques that have originated in research from that field.

I don't really know why the author says that machine learning and KRR are "the
two main theories" in the field. Perhaps she has access to historical
information that I'm not aware of. She says, a little earlier than the passage
you quote, that "[AI] was started 70 years ago", which must mean the workshop
at Dartmouth College in 1956, where the term "Artificial Intelligence" was
first introduced (by John McCarthy, perhaps more recognisable to a programmer
audience as the creator of Lisp).

There have certainly been many binarily opposed "camps" in AI, like the
symbolicists vs the connectionists, or the "scruffies" vs the "neats", and so
on. While I recognise "machine learning vs knowledge representation" as one of
the classic dichotomies, I don't really think it's as ancient and fundamental
a dichotomy as the interviewee makes it sound.

I wonder if the interviewee is mixing up the "ML vs KRR" distinction with a
more subtle distinction between different forms of machine learning. I'm
thinking of Alan Turing's original description of a "learning machine" from
the classic 1950 Mind paper ("Computing machinery and intelligence", where he
introduced the "imitation game"). Turing's learning machine would learn
incrementally, from a small original knowledge base and from human instruction
and from contact with the world, whereas today's machine learning tries to
learn everything from scratch, in an end-to-end, no-human-in-the-loop
approach. This distinction, "incremental vs all-at-once learning", seems to
fit the interviewee's description of the "two main theories" better.

There's a paper, "The child machine vs the world brain", by the Australian AI
scientist Claude Sammut, that goes into some detail on this distinction, based
on Turing's paper and later developments in data mining and big data:

[https://www.semanticscholar.org/paper/The-Child-Machine-vs-the-World-Brain-Sammut/bde00d8180626e80dd73b3a7869c743b9cbd27a4](https://www.semanticscholar.org/paper/The-Child-Machine-vs-the-World-Brain-Sammut/bde00d8180626e80dd73b3a7869c743b9cbd27a4)

I recommend reading at least its introduction and then digging in to the
references if you're interested in the history of AI in general and machine
learning in particular and different ideas on those subjects that have been
explored and abandoned over the years.

Warning: contains ancient lore.

~~~
bogomipz
Oh wow, thank you for the wonderfully thorough and detailed response. Quick
follow-up question: has KRR not benefitted from advances in GPUs and cheap,
fast storage the same way ML has? Is that maybe why you hear less about it
than ML?

I wonder if the journalist was being reductionist simply because having only
two competing branches makes for a slightly more digestible and compelling
narrative for the average reader?

I wonder if the journalist was being reductionist simply because having only
two competing branches makes for a slightly more digestible and compelling
narrative for the average reader?

~~~
YeGoblynQueenne
>> Quick follow up question - Has KRR not benefitted the same as ML has from
advances in GPUs and cheap fast storage?

I don't really know; I'm from the machine learning side of things. I study
symbolic machine learning for my PhD and I have a background in logic
programming, so I know what KRR is, but I'm no expert and I'm not up to date
with the work in the field.

I think the reason the lay press doesn't cover KRR and other AI fields is that
they are, well, fields of academic research, and as such not terribly
interesting to the press and to most software developers. My intuition is that
most software developers don't really know much about AI in general, not to
mention its particular sub-fields.

This includes machine learning. It's just deep learning that is the exception
(quick reminder: AI ⊋ machine learning ⊋ deep learning). We could spend a long
time discussing the reasons for that, but basically, I think you don't hear
about KRR that much because it's not deep learning :)

------
bogomipz
From the transcript:

>"Pursuing a G.I., particularly with a long term view, was the central mission
of open A.I.. And yeah, there was the traditional Silicon Valley talk of
changing the world, but also this sense that if HDI was done wrong, it could
have very scary consequences."

If the concern was truly avoiding AGI being done wrong, which presumably
includes its development being in the hands of a select few tech giants,
wouldn't it be better to simply wind the operation down rather than take money
from one of those few tech giants leading in AI development and then run a
company with motives that are at odds with each other?

Just off the top of my head, doesn't it seem that Microsoft, with its new
billion-dollar investment, now stands to benefit from that first billion
dollars invested in the non-profit OpenAI more than anybody else?

------
undershirt
From what I’m able to summarize:

The reporter was invited to do a piece on them, and while visiting she had
trouble reconciling their secrecy with their ethos of openness. She was not
allowed to interact with the actual researchers where they were doing their
work, and her lunch was moved out of the building so she couldn't overhear
their all-hands meeting. (My take is that their openness extended to the
_curated_ fruits of research, but the process itself was guarded from any
communication channel they couldn't control, i.e. the reporter.)

This seems related to the second part, where they discuss the pressures toward
profit from the strings attached to corporate investments, which they suggest
would be different under traditional long-term government investment. And they
talked about the paradox of adding a for-profit branch to a non-profit org,
without resolution.

I’m a bit unsettled recently when listening to podcasts and stories like this
that seem to end on a note of “ _shrug_ , capitalism, isn’t this an
interesting problem?”. I’d be more encouraged to see folks talking about
post–game-theoretic social structures that can categorically solve for these
issues, that can allow us to transition out of capitalistic dynamics rather
than trying to fight them in order to get work done. This seems to be the
rallying call of the nebulous ideas behind “game~b”. Wondering if anyone here
has been seeing that yet.

------
zachware
Why does this wave of coverage speak about OpenAI as if writing a postmortem?

Maybe it's just me, but companies are not public benefit enterprises, even if
structured in some way as a not-for-profit.

This wave of coverage and the dialogue around it seems to come from the view
that OpenAI somehow owes the world something, when in fact it only owes its
stakeholders, none of whom are reporters.

~~~
sytelus
No, OpenAI has very explicitly made this clear:
[https://openai.com/charter/](https://openai.com/charter/).

 _Our primary fiduciary duty is to humanity. We anticipate needing to marshal
substantial resources to fulfill our mission, but will always diligently act
to minimize conflicts of interest among our employees and stakeholders that
could compromise broad benefit._

------
raws
Is it just me, or does the player have neither volume control nor playback
speed control?

~~~
cududa
...not sure I know of a single website with playback speed

~~~
Vedor
YouTube allows you to control playback speed, within certain limits.

~~~
raws
And some add-ons for the site on PC go beyond the default limits.

------
sjg007
OpenAI is to X as DeepMind is to Google.

~~~
solarkraft
Are you referring to a window system or an iPhone model?

~~~
sjg007
X is a placeholder, maybe I should have used a ?

~~~
solarkraft
Or maybe a <to be inserted>. X is just way overused for actual things now (not
only out of context; Google's moonshot division is called X, for example, and
I'm guessing there are more cases).

