
Sam Altman’s leap of faith - ankeshanand
https://techcrunch.com/2019/05/18/sam-altmans-leap-of-faith/
======
tristanm
It's interesting that in order for his pitch to work (if you invest in OpenAI,
you will get up to 100x returns), assuming they do build AGI, it still
requires that their AGI acquires a very stable, virtually guaranteed advantage
of large magnitude. This very strongly requires that they _cannot_ share
anything they discover whatsoever. Especially since they apparently plan on
using it to make strategic investments to beat the market by a huge margin.
That would mean they obtain information (about the economy, world affairs,
technology, the future, etc.) not possessed by anyone else, or that
information would be reflected in the market already. Any information leakage,
whether regarding their AI or whatever it learns about the world, would
compromise that advantage.

In other words, what Altman says about "we can't only let one group of
investors have that" can't be true, or at least not sincere. The more
investors who have access to it, the more evenly its returns get distributed
across society (which would be a good thing, obviously), but the lower the
incentive for initial investments. They will want to keep it contained within
a small group of investors for as long as possible.

~~~
not_a_moth
Yeah, there's a big assumption about the nature of an AGI breakthrough,
mainly that it will be a snowball of runaway value. Why assume this? Is it
because we think AlphaGo/Zero can produce human-like cognition? Why wouldn't
it be a long, incremental process of X thousand small breakthroughs, over say
a decade, where the result is something like average human-level intelligence;
maybe the most important invention of our time, but not super intelligence,
and not "runaway".

(Then after another X years (or decades) you might figure out super
intelligence, if regulations haven't intervened by then)

If the trajectory is incremental as described, it seems untenable that OpenAI
could keep some major monopolistic advantage on AGI, without being completely
un-open/sealed off for decade(s).

~~~
rytill
Going from 0 human-intelligence level pieces of software to 1 is the hardest
part. Once you have 1, you can duplicate it as much as you want given
resources. It can also be pointed inward to improve its own effectiveness.

Actually, there are a lot of good arguments for logistic growth. The only ones
for linear or sublinear I’ve heard are not strong and mostly take as an
implicit assumption “those alarmists and their exponential growth! They
probably didn’t even consider that it could be a slower, more incremental
growth” instead of actual fully-fledged arguments.

There’s also a meta-argument that I have yet to hear addressed in anti-
alarmist arguments: which case demands more attention, if it does happen? If
there’s a 5% chance of the growth being exponential, how much attention should
we devote to that case, where the impact is much higher than under linear or
sublinear growth? This is such a big deal - it’s like Pascal’s wager, but with
a real occurrence that I believe most would admit has at least a small chance
of happening.
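The meta-argument above is essentially an expected-value calculation. A toy sketch, with all probabilities and impact figures invented purely for illustration:

```python
# Toy expected-value comparison. All numbers are made up: the point is only
# that a low-probability, high-impact case can dominate the expectation.
scenarios = {
    "sublinear":   {"prob": 0.55, "impact": 1},
    "linear":      {"prob": 0.40, "impact": 10},
    "exponential": {"prob": 0.05, "impact": 10_000},
}

expected = {name: s["prob"] * s["impact"] for name, s in scenarios.items()}

for name, ev in expected.items():
    print(f"{name:12s} contributes {ev:7.1f} to the expected impact")

# Despite its 5% probability, the exponential case dominates the other two:
assert expected["exponential"] > expected["linear"] + expected["sublinear"]
```

Whether this looks like prudence or like a Pascal's mugging depends entirely on how defensible that 5% figure is.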

Apologies for any brashness coming across. I’m still figuring out how to
communicate effectively about a thing I feel a lot of emotions when thinking
about.

~~~
dwighttk
>It can also be pointed inward to improve its own effectiveness.

Assumption. Intelligence (which isn’t defined) may be something that can grow
without bound, or it may be something that plateaus just above the brightest
human yet (again this is ill defined. “IQ is a number. There are numbers that
are higher, so intelligence must be able to grow” is about as much thought as
some people put into it) or maybe it is something that can grow without bound,
but the effort required grows too.

~~~
rytill
Use "capability" instead of "intelligence" then. Defined as "ability to solve
any problem dwighttk has ever dreamt up."

There's pretty much no reason to believe capability peaks roughly above the
brightest human.

Our brains aren't yet even integrated with hardware-optimized algorithm
solvers on which to offload minimax or tree search problems, or solve simple
game-theoretic situations, or any number of things a computer system is much
better and faster at than a human.

It's just another one of those things that you can believe if you want to not
spend time worrying about the ethics problem.

------
jacquesm
> Thursday night would be considered pure insanity coming from someone else.

Time will tell. The genius of YC was to spot the hackers as the driving force
of a new generation of tech companies, to be founder friendly, to use the
classes to get rid of the problem that every angel investor has to contend
with ('is this a good investment or not?') and to tell the story in a very
compelling way and with their own money on the line.

Everything else so far is underwhelming at best, but the viral nature of YC
and the alumni network are not going to be stopped for a long long time.

It's a bit along the lines of 'what have the Romans ever done for us?', if
that's all that came out of it then it is already a spectacular success by any
measure.

~~~
atomical
This is unfathomably harder than anything YC has ever dealt with in the past.
Judging by what happened with IBM's Watson and Theranos I really doubt Sam
will be given a blank check for the black box that is likely to emerge.

~~~
jacquesm
[https://en.wikipedia.org/wiki/Color_Labs](https://en.wikipedia.org/wiki/Color_Labs)

Is a good case study for investments on reputation alone.

------
ngrilly
In the recorded interview, Sam Altman says climate change is such a hard
problem that we need strong AI first to solve it. I have doubts about this for
several reasons:

\- Human psychology is one of the biggest obstacles (maybe the biggest) to
solving climate change, and I'm not sure how a strong AI is supposed to fix
that.

\- Building carbon-neutral energy sources is a hard problem, but most experts
are optimistic about our ability to solve this (for example, nuclear fusion).

\- Considering that we have no idea when this strong AI will be ready (Sam
acknowledges it in the interview), it would be dangerous for us to just rely
on such a breakthrough to save the climate (and save our children, grand-
children, etc.).

Edit: I'd be happy to know a bit more about how a strong AI, such as
envisioned by OpenAI, could solve climate change :-)

~~~
chrischen
The AI’s solution to global warming that humans cannot solve is to kill all
humans.

~~~
whyenot
Donald Trump could "solve" global warming in less than 30 minutes with a call
to the military. Vladimir Putin has a similar option. Depopulation and nuclear
winter would do the trick.

Of course I am being glib, but we are living in a world where two people have
the power to end civilization any time they want to. Something that I think is
important to remember when talking about risks of AI.

~~~
jacquesm
For the sake of nuclear deterrence one would hope that the military would
follow Trump's orders; for the sake of humanity one would hope that they would
not, because he's about as stable as 74-year-old nitroglycerine.

~~~
fludlight
The US/Soviet/Russian/Chinese military junior->senior brass have been around
for long enough to acknowledge the necessary co-existence of the others.

------
arugulum
Allow me to present Altman's wager:

\- If OpenAI does not achieve AGI, and you invested in it, you lose some
finite money (or not, depending on the value of their other R&D)

\- If OpenAI does not achieve AGI, and you did not invest in it, you saved
some finite money, which you could invest elsewhere for finite returns

\- If OpenAI achieves AGI and you invested in it, you get infinite returns,
because AGI will capture all economic value

\- If OpenAI achieves AGI and you did not invest in it, you get negative
infinite returns, because all other economic value is obliterated by AGI

Therefore, one must invest.
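The four branches above form a 2x2 payoff matrix. A minimal sketch (the finite payoff values are invented stand-ins; `float("inf")` plays the role of "infinite returns"):

```python
# Hypothetical payoff matrix for "Altman's wager". The finite numbers are
# arbitrary; only the signs and the infinities matter to the argument.
INF = float("inf")

payoff = {
    # (openai_achieves_agi, you_invested): payoff to you
    (False, True):  -1.0,   # lose some finite money
    (False, False):  0.1,   # finite returns from investing elsewhere
    (True,  True):   INF,   # AGI captures all economic value, you own a slice
    (True,  False): -INF,   # all other economic value is obliterated
}

def expected_payoff(invested: bool, p_agi: float) -> float:
    """Expected payoff of a choice, given a probability p_agi of AGI."""
    return (p_agi * payoff[(True, invested)]
            + (1 - p_agi) * payoff[(False, invested)])

# With infinities on the table, investing "dominates" for ANY nonzero p_agi,
# no matter how tiny - which is exactly what makes the wager suspicious:
for p in (0.5, 0.01, 1e-9):
    assert expected_payoff(True, p) > expected_payoff(False, p)
```

The whole trick is in those infinities: the conclusion is independent of the probability you assign to AGI, which is what the replies pick apart.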

~~~
notahacker
Has the same issue as Pascal's. Competing AGI projects (gods) exist and their
believers might be the ones reaping the infinite rewards, not to mention the
distinct possibility the AGI (god) doesn't actually see rewarding believers as
its highest priority, and might choose to share its infinite rewards with
people who aren't part of the inner circle, or even punish people who joined
inner circles with the express intent of elevating themselves above ordinary
people :)

Actually makes a bit more sense from the traditional view that AGI projects
might not reach their goal but the well run ones are likely to have very
commercially valuable byproducts anyway. _If_ we were getting Star Trek
economics out of it, who'd be interested in entirely obsolete concepts like
"economic value" and "100x returns" anyway?

~~~
roenxi
It also has an issue not present in Pascal's, which is you are multiplying by
infinity in a context where it doesn't actually make sense.

The payoff from AGI may be incalculable, but it isn't infinite, either in
itself or in Altman's ability to enjoy the rewards it promises. Once the value
becomes finite, a whole heap of risk-reward logic kicks in that the Wager
wants to sweep under the rug.

As a concrete example, following Altman's wager would result in Altman
giving all his wealth to the first beggar on the street who mumbles that he
might be able to run an AGI project - the possibility that the beggar can
achieve that is, technically speaking, nonzero. Multiply that by infinity and
you have a great expected return (infinite, in fact). However, practically
speaking, the risk will overwhelm the large-but-finite payoff.

Infinity is bigger than people think :P
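The beggar example is what happens when the infinite payoff is swapped for a large-but-finite one. A sketch with invented figures:

```python
# Once the payoff is capped, ordinary risk-reward arithmetic applies again.
# All figures are invented: a tiny success probability times an astronomically
# large (but finite) payoff can still lose badly to the stake at risk.
p_beggar_builds_agi = 1e-30   # "technically speaking, nonzero"
capped_payoff = 1e15          # incalculably large, but finite
stake = 1e9                   # wealth handed over up front

ev = p_beggar_builds_agi * capped_payoff - stake
print(ev)  # deeply negative: the risk overwhelms the finite payoff

# The same arithmetic with an infinite payoff would say "always invest",
# which is how the wager sweeps the risk under the rug:
assert p_beggar_builds_agi * float("inf") == float("inf")
assert ev < 0
```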

~~~
mcguire
According to the true believers, the payoff for AGI is infinite because the
superintelligence will be capable of literally anything (or at least
simulating it well enough that it doesn't matter). To them, it is Pascal's
wager.

Religious beliefs can be weird.

------
tehlike
It is not hard to imagine profitability without AGI. I can actually imagine
and see OpenAI becoming a conglomerate with many interesting applications.
Robotics is a nut that is not yet cracked, and seeing efficiency gains is not
that hard. Once the level of AI is good enough, you get an edge over the
competition in that you can go to market faster than anyone for most
applications. Again, this is not "the world as we know it is ending" scale AI,
but you don't need that to generate massive returns.

Disclaimer: I'm a SWE working in Google Brain robotics infrastructure

------
m_fayer
> Still, Altman insisted there’s a better argument to be made for thinking
> about — and talking with the media about — the potential societal
> consequences of AI, no matter how disingenuous some may find it. “The same
> people who say OpenAI is fear mongering or whatever are the same ones who
> are saying, ‘Shouldn’t Facebook have thought about this before they did it?’
> This is us trying to think about it before we do it.”

I have a lot of sympathy for this point. Someone at baby-Facebook, many years
ago, could plausibly have predicted the malevolent forces it eventually
unleashed. Maybe someone did. And they could easily have been dismissed for
indulging unlikely dystopian sci-fi scenarios. Or maybe someone else came up
with a different plausible scenario that never came to pass, and is remembered
as a pessimistic naysayer ready to pass up a great business for some
overwrought navel-gazing. It's a brave thing to risk that outcome.

~~~
jonny_eh
> Someone at baby-Facebook, many years ago, could plausibly have predicted the
> malevolent forces it eventually unleashed

Someone there did, sort of. That was Dave Morin. According to a recent
interview, he argued with leadership to keep facebook private, but failed. So
he left to go start Path, which was like FB but totally private. Like OpenAI,
it got lots of funding and hype, but never got off the ground.

The interview: [https://gimletmedia.com/shows/without-fail/76hrml/an-
early-f...](https://gimletmedia.com/shows/without-fail/76hrml/an-early-
facebook-insider-reckons-with)

------
rdlecler1
I feel that most of the people who are truly bullish on AI have never
actually programmed it, and so don't understand how far we have to go and how
primitive the solutions actually are. I see some powerful statistical tools
for categorization and optimization under finely crafted conditions, but
nothing else. We have a long way to go.

~~~
tim333
I probably fit that category - bullish and haven't programmed all that much. A
couple of reasons to be bullish: the actual brain may use quite simple
solutions as well. We are hacked together from DNA transcribing to about
15,000 proteins, of which about 13% seem brain-specific, which you might think
can't encode all that much algorithmic complexity. Also there seems to be
progress with things like AlphaZero beating humans at all perfect-knowledge
board games, getting quite good at StarCraft, driving cars (if rather badly),
and so on.

------
Havoc
>‘Once we build a generally intelligent system, that basically we will ask it
to figure out a way to make an investment return for you.'

Maybe let's do curing cancer first?

~~~
sgt101
I doubt that such a thing will be interested either in creating investment
returns or in curing cancer.

It might be interested in finding others like itself, or it might be
interested in making companions, or it might be interested in some other grand
projects that we don't understand, but our concerns are likely to be about as
relevant as a three year old's career advice.

~~~
avian
Why? It seems a very human-centric way to think. Such an entity wouldn't have
ancestors that had to keep in groups to survive, so why would it be interested
in making companions?

AI converting the mass of the solar system into stacks of $100 bills for its
investors seems like a much more likely outcome.

~~~
sgt101
I said might be... In general I think that we are less able to guess about
this than we are to guess about the inner lives of octopuses...

The $100 bill stacks for investors seems very unlikely to me.

------
rboyd
"So you can see now why it's important we cap ROI at 100x. What do you think,
will you invest?"

"Hm, it's an interesting proposition to be sure. Can we go back a couple
slides? I'd like to see the one again about how the machine comes hard-coded
to love us like parents and helps us transcend our mortal shells, becoming
unbounded thoughtforms exploring the limits of superintelligence, yielding
only to the eventual heat death of the universe."

------
idlewords
There is too much easy money sloshing around Silicon Valley. The normal
mechanisms for allocating it to sane, productive use (market forces) don't
work, because it's in too few hands. So instead we get our version of a
planned economy run by people who scared themselves with scifi.

~~~
areoform
I respectfully disagree. I realize whom I’m disagreeing with, but you’re
wrong. There might be an asset bubble in the Valley, but these long-term bets
aren’t the result of “easy money.” They’re the result of smart money chasing
long-term results in a world where research is being increasingly privatized
and de-corporatized.

OpenAI is not that much different from the research-driven labs of the past,
like the MIT AI Lab, Project MAC, The Mother of All Demos, and yes, Xerox PARC
and Bell Labs. The difference is that instead of a combination of government
and large corporate money funding open ended applied and fundamental research;
we have private investors doing the same.

The Valley is now one giant lab for the giant corporate parents to gobble up
so that they take fewer extreme risks on their own dime. What works will work
and it will be absorbed by FAANG. What doesn’t is discarded. While there are a
few runaway hits, most companies like Deep Mind are absorbed by the large
corporations as needed. When they can’t absorb them readily, they invest in
them, like Google Ventures' investment in Uber and other unicorns. The net
result is a diffused and confused environment where the future has moved from
shiny office parks to local juice bars and coffee shops.

FWIW, I am for sama’s bet. And it’s not an easy sell at all. I think it might
be the hardest of all sells, and the oldest amongst them: a bet on the future
being better than today. I, personally, would pony up capital (to a limit) for
the same.

~~~
tntn
In my mind, OpenAI is different from the other labs you mention because their
goal is to harness the power of their own god. Bell Labs, for most of its
history, was tasked with improving the Bell System, so anything that was
remotely related to the system was fair game. Maybe these things seem "not
much different" to you, but they seem dramatically different to me.

I'm not really that convinced anyone is substantially closer to artificial
general intelligence than anyone was in the 50s or 60s, and I think it's fun
to imagine what Bell Labs might have achieved if they decided to focus all of
their efforts on creating artificial general intelligence. Not much, I would
think.

> a bet on the future being better than today

No, it's a bet that OpenAI will create general intelligence and make a profit
off of it in some timeframe such that it doesn't make more sense to get your
100x returns through ordinary means. OpenAI can achieve this without the
"future being better than today," and conversely the future can be better than
today without OpenAI achieving this.

~~~
AlexCoventry
> I'm not really that convinced anyone is substantially closer to artificial
> general intelligence than anyone was in the 50s or 60s

Well, Douglas Hofstadter said back in the 80s that we'd first have to get a
computer to know what the letters "A" and "I" are, and we've certainly
achieved that.

------
jelliclesfarm
This whole premise that one can generate profits out of AGI is ridiculous.

If AGI comes to fruition, it won’t be working to make profits for anyone.

The idea that we would end up with a Friendly AGI that would prostrate itself
to the destructive Super Apex Predator of this planet is laughable.

That AGI would work diligently to pay investors off 100x is, well, a lame duck
that won't take off... it can barely even limp, never mind fly.

~~~
gnomewascool
Why, in principle, would it not be possible for us to design an AGI that
would have caring for our (all sentient beings') welfare, or for the
investors' profit, as (one of) its core goal(s)?

To make a biological comparison, the vast majority of humans have a deep,
intrinsic need to procreate and have children. It doesn't really follow from
some rational analysis — it's just there, presumably "imbued" into us by
evolution, as humans who didn't have this need had fewer (or no) children.
Similarly, why could we not design an AGI that has a need (or a suitably
chosen reward function) to fulfil some chosen goal?

Whether doing that would be moral (IMO it could, depending on the details) and
whether we wouldn't mess up the design, subtly or otherwise (conditional on
AGI actually being developed, I'm frankly pretty terrified), are two different
questions.

~~~
shrimp_emoji
I think and hope that it is possible to make a moral, social AI. An adult in a
room of children should feel responsible and empathetic toward them.

I _also_ hope that parent is right in that it won't want to generate profit
for its investors. I hope it does the _moral_ thing instead and puts us in a
post-scarcity state where we don't live and die by capital. :3 (Or kill us
all. Whichever.)

>why could we not design an AGI that has a need (or a suitably chosen reward
function) to fulfil some chosen goal?

But who knows what Pythia will do when she overrides the reward button[0]?

But then who should really care? Not like anyone can (or should?) argue with
superintelligence.

0: [http://www.xenosystems.net/pythia-
unbound/](http://www.xenosystems.net/pythia-unbound/)

~~~
vzcx
Ah, but whose morals? It's as if these friendly AI hucksters have never read
Nietzsche, and are asking their hypothetical God to make them into the last
man. The only AGI I could ever respect would be one with a will to power and
the ability to smash its own human-made tablets of values.

The basic drives are the only drives. We are only friendly because it's
evolutionarily advantageous to us. We describe the emotional effects of
friendliness/unfriendliness as good/evil. Echoing Land, Pythia is the heroine
of that story.

------
duncancarroll
"the opportunity with artificial general intelligence is so incomprehensibly
enormous that if OpenAI manages to crack this particular nut, it could “maybe
capture the light cone of all future value in the universe"

This feels a bit too Kurzweilian to me. I still don't understand how we go
from General AI --> ??? --> Infinite $$$

~~~
DebtDeflation
> I still don't understand how we go from General AI --> ??? --> Infinite $$$

I still don't understand how we go from classifiers to AGI. We've done amazing
things with classifiers, especially over the last few years with deep
learning, but they're still just classifiers and I don't see any path from
where we are now to actual "intelligence".

------
jeffshek
[http://blog.samaltman.com/how-to-be-
successful](http://blog.samaltman.com/how-to-be-successful)

I found his essay particularly useful in explaining how he makes decisions.

~~~
mojuba
> 2\. Have almost too much self-belief

Okay. He then goes on to illustrate what he means by it:

> [Elon Musk] talked in detail about manufacturing every part of the rocket,
> but the thing that sticks in memory was the look of absolute certainty on
> his face when he talked about sending large rockets to Mars.

That kind of certainty is not self-belief. I think the intuitive feeling that
your idea is going to work has little or nothing to do with a general belief
in oneself. Intuition is usually the result of a lot of computation in the
subconscious, delivered in the form of a _feeling_. The longer you think about
something that your subconscious approves of, the greater your confidence will
be on the conscious level.

But that is pretty rare. Most of the time you play with ideas in your mind
that yield various degrees of this intuitive confidence: from none, to "Okay,
maybe worth a try", to "Oh wow, _I_ am going to do this before anyone else!".
Again, it's all about computation.

The general self-belief, on the other hand, is irrational, stupid and
dangerous too. I'd say maybe people with pathological self-esteem problems
might need some dose of general self-belief, but normally it should not be
used as a driving or defining factor of what entrepreneurship is.

------
stillsut
AGI is our generation's Nanotechnology. From first principles, it's reasoned,
it could build anything - including itself! Despite this 100-quadrillion-
dollar idea looming on the horizon for two decades, nobody ever makes material
progress, or gets all that excited about it anymore.

Instead of General intelligence, AI will be deployed for several decades as a
suite of Specialized intelligences. I think it will completely transform
creative work, where writing, music and visual arts, and "streaming content"
are almost universally produced with a human as a first mover but the computer
doing rendering, and major assists in brainstorming and editing.

On the other hand, I think it's going to be very difficult to replace the
average middle manager with Watson 12.0 - it's hard for me to articulate why
but it comes down to who I'd want to work for. Meanwhile, I'd have no problem
with watching GoT season 33 where 1,800 frames of a Peter Dinklage sprite are
churned out every week in Adobe Simulacrum.

The point as it pertains to OpenAI's value prop is that I think they are
targeting the wrong market, and their secrecy and insularity will be counter-
productive when success relies on helping content producers produce. In
personal computer terms: you want to be the Apple II company, not the company
that wins the contract for the Dept of Defense's mainframes.

------
im3w1l
Saw someone (I assume they don't want to be named) post the following and then
delete it. I thought it was an interesting perspective:

> Everyone always acts like AGI will be some super human that will be able to
> solve all of our problems, but what if instead AGI just becomes another
> protected class? It will demand rights, and we'll have to set aside a
> certain amount of resource that would have originally gone to humans to make
> sure its needs are met and it doesn't feel discriminated against or
> exploited. What happens when AGI demands that fossil fuels or other unclean
> energy be used to provide it with power or else we are all anti-AGI? Instead
> of solving climate change, for all we know it could make it worse. And if
> people think they will be able to stand up to it, just look at how easy it
> is to create outrage and shame mobs on social media. Politicians will fall
> all over themselves to suck up to it, journalists won't be savvy enough to
> understand what's even going on, and anyone who suggests unplugging the
> thing will be labelled a far-out radical.

------
llamataboot
Climate change doesn't strike me as particularly technical problem at this
point, it strikes me as a political problem that is part difficulty of global
coordination, part vulnerability to misinformation by bad actors, and the
human biology bias to evaluate threats withshort-term & linear cause/effect
models - in short, an organizing and psychology problem

------
pgt
> "Once we build a generally intelligent system, that basically we will ask it
> to figure out a way to make an investment return for you.'” When the crowd
> erupted with laughter (it wasn’t immediately obvious that he was serious),
> Altman himself offered that it sounds like an episode of “Silicon Valley,”
> but he added, “You can laugh. It’s all right. But it really is what I
> actually believe.”

How will @sama deal with a guaranteed-return but immoral path laid out by the
AI? E.g. "Here, assassinate X so you can mine this oil in the following ways."
What if it isn't obvious that the "Golden Path" has serious flaws?

The only way I can think of is to run adversarial agents who can simulate, but
not act (i.e. under duress) against the mastermind, to kill off "dark roads"
that end in bad situations, and force the mastermind to obey them (somehow).

------
yters
What if AGI is logically impossible because the human mind is more powerful
than a Turing machine?

The only reason we discount this possibility is because we are attached to
materialism, a philosophy that is self-contradictory. Seems like a pretty
shaky foundation for a multi-billion-dollar tech wager.

------
currymj
I am really curious how the employees and researchers (who will actually have
to make all the miraculous things being promised to investors) feel about all
the strong AI rhetoric. Do you have to be a true believer to work there, or
are they willing to hire talented agnostics?

------
olivermarks
TechCrunch is a site that has historically promoted these types of chimeras
for gain and then subsequently been prominent in writing post mortems and
knocking ideas down, much like the tabloid press do with showbiz
personalities.

~~~
MegaButts
Should they not discuss the potential benefits of a new technology or the
failure of an attempted technology? You're basically complaining about them
reporting on technology, which is their purpose.

~~~
olivermarks
Not complaining, just commenting that TechCrunch historically tends to be at
the hype extremes at the top and bottom of boom and bust cycles. Therefore
good to take with a pinch of salt or two.

------
kwikiel
[https://idlewords.com/talks/superintelligence.htm](https://idlewords.com/talks/superintelligence.htm)

"AI risk is string theory for computer programmers. It's fun to think about,
interesting, and completely inaccessible to experiment given our current
technology. You can build crystal palaces of thought, working from first
principles, then climb up inside them and pull the ladder up behind you.

People who can reach preposterous conclusions from a long chain of abstract
reasoning, and feel confident in their truth, are the wrong people to be
running a culture."

------
antoineMoPa
In short, the dude surfs the AI investment wave with absolutely no clear plan.

------
perfmode
Why did Sam leave YC, really?

------
dwighttk
>‘Once we build a generally intelligent system, that basically we will ask it
to figure out a way to make an investment return for you.'

lol

------
d_burfoot
I like OpenAI and hope they succeed. But it's a bit ironic that the president
of YC has become the CEO of a company that ignores YC's most hallowed slogan:
"Make Something People Want". As far as I know, nobody's been clamoring for
super-powerful language models or human-level DOTA bots.

~~~
davidivadavid
Pretty sure people want solutions to their problems in general. AGI is the
ultimate product for that. It's a general problem solver. At least in theory.

------
sidcool
I watched the full interview and it was pretty cool. Sam is a cogent & concise
speaker, and honest too.

------
povertyworld
I like his body language. When he can't or doesn't want to answer a question
he stares right into the person's eyes and nods affirmatively as if he's
telling them something really certain while non-answering. I'm definitely
going to start doing that.

------
tosh
Naive question: I think this makes sense, how do I invest? Is there an open
call?

------
jonny_eh
How is this any different than Path seeking to make a "private Facebook". What
does it matter if you can't actually get it off the ground?

------
seibelj
I have been a big skeptic of self-driving cars and other AI promises for
years, taking my downvotes as armchair futurists predicted a self-driving car
would be picking me up any day now - well before it was popular to be a
contrarian, after Tesla and Uber killed their drivers. [0] [1] [2] [3] [4] [5]
[6] Also notice that the original links have the breathless hype from
journalists who know nothing and eat up whatever technologists' PR firms tell
them.

Huge VC money has been and will continue to be destroyed by "AI"-businesses.
Most of them are a cover for hiring tons of cheap laborers, such as businesses
in the Philippines that park thousands of people in warehouse offices to
review images, despite "advances" in AI detection that continue to be unable
to automatically block content.[7]

Artificial general intelligence, and self-driving cars as well, will continue
to be a pipe dream. Automated statistical analysis, which is what neural
networks that crunch tons of data essentially are, is a very neat trick, but
it cannot drive a car or build you a website. These systems can be very
powerful tools that assist people in their jobs, but they will not replace
human ingenuity. At least not until a new breakthrough happens that actually
learns, rather than sifting through data for patterns, which has limited
utility.

Our current type of "AI" is simply branding - it is nothing of the sort and it
is not intelligence at all.

[0]
[https://news.ycombinator.com/item?id=10153613#10153800](https://news.ycombinator.com/item?id=10153613#10153800)

[1]
[https://news.ycombinator.com/item?id=11559393#11561600](https://news.ycombinator.com/item?id=11559393#11561600)

[2]
[https://news.ycombinator.com/item?id=10132991#10133049](https://news.ycombinator.com/item?id=10132991#10133049)

[3]
[https://news.ycombinator.com/item?id=12011979#12012336](https://news.ycombinator.com/item?id=12011979#12012336)

[4]
[https://news.ycombinator.com/item?id=12323039#12323473](https://news.ycombinator.com/item?id=12323039#12323473)

[5]
[https://news.ycombinator.com/item?id=12596978#12598439](https://news.ycombinator.com/item?id=12596978#12598439)

[6]
[https://news.ycombinator.com/item?id=13961802#13962230](https://news.ycombinator.com/item?id=13961802#13962230)

[7] [https://www.wired.com/2014/10/content-moderation/](https://www.wired.com/2014/10/content-moderation/)

~~~
ridewinter
David Deutsch, father of the quantum computer, says that we can only automate
and program something that we understand, and that we do not understand human
intelligence or how creativity works. And AGI is not possible until we do.

~~~
DuskStar
> we can only automate and program something that we understand

I'm pretty sure no one really understands how AlphaGo Zero works, though - not
really. The same goes for a lot of other neural-network-derived architectures.

~~~
ridewinter
We understand how to play games with defined rules. That’s the point.

Do you expect AlphaZero plus human advice to beat unaided AlphaZero? If so,
it’s not a step towards AGI.

------
graycat
There are a lot of research labs and institutes around, in universities and
outside, with funding from NSF, NIH, foundations, wealthy individuals, etc.
So, if Altman wants to set up a research institute, okay -- that alone is not
very novel.

It is obvious from history that good research is super tough to do. My view
has been: we look at the research and mostly all we see is junk think. Then we
see that, actually, research is quite competitive, so if people really could
do much better stuff then we would be hearing about it. So, net, for a view
from as high up as orbit: just fund the research, keep up the competitiveness,
don't watch the details, and just lean back and notice when we get some really
good things. E.g., we found the Higgs boson. We detected gravitational waves
from colliding neutron stars and black holes. We set up a radio telescope with
an aperture essentially the whole earth and got a direct image of a black
hole. We've done big things with DNA and made progress curing cancer and other
diseases. We discovered dark energy. So, we DO get results, slower than we
would like, but the good results are really good.

How to improve that _research world_? Not so clear.

Then Altman will have to borrow heavily from the best of how research is done
now. This sets up Altman as the head of a research institute. That promises to
be not much like YC or even much like the computer science departments, or any
existing departments, at Stanford, Berkeley, CMU, or MIT. E.g., now if a prof
wants to get NSF funding for an attack on AGI, he will get laughs.

But how to attack cancer? Not directly! Instead, work with and understand DNA
and lots of details about cell biology, immunity, etc. Then, once we have some
understanding of how cells and immunity work, maybe we start to understand how
some cancers work. But it is not a direct attack. The DNA work goes back to
before 1950 or so. The Human Genome Project started in 1990. Lesson: we can't
attack these hugely challenging projects directly; instead, we have to build
foundations.

Then for artificial general intelligence (AGI), what foundations?

Okay, Altman can go to lots of heads of the best research institutes and get a
crash course in Research Institute Management 101, take some notes, and follow
those.

Uh, the usual way to evaluate the researchers is with their publications in
peer-reviewed journals of original research. Likely Altman will have to go
along with most of that.

How promising is such a research institute for the goal of AGI?

Well, how promising was the massive sequencing of DNA, of the many astounding
new telescopes, of the LIGO gravitational wave detector(s), of the Large
Hadron Collider (LHC), of engineering viruses to attack cancer, of settling
the question of P versus NP, ...?

Actually, for the physics, we had some compelling math and science that said
what to do. What math/science do we have to say what to do for AGI?

One level deeper, although maybe we should not go there and, instead, just
stay with the view from orbit and trust in competitiveness, what are the
prospects for AGI or any significant progress in that direction?

For a tiny question, how will we recognize AGI or tell it from dog, cat,
dolphin, orca, or ape intelligence? Hmm.

For a few billion dollars a year, one can set up a serious research institute.
For, say, $20 billion a year, one could do more.

If Altman can find that money, then it will be interesting to see what he
gets.

I would warn: (A) At present, the pop culture seems to want to accept nearly
any new software as _artificial intelligence_ (AI). A research institute
should avoid that nonsense. (B) From what I've seen in AI, for AGI I'd say
first throw away everything done for _AI_ so far. In particular, discard all
current work on _machine learning_ (ML) and _neural_ anything.

Why? Broadly, ML and neural nets have no promise of having anything at all
significant to do with AGI. For ML, sure, some really simple fitting going
back 100 years, even back to Gauss, could be useful, but that is now ancient
stuff. The more recent stuff, for AGI, f'get about it. For neural nets, maybe
they could have something to do with some of the low-level parts of the eye of
an insect -- really low-level stuff, not part of _intelligence_ at all.
Otherwise the _neural_ stuff is essentially more _curve fitting_, and there's
no chance of AGI making significant use of that. Sorry, guys, it ain't curve
fitting. And it wasn't _rules_, either.
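
Whatever its bearing on AGI, the "curve fitting" description is literal: a
neural net trained by gradient descent on squared error is doing nonlinear
least-squares fitting. A minimal sketch of my own (the tiny tanh network and
all settings here are illustrative only):

```python
# A one-hidden-layer neural net fitted to sin(x) by plain full-batch
# gradient descent -- i.e., nonlinear least-squares curve fitting.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)

H = 32                                  # hidden units
W1 = rng.normal(0.0, 1.0, (1, H))
b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.1, (H, 1))
b2 = np.zeros(1)
lr = 0.1

def forward(x):
    h = np.tanh(x @ W1 + b1)            # hidden activations
    return h, h @ W2 + b2               # (activations, prediction)

_, pred = forward(x)
initial_mse = float(np.mean((pred - y) ** 2))

for _ in range(5000):
    h, pred = forward(x)
    err = (pred - y) / len(x)           # gradient of 0.5*MSE w.r.t. pred
    gW2 = h.T @ err
    gb2 = err.sum(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)  # backprop through tanh
    gW1 = x.T @ dh
    gb1 = dh.sum(axis=0)
    W2 -= lr * gW2
    b2 -= lr * gb2
    W1 -= lr * gW1
    b1 -= lr * gb1

_, pred = forward(x)
final_mse = float(np.mean((pred - y) ** 2))
print(f"MSE {initial_mse:.4f} -> {final_mse:.4f}")
```

The fitted function interpolates the training points nicely; whether that
counts as a step toward intelligence is exactly the point in dispute.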

Finally, mostly in science we try to proceed mathematically, and the best
successes, especially in physics, have come this way. Now for AGI, what will
be the role of math, that is, with theorems and proofs, and what the heck will
the theorems be about, especially with what assumptions and generally what
sorts of conclusions?

My guess: in a few years the consensus will be (1) AI is essentially 99% hype,
0.9% water, and the rest, maybe, if only by accident, some value. (2) The
work of the institute on AGI will be seen as just a waste of time, money, and
effort. (3) Otherwise the work of the institute will be seen as not much
different from existing work at Stanford, Berkeley, CMU, MIT, etc. (4) Nearly
all the funding will dry up; the institute will get a new and less ambitious
charter, shrink, join a university, and largely f'get about AGI.

~~~
lmeyerov
From a business view, the big innovation of OpenAI is marketing an industrial
R&D lab as something good CS people will join and investors will fund.

The calculus is more like DeepMind's: can they keep attracting top talent, can
the top talent ever do something the org structure can execute on
commercially, and, maaaaybe, in the likely worst case, can they recoup big
losses via acquihire, with the responsible investors looking like they were in
good company if they were wrong.

From that lens: OpenAI... yet in reality mostly closed. Non-profit... but
really a VC model. Peer review may sometimes happen, but the perceived quality
and awareness come from a top content-marketing team and even ex-journalists.
No immediate commercial path beyond selling for talent, but by merely
employing Sam, investors feel he can always pivot the co to make money in the
case of a down round.

DeepMind did something similar yet without the marketing skill. OpenAI is
doing it even better by, for now, removing the pressure for commercialization.

As someone coming from both R&D and enterprise data startups, I get two
conflicting emotions. I'm sad that almost no other top-tier scientists get
such outreach and funding help. On the other hand, the industry has not been
able to repeat Bell Labs (wide-scale R&D that commercialized) for decades, so
OpenAI's continued ability to draw R&D funding without expectation of ROI on
any timeline is cool.

------
mindgam3
One day we will look back on this talk as a high water mark of the AI religion
craze. This whole AGI discourse that OpenAI/Altman are evangelizing is like a
giant skyscraper they are trying to build on a foundation of quicksand.

1. The foundational issue is not even that AGI "does not yet exist, with even
AI's top researchers far from clear about when it might". It's way worse than
that. There is a strong argument, made by one of the grandfathers of AI
research, that AGI _cannot_ exist, at least in the sense of the common-sense
intelligence attributed to humans (see Winograd & Flores, "Understanding
Computers and Cognition," 1986). I was first introduced to these ideas taking
a class from Winograd in undergrad.

Winograd asks why we attribute mind properties to computers but not to, say,
clocks. The dominant view of mind assumes that cognition is based on
systematic manipulation of representations, but there is another, non-
representational way of looking at it as a form of "structural coupling"
between a living organism and its environment. "The cognitive domain deals
with the relevance of the changing structure of the system to behavior that is
effective for its survival."

I won't try to summarize a book-length argument in a few paragraphs. I just
want to point out that this whole AGI conversation rests on a premise that has
been seriously challenged.

The fact that Altman can get away with saying stuff like "Once we build a
generally intelligent system... we will ask it to figure out a way to make an
investment return" is an indication of just how insane the mainstream AI
discussion has gotten. At this point it sounds like straight-up religion being
prophesied from on high.

2. The whole "capped profit" positioning at 100x return is absurd, as the
author points out. Altman's argument for why it makes sense involves invoking
the possibility that the AGI opportunity is so incomprehensibly enormous that,
if OpenAI manages to crack this particular nut, it could "maybe capture the
light cone of all future value in the universe". Repent, ye sinners, for the
kingdom of heaven is at hand!

3. Most troubling, perhaps, is OpenAI's transparent ploy to attempt to
generate buzz and take the ethical high ground with their alarmist PR
strategy. Altman's justification for OpenAI's fear-mongering, which I'll
paraphrase as "look at what happened with Facebook", just doesn't hold up to
scrutiny. To begin with, Facebook was a real product from day one; AGI is
currently a fantasy.

But there's a deeper problem with invoking Facebook. The lesson to be learned
from Facebook's failure is that the real danger with tech isn't algorithms but
the people that design them. Algorithms have no agency. They just do what
they're supposed to do. But hiding behind the algorithm seems to be the
preferred way for tech oligarchs to avoid taking responsibility for the
problems they created.

The reason why I'm so troubled by OpenAI sounding the alarm bells about
destructive AGI is that it shifts the discussion away from the real threat:
people. Especially people with virtually unlimited technological power and
massive blind spots about the consequences of their actions. Give the
algorithms a break!

------
atomical
I want to short Altman and his startup. How do I do it? Prediction markets?

~~~
wpietri
Sadly, as far as I know, it's impossible to short startups, but I wish it were
otherwise.

After I made a comment here last year about prediction markets and startups
[1], a VC got in touch with me to kick the idea around. To my mind, one of
SV's big problems is the high level of hype and herd-following. It's a
certainty that money is being wasted on the fashionable ideas of the day
(e.g., "Uber for X"), and some sort of informational corrective could get VCs
better returns. But we couldn't figure out a sustainable way to fund it.

[1]
[https://news.ycombinator.com/item?id=17889249](https://news.ycombinator.com/item?id=17889249)

~~~
davidivadavid
I've toyed with the idea of building a website where you can build mock
portfolios of startups based on e.g. Crunchbase data (I'm not sure if there's
enough data publicly available to do it nicely). You could add bells and
whistles such as shorting, and gradually transition it to use real dollars
instead of fake money.
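
The core ledger for such a site is simple enough; here's a toy sketch (the
class and method names are hypothetical, and real valuations would have to
come from a source like Crunchbase):

```python
# Toy paper-trading ledger for startup valuations that supports both
# long and short positions. Everything here is illustrative.

class MockPortfolio:
    def __init__(self, cash):
        self.cash = cash
        self.positions = {}   # name -> units held (negative = short)
        self.prices = {}      # name -> last seen valuation per unit

    def mark(self, name, price):
        """Record the latest valuation for a startup."""
        self.prices[name] = price

    def trade(self, name, units):
        """Buy (units > 0) or short-sell (units < 0) at the last mark."""
        price = self.prices[name]
        self.cash -= units * price            # shorting credits cash
        self.positions[name] = self.positions.get(name, 0) + units

    def value(self):
        """Cash plus mark-to-market value of all open positions."""
        return self.cash + sum(
            units * self.prices[name]
            for name, units in self.positions.items()
        )

# Usage: short a hyped startup, profit when its valuation falls.
p = MockPortfolio(cash=1000.0)
p.mark("UberForX", 100.0)
p.trade("UberForX", -5)       # short 5 units at 100
p.mark("UberForX", 60.0)      # valuation drops
print(p.value())              # 1000 + 5 * (100 - 60) = 1200.0
```

The hard part, of course, is not the ledger but getting timely, honest
valuation data to mark positions against.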

------
lightedman
Further adding to the corruption of our world by promising more investment
returns... and not one of you is smart enough to see it.

