
Eliezer Yudkowsky on the Singularity, Bayesian Brains and Closet Goblins - gpresot
http://blogs.scientificamerican.com/cross-check/ai-visionary-eliezer-yudkowsky-on-the-singularity-bayesian-brains-and-closet-goblins/
======
AndrewKemendo
I am heavily biased as I was active on Overcoming Bias (irony?) and then later
the Less Wrong forums, but every interaction I have had over the years with
Yudkowsky or his followers has been insufferable.

Beyond just the pathos of that, I take major issue with the lack of technical
depth of MIRI and the rest of these "futurist" prognosticating semi-doomsayers
with respect to current AI state of the art, or even proposals for actual
roadmaps to AGI.

Their Step 1: Create AGI; Step 2: ??; Step 3: Doomsday reasoning chain has led
to what is in my mind the silly concept that AGI can be built to be "safe" for
humanity. This leads to false hope and to things like the AI Safety Pledge, aka
"Research Priorities for Robust and Beneficial Artificial Intelligence"[1] -
which, admittedly, I signed largely because my research adviser did and it was
the thing to do at the time.

That said, I thought his response on how he differs from Kurzweil - or rather
the conclusions he comes to - is pretty spot on.

[1] [http://futureoflife.org/ai-open-letter/](http://futureoflife.org/ai-open-letter/)

~~~
VladKovac
The reason they don't have roadmaps to AGI is because they do _not_ want AGI
to be made before the Friendly AI problem has been solved.

When you think about the vastness of mind design space, plus all the ways
we've made mistakes reasoning about even simpler and stupider optimization
processes like evolution, I don't think they're being too silly.

If any of you want a quick and insightful introduction, this video is very
good: [https://vimeo.com/26914859](https://vimeo.com/26914859)

~~~
AndrewKemendo
>The reason they don't have roadmaps to AGI is because they do not want AGI to
be made before the Friendly AI problem has been solved

Right, which is an impossibility in my opinion. There is an inherent conflict
between systems with asymmetric power and capability in a resource constrained
environment. Trying to get around that fundamental principle is an exercise in
futility.

~~~
greenrd
Could you elaborate? "Systems with asymmetric power" presumably refers to the
AGI - or does it? Maybe you are referring to the AI box or the utility
function design or the "nanny AI" which is meant to contain the AGI? I don't
know what "capability in a resource constrained environment" refers to because
that could refer to pretty much anything in our universe or any finite
universe.

------
retbull
For all that I loved his HPMOR book, I can't stand to read him speaking the
way he does. It feels like he tries too hard to sound smart and has built his
own hill that he gets to be king of, ignoring the rest of the world from the
top of it.

~~~
dsjoerg
I disagree that he tries too hard to sound smart. I despise the tendency to
unnecessarily use complex words when simple ones will do. Police officers, for
example, are rightly famous for this tendency. In Eliezer's writing and in
this Q&A, the complex words are there because they are the most concise and
precise expression of what he's trying to say -- simpler words would either be
inaccurate or would have to go on at length.

For example:

Horgan: Is college overrated?

Yudkowsky: It'd be very surprising if college were underrated, given the
social desirability bias of endorsing college.

You'd need a few paragraphs to explain what Yudkowsky means to people who
don't already understand social desirability bias.

~~~
snowwrestler
Yudkowsky has a way of thinking that he believes everyone else should adopt.
He is skilled at using phrasing to frame his statements in his way of
thinking.

For example, instead of saying something like, "a lot of people think college
is good," he phrases it in terms of a bias. Why? Because if he can get the
other person talking about the subject in terms of biases, then he has
advanced his own favorite way of thinking.

But is liking college really a bias? And if so, compared to what norm? He
offers no explanation; he just asserts it.

~~~
greenrd
I don't think he's saying that liking college is a bias. Or rather, he is, but
he's saying something subtly different (which implies the former): that those
who denigrate college, or are "meh" about college, take a "social desirability
hit" - they are likely to be seen as unintelligent manual labourers,
Philistines, annoying self-taught outliers, insulting to the people who have
put in the hard work to go to college, or some combination. Young people grow
up in such an environment, and many of them come away with a strong belief
that college is a necessity, without ever considering the evidence
dispassionately.

------
saint_fiasco
>I'd try to do all the things smart economists have been yelling about for a
while but that almost no country ever does.

Yudkowsky then mentions NGDP level targeting, consumption taxes, land value
tax, negative wage taxes, Singapore's healthcare and Estonia's e-government.

Is there an economics textbook or article that explains in layman's terms what
those things are and why they are good ideas?

~~~
seiji
NGDP level targeting says the central bank should keep total nominal spending
(nominal GDP) growing along a steady path, easing policy during downturns to
fill the gap back to the previous trend level. Basically, don't let an
economic downturn interrupt people's lives - keep overall spending, and with
it jobs and incomes, on track.

land value tax comes from the idea that nobody should be able to "own" land;
you should rent it. When there's a more profitable use for your land than the
one you are currently exploiting, you must give the land up. The UK has things
like 99-year leases on land, instead of the US model where you "buy" land and
own it until the heat death of the universe.
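
As a toy illustration of why idle land becomes expensive to hold under such a
tax (the 5% rate and the valuation below are made up for illustration, not
taken from any real proposal):

```python
def land_value_tax(assessed_land_value, rate=0.05):
    """Toy land value tax: a flat annual levy on the unimproved value
    of the land itself, ignoring any buildings or business on top of it."""
    return rate * assessed_land_value

# A $400k plot costs $20k/year to hold whether it is used or not.
print(land_value_tax(400_000))
```

Because the levy is the same whether the plot sits idle or hosts a business,
holding land unproductively is a pure loss - which is the mechanism pushing
owners toward the most profitable use.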

negative wage taxes are like a basic income if you set the cutoff really high.
The government pays you because you don't earn enough (maybe for reasons
outside your control, like all the jobs you are qualified for are now done by
robots).
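
For concreteness, here is a toy version of that payout rule (the $30k
threshold and 50% phase-out rate are invented for illustration):

```python
def negative_income_tax(earned, threshold=30_000, phaseout=0.5):
    """Toy negative income tax: below `threshold`, the government pays
    `phaseout` times the shortfall; above it, no subsidy (ordinary
    taxes would apply instead)."""
    shortfall = threshold - earned
    return phaseout * shortfall if shortfall > 0 else 0.0

print(negative_income_tax(0))       # no earnings: maximum subsidy, 15000.0
print(negative_income_tax(20_000))  # half the 10k shortfall: 5000.0
print(negative_income_tax(40_000))  # above the threshold: 0.0
```

The subsidy shrinks gradually as earnings rise, which avoids the benefits
cliff of an all-or-nothing program.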

Singapore has cheap, centralized, computerized healthcare (funded largely
through mandatory health savings accounts) that doesn't cost anywhere near as
much per person-year as in the US.

Estonia lets you get something resembling a "mini passport" with no
international recognition (perhaps some cross-EU identity recognition, but no
residency benefits), but with legal ties to an Estonian "e-residency" so you
can verify your identity online electronically (chipped smart cards verified
by government records, etc).

~~~
LesZedCB
> land value tax comes from the idea that nobody should be able to "own"
> land; you should rent it. When there's a more profitable use for your land
> than the one you are currently exploiting, you must give the land up. The
> UK has things like 99-year leases on land, instead of the US model where
> you "buy" land and own it until the heat death of the universe.

That sounds like the scariest solution to the problem of land ownership. There
are already plenty of cases where the government decides that somebody must
give their land up so that more value can be extracted from it, under eminent
domain laws. Who gets to decide when land is not producing enough value?
Politicians who are bought out by private corporations controlled by a board
of directors, none of whom live anywhere near the land being expropriated?
Land should be collectively owned by the people living on it and producing
from it.

~~~
jsprogrammer
Land value tax sounds like a proposal to return directly to feudalism.

"Using land the most profitably" (i.e. extracting the most tribute) will
confer de facto ownership on the controlling entity, while everyone else
becomes a tenant of the few controllers.

~~~
saint_fiasco
I think what the grandparent meant was that if you aren't using the land
profitably, you are going to _want_ to 'sell' it to someone who can use it
more profitably and is therefore willing to pay a high price for it. If you
didn't do that and just held onto the land, you would lose money because of
the high land tax.

I think 'selling' here means "giving the other guy the privilege of having to
pay lots of taxes instead of you".

~~~
LesZedCB
I think renting land in any fashion is morally reprehensible. Though as an
anarchist communalist, I don't fit in much with the largely neoliberal HN
crowd.

Ownership should be restricted to the community of inhabitants of the land.
The processes around controlling the land would be voted on democratically,
and a federation of community ownerships with local representation would deal
with extra-community processes around broader land decision making.

------
jo6gwb
Knowing nothing about this field, reading the comments did serve to temper my
enthusiasm for the article.

~~~
saint_fiasco
I only see three comments: one from a guy who complains that Yudkowsky was
mean to him and has no academic credentials; one from a guy who calls
Yudkowsky overconfident (in what, his belief that we should be cautious?) and
naive for being a libertarian; and one from a guy who admits he also knows
nothing about the field.

------
andrewprock
Unfortunately, the questions the interviewer asks are all rather trite. On the
flip side, most of the answers are rambling and barely coherent. Not the
finest presentation of the field as a whole. :(

~~~
snowwrestler
I find Horgan very frustrating as a writer. He seems to prefer trite
controversy over deep understanding.

------
basch
>I don't think that humans and machines "merging" is a likely source for the
first superhuman intelligences.

I have to disagree with him a little bit. I think "merging" is already
happening and will continue to accelerate. AI/human feedback loops are easy to
conceptualize: the AI does its own thinking, and when it is unsure it consults
an array of humans for their opinion. Repeat ad infinitum. Maybe a second
array of humans proofreads AI decisions, watching for conclusions they
disagree with. It might not be a Matrix plug in the back of your head, but
data centers and human arrays stuck in a feedback loop (communicating
bidirectionally with screens, cameras, and eye movements) probably offer a
scenario that outperforms either the machines or the humans in isolation.
Each augments the other's limitations.

Are machines faster by themselves, or with a human co-processor to consult at
their discretion? Can a machine go "hey, I'm not great at this task yet, but
humans seem to excel at it"?
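
That loop can be sketched in a few lines (the length-based confidence
heuristic and the 0.8 floor are made-up stand-ins, not a real model):

```python
def ai_predict(task):
    """Stand-in for a model: returns (answer, confidence).
    Here we pretend the AI is only confident on short, familiar tasks."""
    confidence = 0.9 if len(task) < 20 else 0.3
    return "machine answer", confidence

def ask_humans(task):
    """Stand-in for routing the task to an array of human reviewers."""
    return "human answer"

def hybrid_decide(task, confidence_floor=0.8):
    """AI/human feedback loop: the AI answers on its own when confident,
    and defers to the human array when it is unsure."""
    answer, confidence = ai_predict(task)
    if confidence < confidence_floor:
        return ask_humans(task)
    return answer

print(hybrid_decide("short task"))                    # machine answer
print(hybrid_decide("a long and unfamiliar request")) # human answer
```

The second proofreading array would be another pass of the same shape, run
over the machine's confident answers instead of its uncertain ones.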

~~~
pinouchon
When Ray Kurzweil talks about merging with the machine, he talks about it in
the literal sense (nanobots in your body, or your mind uploaded to a
computer). You seem to view the merging in a broader sense, which I agree is
happening.

~~~
basch
[http://abstrusegoose.com/171](http://abstrusegoose.com/171)

[http://blog.dilbert.com/post/102627914061/dilbert-pocket](http://blog.dilbert.com/post/102627914061/dilbert-pocket)

(the Philip K. Dick in me thinks that if we merge, the machines would absorb
us, not us absorbing them into our blood. the Borg > cyborgs)

------
mmagin
I'll just leave this here:
[http://rationalwiki.org/wiki/Roko's_basilisk](http://rationalwiki.org/wiki/Roko's_basilisk)

~~~
maaku
Dude that was years ago. Give it up.

