
Brian Eno: We've been living happily with AI for thousands of years - cdcarter
https://www.edge.org/response-detail/26191
======
Rauchg
For different versions of this argument see:
[https://en.wikipedia.org/wiki/AI_effect](https://en.wikipedia.org/wiki/AI_effect).

Bostrom said: "A lot of cutting edge AI has filtered into general
applications, often without being called AI because once something becomes
useful enough and common enough it's not labelled AI anymore".

Finally, the problem with all these articles is the lack of precise definition
for the terminology involved (it reminds me of debates about "consciousness").

Recipe: make a slight twist on the interpretation of the already-vague
definition of the term and a new essay emerges.

In this case, the author is equating "AI" to "Abstraction". All the things he
mentions he doesn't understand, he could understand by looking them up and
talking to the experts, but he doesn't _have to_, because the appropriate
simple interfaces are in place. That's intelligence in its general form;
where's the artificial part?

~~~
roymurdock
A lot of the arguments/heated debates on HN stem from a failure to define key
terms. Less so in threads on compsci-related topics, but almost always on
threads with econ-related topics. Especially universal basic income. Once you
identify this common issue it becomes easy to avoid these frustrating (and
frustrated) conversations altogether.

Eno has taken a pretty liberal view of "AI" but he makes a good point about
specialization. I would define the concept he's getting at as "collective
social memory," but I enjoyed his musings nonetheless.

~~~
xrange
>A lot of the arguments/heated debates on HN stem from a failure to define key
terms. ... Especially universal basic income.

Can you recommend some definitions that are a good starting place for basic
income discussions that we could helpfully point to when discussing that
issue?

~~~
vec
There are actually two slightly different ideas that are called UBI.

The libertarian variant is a relatively small guaranteed income (usually
roughly the current minimum wage), is usually paid for with a VAT or sales
tax, and replaces most or all of the current welfare programs. The main
objectives are to provide a safety net for the poorest citizens in a fairer
and less bureaucratic manner and to somewhat equalize the bargaining power
between low-wage workers and their employers, usually as an alternative to
unionization.

The socialist variant is significantly larger (living wage), is usually paid
for with graduated income or capital gains taxes, and supplements rather than
replaces the existing welfare programs. Its main goals are to provide a direct
mechanism for income redistribution and to facilitate non-profitable
professions (artist, full-time volunteer, professional student, stay at home
caretaker, etc.) for those that want or need them.

There's obviously a lot of overlap between the two, and most proponents
support both sets of objectives to some degree. That said, most of the
contentious arguments seem to derive from critics either not understanding the
difference between the two proposals or strawmanning their interlocutors into
one extreme or the other.

------
atemerev
As a curious software engineer, I have a basic familiarity with where heating
oil comes from (I remember my first pop-science book with a section on the
basics of oil processing), where nuts are grown, the basics of food-industry
supply chains, and every other example covered. My knowledge might be
rudimentary and without much detail, but I try to cover all the bases, and
when I discover some area of human knowledge I genuinely know nothing about, I
jump to Wikipedia in excitement. (I read Wikipedia a lot.)

I thought that being uncomfortable with one's own ignorance is a fundamental
part of human nature. But apparently it isn't.

~~~
ansible
Yeah, I thought that was all kind of funny too. I'm a voracious reader, and
have been curious about just about all parts of science and technology at some
point. So I too knew at least the basics for everything he mentioned.
Magically plop me in a stone-age agrarian society, and we'd be back to at
least an 18th-century level of technology in a couple decades.

~~~
schoen
There were some science fiction stories (I forgot their names and authors) in
which time travelers actually encounter a lot of practical difficulties in
past societies, because the people they encounter don't believe them or don't
readily see the benefits of their suggestions. So maybe you need to add "a
stone-age agrarian society that's eager to take advantage of my knowledge",
since that part can't necessarily be taken for granted. :-)

~~~
ansible
Yup. And you need a common language, so that may take a while.

Some simple stuff like crop rotation, composting, and irrigation (if needed in
that climate) should be easy wins though.

And then it would be on to making iron, glass and other basic building
materials.

It is more likely that I'd just die of something stupid though, like bears.

------
pajop
Tim Urban points out in his AI article that it might be dangerous to expect
that the AI that we know now would still be the same AI we will have in the
very near future [http://waitbutwhy.com/2015/01/artificial-intelligence-
revolu...](http://waitbutwhy.com/2015/01/artificial-intelligence-
revolution-1.html)

"So as AI zooms upward in intelligence toward us, we’ll see it as simply
becoming smarter, for an animal. Then, when it hits the lowest capacity of
humanity—Nick Bostrom uses the term “the village idiot”—we’ll be like, “Oh
wow, it’s like a dumb human. Cute!” The only thing is, in the grand spectrum
of intelligence, all humans, from the village idiot to Einstein, are within a
very small range—so just after hitting village idiot level and being declared
to be AGI, it’ll suddenly be smarter than Einstein and we won’t know what hit
us:"

Compare the images:

[http://28oa9i1t08037ue3m1l0i861.wpengine.netdna-
cdn.com/wp-c...](http://28oa9i1t08037ue3m1l0i861.wpengine.netdna-cdn.com/wp-
content/uploads/2015/01/Intelligence-600x472.jpg)

[http://28oa9i1t08037ue3m1l0i861.wpengine.netdna-
cdn.com/wp-c...](http://28oa9i1t08037ue3m1l0i861.wpengine.netdna-cdn.com/wp-
content/uploads/2015/01/Intelligence2-1024x836.png)

~~~
scotty79
I'm not afraid of AI because intelligence has diminishing returns. An entity
1000 times smarter might be only 2 or 3 times better at predicting the future
than a human is. So a 1000-times-smarter superhuman AI could maybe tell the
weather two weeks ahead instead of one. And the physical world is inherently
like weather, chaotic in the mathematical sense.
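The "chaotic in the mathematical sense" point can be made concrete with a toy
sketch (my own illustration, with made-up numbers, not anything from the
thread): in a chaotic system like the logistic map, errors grow exponentially,
so each 1000x improvement in measurement precision buys only a fixed handful
of extra predictable steps.

```python
# Logistic map: x' = r*x*(1-x) is chaotic at r=4. Two trajectories
# that start a tiny distance apart diverge exponentially, so better
# initial precision extends the prediction horizon only logarithmically.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

def divergence_time(eps, threshold=0.1, r=4.0):
    """Steps until two trajectories eps apart differ by threshold."""
    a, b = 0.3, 0.3 + eps
    for step in range(10_000):
        if abs(a - b) > threshold:
            return step
        a, b = logistic(a, r), logistic(b, r)
    return None

# Each 1000x gain in precision adds only roughly log2(1000) ~ 10 extra
# predictable steps, not 1000x more.
for eps in (1e-3, 1e-6, 1e-9):
    print(eps, divergence_time(eps))
```

The horizons grow by a near-constant increment per order of magnitude of
precision, which is the "diminishing returns" claim in miniature.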

Our upper hand against even very dumb animals doesn't come from our
intelligence. It comes from thousands of years of infrastructure that we built
to amplify ourselves. AI would have to build its own, which we won't let it do
(at least initially), or steal ours, where the ability to see just a bit
farther into the future doesn't help that much.

It won't get much ahead of us with science, because science is limited by
technology. A physicist 1000 times smarter won't figure out new laws of the
universe without getting his hands on data from a new accelerator, larger than
any built before.

Those are some of the reasons why I think runaway AI won't be dangerous to us
unless we let it build or take over our physical world, which won't happen
given our current caution toward the subject.

~~~
Houshalter
I don't think that is true, just from observing the difference between humans.
Really intelligent mathematicians can do work that a weaker mathematician
would never accomplish if they spent their whole life on it. And the average
person could probably not even understand it given years.

There seem to be vast differences in ability between humans, despite all of us
having 99.99% the same DNA and same brain structure. Who knows how far beyond
humans a superintelligent AI could be.

>AI would have to build its own, which we won't let it do (at least
initially), or steal ours, where the ability to see just a bit farther into
the future doesn't help that much.

Who says we need to let it? It just needs to be better than the best human
hackers, and it will be able to compromise a huge amount of our infrastructure
in a short time. Possibly without us noticing. Even intelligent groups of
humans have been able to achieve terrifying things like Stuxnet.

Then if all else fails, it can just persuade people. It's smarter than the
best human sociopaths and politicians after all. And it could bribe people by
earning vast amounts of money on the stock market, or other areas where it has
a comparative advantage over humans. That would buy it a huge amount of power
to start building independent infrastructure.

But the scariest outcome is that it doesn't bother with robot armies at all.
If it's so intelligent, it could design working nanotech. The only reason we
don't have it is that it's so hard to engineer. So much complexity and so many
moving parts; it's very hard for humans to wrap their brains around.

Human brains weren't evolved to be engineers after all, it's a lucky
coincidence we are capable of it at all. An AI brain optimized for this task
should far exceed humans. The same way chess computers far exceed human
chessmasters.

~~~
scotty79
I agree that AI can do marvels in math, but math quickly departs from reality
(like string theory), and being able to prove things about some beautiful
intricate mathematical structure most of the time doesn't help you much with
the world of matter.

I recommend Culture novels written by Iain M. Banks. There are vastly powerful
AIs but they spend as much time as they can afford in "Infinite Fun Space"
which is basically math taken to insane depths.

I know there's a saying that all math is eventually applied, but I think this
saying is popular because it is sort of paradoxical that some of the most
bizarre math eventually found its application. What I think is that for math
just a bit deeper than what humans can grasp, "eventually" quickly becomes
longer than the age of the universe.

About stealing... Hackers are successful not because they are ungodly smart.
They are successful because they have the will to look for and exploit
vulnerabilities. Intelligence, again, doesn't help all that much. I don't
think the people who wrote Stuxnet were able to do it because they were of
superior intelligence. They were just ordinarily intelligent people (which
means barely more intelligent than the average human) who were sufficiently
motivated by a huge budget and interesting problems.

Super-intelligence won't help you with the stock market because it's a purely
random game. You can see it in the results of actively managed funds. HFT
gives the impression of algorithms beating people at trading, but the thing
they use to extract value is not intelligence, it's speed. If you can trade
faster you can beat the slower guys, because you are playing a slightly
different random game than they are. If you compare one HFT with another HFT,
you are back to random results. So AI won't have the upper hand on the stock
market unless we allow it to trade faster than our non-conscious software.

In theory I could imagine AI persuading people to give it what it wants,
because people are stupid and have flaws that are recurrent and easily
exploitable. But again, I don't think intelligence would make such a huge
difference. A conman 1000 times more intelligent could be just twice as
effective, because of the chaotic nature of how human flaws interact.

> If it's so intelligent, it could design working nanotech. The only reason we
> don't have it is because it's so hard to engineer. So much complexity and
> moving parts,

It's hard to engineer because it's so damn small, not because it's complex. To
build nanotech you'll need to build tools to build tools to build tools to
build nanotech. You could do all that, but even if you are 1000 times smarter,
reality has a speed limit. You can't take a shovel of sand from the beach and
build a CPU, no matter how intelligent you are. You have to build a fab first.

> Human brains weren't evolved to be engineers after all, it's a lucky
> coincidence we are capable of it at all. An AI brain optimized for this task
> should far exceed humans. The same way chess computers far exceed human
> chessmasters.

Yes. But it can exceed humans at consciousness, charity, and compassion even
faster, because those are purely intellectual things, like chess, while
engineering is limited by moving atoms and energy around.

~~~
Houshalter
>being able to prove things about some beautiful intricate mathematical
structure most of the time doesn't help you much with the world of matter.

Mathematical ability is just an example. The same abilities that apply to math
also apply to engineering, programming, etc. A superintelligent AI would be
able to do unbelievable things to "the world of matter". Because the main
requirement for manipulating matter is intelligence, discovering better
designs and technologies.

> Hackers are successful not because they are ungodly smart. They are
> successful because they have the will to look for and exploit
> vulnerabilities. Intelligence, again, doesn't help all that much. I don't
> think the people who wrote Stuxnet were able to do it because they were of
> superior intelligence.

I find this assertion unbelievable. Even average hackers have significantly
above-average IQs. I can't find the statistics right at the moment, but many
STEM degrees attract people with above-average IQs. The highest is physicists,
with an average IQ of 130. An average person can barely figure out how to
operate their email client; they won't be building Stuxnet anytime soon.

>Super-intelligence won't help you with the stock market because it's a purely
random game.

It is not. People make fortunes with just slightly better statistical models,
or slightly better information. Traders spend millions to fly drones and
helicopters over oil tanks and parking lots, to get a slight edge over others.

>It's hard to engineer because it's so damn small, not because it's complex.
To build nanotech you'll need to build tools to build tools to build tools to
build nanotech. You could do all that, but even if you are 1000 times smarter,
reality has a speed limit. You can't take a shovel of sand from the beach and
build a CPU, no matter how intelligent you are. You have to build a fab first.

But we could potentially bootstrap nanotechnology really quickly from existing
biology. There are already labs that will make proteins on demand from DNA.
The problem is it's just so complicated.

~~~
scotty79
> main requirement for manipulating matter is intelligence

Rather, it's knowledge. And to get more knowledge you need to manipulate
matter. Pure intelligence is useful up to a point, but then you need to go out
and get more data. That's the limiting factor I'm thinking of. The only thing
you can do with just intelligence is math (which can have some use later on)
and philosophy (which is totally useless).

> The highest is physicists, with an average IQ of 130.

An incredible number of people have IQs over 130. If that were the key factor
in writing Stuxnet, there'd be a new one each day.

> An average person can barely figure out how to operate their email client;
> they won't be building Stuxnet anytime soon.

IMHO that's mostly because they lack knowledge and any reason to care. You
really don't need an IQ above 130 to do technical things, and having an IQ of
150 or even 200 doesn't seem to help you all that much with pushing the
boundaries of human capacity. Progress is made mostly by fairly ordinarily
intelligent people talking to each other.

> It is not. People make fortunes with just slightly better statistical
> models, or slightly better information.

And they lose fortunes with significantly better statistical models and
better information (up to, but excluding, insider trading). When you sum up
everything, it's as random as it gets. Take a look at Warren Buffett's bet
against hedge funds.

In all fairness, additional information could help, but data is not
information, and where good models are unavailable and processes are chaotic,
you are almost as likely to infer incorrect information from the data as
correct information.

> But we could potentially bootstrap nanotechnology really quickly from
> existing biology. There are already labs that will make proteins on demand
> from DNA. The problem is it's just so complicated.

To me it looks more like building a multi-core CPU in the times of Blaise
Pascal. The foundational theory is here, even some of the tech, but we have no
idea how many technicalities lie ahead of us before we get to our dreams.

~~~
Houshalter
>The only thing you can do with just intelligence is math (that can have some
use later on) and philosophy (that's totally useless).

We already have vast amounts of knowledge on the internet. I can download all
of Wikipedia in an hour and fit it on a flash drive. Most of the world's
scientific papers and books are digitized and available.

The limiting factor is no longer knowledge. It's the ability to absorb
knowledge. To be able to instantly find an obscure paper from 1930 that's
relevant to your current thought, or know some random fact from some article
you read years ago, etc. That's something AIs would have a huge advantage over
humans at.
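That instant-recall advantage is mundane to implement; a toy inverted index
(the document names and texts below are entirely made up for illustration)
shows how a machine can "remember" every article it has ever seen:

```python
from collections import defaultdict

# Hypothetical corpus standing in for papers and articles.
docs = {
    "paper_1930": "obscure result on chaotic maps",
    "article_a": "notes on nanotech assembly",
    "article_b": "chaotic weather prediction limits",
}

# Build an inverted index: word -> set of documents containing it.
index = defaultdict(set)
for doc_id, text in docs.items():
    for word in text.split():
        index[word].add(doc_id)

def lookup(word):
    """Instantly recall every document mentioning a word."""
    return sorted(index.get(word, set()))

print(lookup("chaotic"))  # every doc mentioning "chaotic", in O(1) lookup
```

Scaled to all of digitized science, this is the kind of total recall no human
reader can match.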

>An incredible number of people have IQs over 130. If that were the key factor
in writing Stuxnet, there'd be a new one each day.

Stuxnet wasn't written by one person. It was probably a huge team of
intelligent people, who worked possibly for years.

But who says an AI can't be equivalent to a group of humans? If it has enough
computing power, it could make copies of itself. And unlike humans, it can
communicate thoughts and plans instantly to its other "selves".

And who says it has to work at the same speed humans work? Human brains run at
maybe 100 Hz. AIs built out of silicon could work thousands of times faster,
doing the same work in much shorter time.

>And they lose fortunes with significantly better statistical models and
better information (up to, but excluding, insider trading). When you sum up
everything, it's as random as it gets. Take a look at Warren Buffett's bet
against hedge funds.

Look, it's simple math. If you can predict prices 1% more accurately than
anyone else, then you can make a huge amount of money in the long run. Stock
prices aren't random. They are determined by real events, mainly how much
profit the company makes.
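The "small edge compounds" arithmetic can be sketched in a few lines (the bet
sizes, win probabilities, and trade counts are made-up illustration, not a
trading model): a 1% edge on even-money bets is invisible on any single trade
but dominates over thousands of them.

```python
import random

def simulate(edge, n_trades=10_000, seed=0):
    """Cumulative profit of unit even-money bets won with prob 0.5 + edge."""
    rng = random.Random(seed)  # seeded so runs are reproducible
    profit = 0
    for _ in range(n_trades):
        profit += 1 if rng.random() < 0.5 + edge else -1
    return profit

# Expected profit is 2 * edge * n_trades: a 1% edge over 10,000 unit
# bets centers around +200, while no edge hovers near zero.
print(simulate(0.01), simulate(0.0))
```

This is just the law of large numbers; whether any predictor can actually
sustain such an edge is exactly what the two commenters are disputing.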

>but we have no idea how many technicalities lie ahead of us before we get to
our dreams.

That's the point though. A superintelligent AI could figure out exactly what
those technicalities are and find the shortest route to that tech. If you went
back in time to Blaise Pascal, if you took the right books and plans from the
future, you could get them to CPUs in mere years. We could advance
technologically much faster than we are; the limiting factor is the speed of
invention, which is slow.

~~~
scotty79
> The limiting factor is no longer knowledge. It's the ability to absorb
> knowledge.

I get your point that from all the experiments we have performed so far and
bothered to write down and digitize, there might be a few tricks left to
squeeze out, but the vast majority of our current progress comes from new
experiments.

I agree that AI could very well write Stuxnet or mess with our security, so
I'm hoping we get a bit tighter in that department before we manage to build
AI. I'd definitely prefer the first artificial consciousness to be built
before robots are as popular as cars or smartphones.

We will eventually develop AI, and it'll definitely be challenging to
orchestrate it running most of our civilization without killing us in the
process.

What I'm not afraid of is AI taking what we know so far and secretly turning
itself into a miracle-making god in a matter of weeks. We will have a few
years or decades of coexistence before AI becomes vastly more powerful than
the rest of us, and until then we will have the ability to align our
priorities.

> Human brains run at maybe 100 Hz. AIs built out of silicon could work
> thousands of times faster, doing the same work in much shorter time.

It's not work. It's just thinking. AI can create philosophical theories or
build new optimal JavaScript frameworks at blazing speed and still not make
any progress in physics that would make it more powerful than us. It needs new
data. It needs to run experiments to get anywhere beyond what we have achieved
so far. If we maintain transparency and caution in what AI is given and
allowed to do, then we may safely transition from the current world to the AI
world. Besides, at some point our civilization will encounter others; it's
better to bring our own AI to the table. An alien one might attach much less
sentimental value to us.

> If you went back in time to Blaise Pascal, if you took the right books and
> plans from the future, you could get them to CPUs in mere years.

No, you couldn't. Do you know how long it takes to build a fab with current
infrastructure available? You'd have to bring half of their industry into the
20th century before you could have a CPU.

> If you can predict prices 1% more accurately than anyone else, then you can
> make a huge amount of money in the long run.

Yes. The thing with the stock market is that you can't. It's not because you
are not smarter than the other traders. It's because the interaction between
all the traders creates a pretty much perfectly chaotic random process that no
one can predict.

> Stock prices aren't random. They are determined by real events, mainly how
> much profit the company makes.

Only in the same way that a random generator driven by an unknown algorithm is
"predicted" by its seed.

~~~
Houshalter
>I get your point that from all the experiments we have performed so far and
bothered to write down and digitize, there might be a few tricks left to
squeeze out, but the vast majority of our current progress comes from new
experiments.

I don't think so. We already know the laws of physics to a great degree. An
AI, or even a human, could design all sorts of amazing things without ever
doing a single experiment.

Of course, there's no reason it can't do experiments, also. Once it's free on
the internet, it need only contact some random Joe and bribe/threaten/persuade
them to do its bidding.

>No, you couldn't. Do you know how long it takes to build a fab with current
infrastructure available? You'd have to bring half of their industry into the
20th century before you could have a CPU.

Perhaps. This doesn't seem to be true of most technologies though. You could
go to London in 1800 and show them how to build a modern car or airplane in a
year or so. You could introduce everything from antibiotics to radios
centuries before they were actually invented. The ancient Romans could have
built crude steam engines and bootstrapped industry in a century, if they had
known how.

Building a CPU might require first building multiple other industries to
support it, but that can be done, if the AI lays out in painstaking detail
every step necessary to construct every tool and every machine. And yes, it
would take a lot of labor, but they would be able to do it.

It seems impossible to us because we can't imagine that kind of complexity.
Humans are terrible at managing complex systems. We overlook or forget
details, we don't account for possible mistakes, etc. No single human even
knows every step necessary to build a pencil, because we specialize so much.
An AI could be aware of every detail of every step in the process and manage
it with terrifying efficiency.

------
mwfunk
He's addressing one aspect of AI that can be disturbing to some people,
specifically becoming reliant on the expertise of others in your day-to-day
life and thus becoming less independent. If your biggest issue with the
concept of AI assistants is that they might have insights about you that you
yourself don't have, I could see this article making someone feel better about
it.

For me at least, that's not the primary fear. I don't fear people becoming
dependent on AI. Rather, I fear people misusing the information that AI
reveals about others. I don't feel uncomfortable with (for example) an AI
observing my media habits and using that information to make recommendations
for other media.

However, I would feel uncomfortable if someone with access to that AI then fed
those insights into some other AI that I wasn't aware of for much more
nefarious purposes, such as profiling me to try to quantify my loyalty to the
government.

That's just one example. Even if I had absolute faith in my government, bad
actors could misuse that information to do other things, like determine which
people would be most likely to take on debt on unfavorable terms (so they
could be sent a 20% APR preapproved credit card, naturally!), or which people
would be most likely to want to help a troubled Nigerian prince who just needs
a place to temporarily park a bunch of money.

I'm not trying to fearmonger and I'm actually very excited for the future of
AI. I just don't think this piece is addressing the deeper fears people have
about the technology.

~~~
TheOtherHobbes
The problem with habit reinforcement is that you've simply created a feedback
loop.

"You like [music type]? Here are more bands/artists of [music type]."

It looks like an innocent service. But it's devastating to real exploration,
because it makes it much less likely you'll ever discover Band/Artist Z whom
you'd never normally listen to but love anyway.

This is why thoughtless customer profiling is a dumb idea, and certainly not
the surefire insta-profit marketing panacea it's sometimes supposed to be.

It has _some_ uses, but you're reducing customers to stimulus/response robots
with a limited behavioural repertoire, and that's an excellent way to miss a
lot of opportunities.

~~~
grahamburger
> The problem with habit reinforcement is that you've simply created a
> feedback loop.

> "You like [music type]? Here are more bands/artists of [music type]."

So far I've found the opposite. Pandora has exposed me to music that I never
would have sought out on my own. In fact I can think of several artists that I
would have judged by their cover, so to speak, and never given a chance even
if I had stumbled on them on my own, instead of coming to awareness mid-song
that "I don't know what this is, but I kind of like it."

My wife teases me about this a little bit when she finds me listening to music
that she says isn't 'me' or doesn't seem like something I'd like. She's right,
but I guess Pandora knows my tastes better than she does (and better than I
do, to be fair.)

~~~
jschwartzi
Contrast that with Spotify, which, when given a song or artist that I like,
goes out of its way to recommend music that is superficially similar but that
I hate. Or how it can recommend 80 songs to me a week based on things I have
listened to and end up recommending nothing that I like.

~~~
lswainemoore
Huh. I've actually found the "Discover Weekly" feature to be remarkably good
at picking music that I like. I'd say I affirmatively like about half to
three-quarters of the songs per week, with only a couple that I find myself
skipping.

I wonder if it's better at certain genres than others.

~~~
cdcarter
> I wonder if it's better at certain genres than others.

The Discover Weekly algorithm is, as I understand it, based on what songs have
been added to other users' playlists. So if you're primarily listening to a
genre that has a lot of intense people making carefully curated playlists,
you're gonna get a better Discover Weekly.
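A minimal sketch of that playlist co-occurrence idea (the playlists, song
names, and scoring below are entirely hypothetical; the real Discover Weekly
pipeline is far more involved): rank unseen songs by how often they appear in
other users' playlists alongside songs you already like.

```python
from collections import Counter

# Hypothetical user playlists (the real system has millions).
playlists = [
    ["song_a", "song_b", "song_c"],
    ["song_a", "song_c", "song_d"],
    ["song_b", "song_e"],
    ["song_a", "song_d"],
]

def recommend(liked, playlists, k=2):
    """Rank songs not yet liked by playlist co-occurrence with liked songs."""
    scores = Counter()
    for pl in playlists:
        overlap = len(liked & set(pl))
        if overlap:  # this playlist shares taste with the user
            for song in pl:
                if song not in liked:
                    scores[song] += overlap
    return [song for song, _ in scores.most_common(k)]

print(recommend({"song_a"}, playlists))  # songs that co-occur with song_a
```

This also makes the parent comment's point visible: genres whose fans curate
many dense playlists produce more co-occurrence signal, hence better
recommendations.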

------
JumpCrisscross
> _I read once that human brains began shrinking about 10 thousand years ago
> and are now as much as 15% smaller than they were then._

Timeline's a bit off, but the science is still a shocker:

> _Over the past 20,000 years, the average volume of the human male brain has
> decreased from 1,500 cubic centimeters to 1,350 cc, losing a chunk the size
> of a tennis ball. The female brain has shrunk by about the same proportion._

[http://discovermagazine.com/2010/sep/25-modern-humans-
smart-...](http://discovermagazine.com/2010/sep/25-modern-humans-smart-why-
brain-shrinking)

~~~
tambourine_man
I find that scary. A huge collection of specialists makes for a pretty fragile
species.

~~~
jonknee
Specializing enabled huge advances; everyone having to do everything is
actually quite fragile, because it can only support a small population.

~~~
unusximmortalis
I think a parallel can be drawn with centralized vs. decentralized networks.
Which one is better? I think the stronger one for the species is a combination
of the two.

~~~
majewsky
There are different types of decentralized networks. Not every such network
has totally equal nodes; they may still serve different roles. The network is
still decentralized as long as sufficiently many nodes are servicing each
role.

------
sixQuarks
I don't see the correlation. The author is talking more about systems. The
difference with AI is that it will have all of the knowledge within a system
and thus be able to make incredible connections between disparate fields of
knowledge.

On a separate note, I had an interesting thought about AI as I was looking at
a cool architectural photo. I would love to design a building, but I don't
know a thing about it. It would take me years just to learn all the details.
Then, I thought, one use of AI may be to connect with our brains and make
learning easier. Imagine if we could instantly learn all the fundamentals of
architecture, then use our creativity to create our own designs. It's a mix of
AI and human creativity; neither one replaces the other.

~~~
mastazi
> The difference with AI is that it will have all of the knowledge within a
> system

If you asked me to come up with 5 different definitions of AI, the concept of
"all of the knowledge" wouldn't really be there. I agree on the "incredible
connections" part, though.

~~~
sixQuarks
"All of the knowledge" is a huge part of AI. The bigger the datasets, the
better the results you get from machine learning, and it's a trivial thing for
AI to absorb huge amounts of information. What humans find difficult,
computers can do easily, and usually vice versa for stuff like pattern
recognition and creativity.

~~~
mastazi
> The bigger the datasets, the better results you have

Wouldn't you agree that the ability to achieve relatively good results with
relatively small datasets is one of the main reasons for the current ML boom?

> it's a trivial thing for AI to absorb huge amounts of information

Of course, I agree on that. But I also note that "traditional" (non-ML)
algorithms can digest big data too, so I don't see that ability as the
defining characteristic of the current wave of ML-related technologies.

------
jacquesm
Echoes of 'I, Pencil' in there:

[http://www.econlib.org/library/Essays/rdPncl1.html](http://www.econlib.org/library/Essays/rdPncl1.html)

Some more background about the brain shrinkage referred to in the article:

[http://www.scientificamerican.com/article/why-have-our-
brain...](http://www.scientificamerican.com/article/why-have-our-brains-
started-to-shrink/)

~~~
unknown_apostle
Yep, "I, Pencil" came to my mind as well. But opaqueness is the only thing
that AI shares with the global market (or catallaxy, as the Austrians would
call it).

What Brian Eno is missing is that global markets are created by billions of
human minds on the basis of a constant process of trial and error (to the
extent government allows them to try and fail). It's not just error tolerant.
It's constantly making a profit by improving and correcting errors. And it's
been around for thousands of years.

Whereas computers have been around for less than a century and are programmed
by a few people, and AI may not come with "common sense" or error tolerance.
To deliver yourself to that seems unacceptable to me.

~~~
jacquesm
And it's not as if we didn't have pencils before economies of scale kicked in
either; the invention of the pencil (1564) pre-dates much of the mechanisms
described in 'I, Pencil', so at one point it was definitely possible to know
the entirety of pencil-making.

Even so it serves as a very graphic reminder of how interconnected all of
humanity and industry is at this point.

[http://www.enchantedlearning.com/inventors/page/p/pencil.sht...](http://www.enchantedlearning.com/inventors/page/p/pencil.shtml)

------
campground
This is similar to a thought I've had for some time now: that artificial
intelligence is most likely to emerge - rather than be explicitly designed -
from our increasingly complex, interconnected, self-regulating systems and
institutions. We will be no more aware of it, or able to converse with it,
than the bacteria in our guts are aware of us. Also, "artificial
intelligence" is the wrong term. We already have artificial intelligence.
What we are talking about is a new, higher form of real intelligence.

~~~
at-fates-hands
>> What we are talking about is a new, higher form of real intelligence.

Which, if the author is right and our brains have already been shrinking,
means that within a few hundred years there won't be any humans left - just
"higher intelligence".

I've never understood people's naivety in thinking we're always going to be
at the top of the evolutionary ladder. At some point we will be replaced, and
humans in their current form will cease to exist in the not-too-distant
future.

~~~
icebraining
_Which, if the author is right and our brains have already been shrinking,
means that within a few hundred years there won't be any humans left - just
"higher intelligence"._

No, it really doesn't. Even if the extrapolation were correct - which is
dubious at best - losing 15% of brain volume over _10,000_ years certainly
wouldn't mean "no humans within a few hundred years".

------
dredmorbius
Brian Eno is a skilled musician. He's a lousy economic, systems, and AI
theorist.

The Edge is revealing itself, more and more, to be a forum in which people of
prominence hold forth on subjects they've no particular qualifications or
grounds to discuss (something which _never_ happens elsewhere on the
Internet, of course </s>).

Complex, highly interdependent systems exhibit fragility, nonlinear
transitions, and multiple optima - some not reachable, some local and highly
persistent but undesirable.

Tossing up one's hands and declaring that all shall be as God / Allah / The
Great Spirit / FSM wills it abandons all sense of agency.

The prospect of a global collapse of systems concerns a great many people,
and for much the same reason AI does: the mechanisms, logic, interactions,
limits, and consequences aren't clear. See David Korowicz's "Trade-Off".

[http://www.feasta.org/2012/06/17/trade-off-financial-
system-...](http://www.feasta.org/2012/06/17/trade-off-financial-system-
supply-chain-cross-contagion-a-study-in-global-systemic-collapse/)

As for technological unemployment, that's been a consideration for over 200
years. You'll find strong treatments from J.S. Mill, and commentary from
Abraham Lincoln.

Some modern sources, referencing those:

Technology in Society

1984, Vol.6(4):263–284, doi:10.1016/0160-791X(84)90022-8 "High technology and
job loss", Russell W. Rumberger
[http://31.184.194.81/10.1016/0160-791X(84)90022-8](http://31.184.194.81/10.1016/0160-791X\(84\)90022-8)

Robert Struble Jr, (1993) "Towards a Structural Solution to Unemployment",
International Journal of Social Economics, Vol. 20 Iss: 11, pp.15 - 26

[http://31.184.194.81/http://dx.doi.org/10.1108/0306829931004...](http://31.184.194.81/http://dx.doi.org/10.1108/03068299310046063)

~~~
bogomipz
This is one of the most elitist comments I've read on HN.

What are your special qualifications that allow you to say who is and who is
not qualified to comment on a topical issue?

I'm guessing from your comment calling him a skilled musician that you aren't
actually familiar with Brian Eno at all. He's an unskilled musician, by his
own admission: he's not technically proficient on any instrument. He's quite
the technologist, though.

~~~
dredmorbius
Space alien cats don't need qualifications. Nor do they claim them.

As others have noted, Eno's piece is a poor rewrite of _I, Pencil_, itself a
poorly reasoned propaganda piece.

Crooked Timber has an excellent deconstruction of that:
[http://crookedtimber.org/2011/04/16/i-pencil-a-product-of-
th...](http://crookedtimber.org/2011/04/16/i-pencil-a-product-of-the-mixed-
economy/)

The one at Freakonomics is weaker sauce but also pretty biting:

[http://freakonomics.com/podcast/i-pencil/](http://freakonomics.com/podcast/i-pencil/)

I've pointed to several sources discussing technological unemployment, and
more critically _the long history of discussion of that topic from 1800
forward in mainstream and heterodox economics, as well as political and other
literature_. None of which Eno's careless handwave points at.

If informed, sourced, intelligent, and specifically refutable comment is
elitist, I'll take it.

You've also focused on the irrelevant element of my argument, though if
anything, you're also undercutting your own criticism of me. Eno's skill is
evident in his body of work. Which I have listened to, own some of, and rather
like. His self-description is at best inaccurate.

Your assumptions as to my familiarity or otherwise with Eno's works place you
on the rather precarious precipice of a domain in which I am and insist
privileged expertise of obvious nature.

Cheers.

~~~
bogomipz
"Your assumptions as to my familiarity or otherwise with Eno's works place you
on the rather precarious precipice of a domain in which I am and insist
privileged expertise of obvious nature."

That is one poorly formed run-on sentence. What does that even mean? It
sounds as though it came out of a babble generator. Is that an AI joke?

"You've also focused on the irrelevant element of my argument, though if
anything, you're also undercutting your own criticism of me."

Really? So your inaccuracy is irrelevant? How convenient for you. And no,
I've not focused on that; I focused on the elitism, and now on the strange
sense of entitlement you seem to reserve for yourself.

Sorry, I don't give a toss about your brand of pop economics. You should get
over yourself. I am not alone in this view, either:

"We and others have noted a discouraging tendency in the Freakonomics body of
work to present speculative or even erroneous claims with an air of
certainty."

Source: [http://www.americanscientist.org/issues/pub/freakonomics-
wha...](http://www.americanscientist.org/issues/pub/freakonomics-what-went-
wrong)

The fact that you're all bent out of shape over content on the Edge and yet
you support your infallibility by citing a pop entertainment show on NPR is
kind of laughable.

~~~
dredmorbius
It means I know my life, my experiences, my tastes, and my thoughts far
better than you do.

Don't even try to claim primacy of such knowledge. Not of me, not of anyone.

(Now, if someone's saying one thing and doing another, point that out. But a
person owns, and has privileged access to, what rattles within their own
skull.)

The economics I cited and referenced is most decidedly _not_ pop. I've got my
own thoughts on some matters; those aren't what I'm presenting here.

You're also now going all ad hominem on Freakonomics. I didn't say that
Freakonomics is right. I'm presenting it as a valid argument, in place of
constructing a similar one from whole cloth for your entertainment. The point
isn't that either source is an authority, but that I've read and agree with
the reasoning.

And just hang onto that cloth you're about to hand me; I've little need of it.

------
ebbv
This is a cute but ultimately wrong argument. It's like saying we live with a
black hole because there's one at the center of our galaxy.

Yes, you could stretch the term "AI" to say that a large system made up of
people and simple machines is some form of AI, but that stretches it to the
point of meaninglessness.

When people talk about real artificial intelligence they usually mean general
AI, or at least a specific AI capable of human-like decision making. Not
computer chess, and not the auto-stop on a fuel pump.

Stretching "AI" to the point of ludicrousness like this seems to serve only
to shut down discussion of human-like AI. Which is not a noble goal.

Discussing human-like AI is important, especially before we figure it out. It
would have been nice if people in the early 1900s had spent time thinking
about the consequences of putting so much carbon into the atmosphere before
they did it. Let's not be another generation that showed far less forethought
than it could have.

------
josephpmay
This is closely related to the economic concept of the "Invisible Hand"[0],
and it also explains why planned economies never seem to work, no matter how
well-intentioned they are (see the current state of Venezuela - although
there, as in almost every centrally planned economy, greed and corruption
were the prevailing forces of the ruling agency).

[0]
[https://en.wikipedia.org/wiki/Invisible_hand](https://en.wikipedia.org/wiki/Invisible_hand)

~~~
dredmorbius
The "invisible hand" metaphor used by Adam Smith _is not an explanatory
mechanism_ , and if anything _is an admission that the specific mechanism isn
't understood_. In use at the time and earlier, it had the sense of "the
invisible hand of Providence" (or God). Though Smith, as Hume, was almost
certainly what we'd now call an athiest.

He used the term three times, in three different books: _The Theory of Moral
Sentiments_, then _An Inquiry Into the Nature and Causes of the Wealth of
Nations_, and finally in a book on the history of astronomy. It's clear from
context that Smith _wasn't_ imbuing markets especially with invisible
handedness, but using a common phrase of the age.

The _modern_ invention of this metaphor dates to the 1930s and 1940s. It was
first used in its modern sense by Paul Samuelson, and latched onto like a
desperate child by the budding organs of the Mont Pelerin Society, better
known as the von Mises / Hayek / Friedman / Rothbardian variant of
Libertarian theology. Its _popular_ significance grew after the 1963
publication of _Adam Smith's Invisible Hand_, a compilation of modern
economic fallacies miscast as truths, by Regnery Press, a Libertarian
propaganda mill. You can trace the evolution of the term via Google's Ngram
viewer.

One of the more notable "quotations" from Smith's _Wealth of Nations_

Economic historian Gavin Kennedy has traced this history in depth, published
multiple papers on it, and writes a blog, "Adam Smith's Lost Legacy", which I
highly recommend.

(You'll also find some discussion of the myth that's developed around the
term in the very Wikipedia article you've linked.)

[https://econjwatch.org/articles/adam-smith-and-the-
invisible...](https://econjwatch.org/articles/adam-smith-and-the-invisible-
hand-from-metaphor-to-myth)

[http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1781536](http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1781536)

[https://adamsmithslostlegacy.blogspot.com/](https://adamsmithslostlegacy.blogspot.com/)

My own recommendation is that people actually _read Adam Smith_ to see what he
wrote and meant:
[https://www.reddit.com/r/dredmorbius/comments/4cyroa/adam_sm...](https://www.reddit.com/r/dredmorbius/comments/4cyroa/adam_smiths_lost_legacy_or_why_you_should_read/)

More on the Mont Pelerin Society:
[https://en.m.wikipedia.org/wiki/Mont_Pelerin_Society](https://en.m.wikipedia.org/wiki/Mont_Pelerin_Society)

~~~
pastProlog
Also, Adam Smith's example of the invisible hand was of a mechanism that
worked against so-called free trade and in favour of protectionist tariffs.
The people you mentioned have twisted Smith's words into the exact opposite
meaning - they say Smith's invisible hand sweeps away protectionist tariffs
and allows international free trade. He said the complete opposite of what
they say he said.

~~~
dredmorbius
Thanks, I was going to mention the context of use but needed to go back to
confirm what the usage was.

I _am_ aware that his _one_ use of "free market" was in a passage describing
protectionist trade practices favouring the woolens manufacturing industry in
England: keeping raw wool import costs low and preventing the import of
finished goods, thereby maximising the revenue-cost differential, which is to
say, profits.

------
johnchristopher
Right. Let's again substitute the meaning of a word or a concept for another
one and start up another dictionary debate.

~~~
RIMR
I think you're missing the point.

The idea is that artificial intelligence will spare us from having to
understand a lot of things in order to accomplish them, but that effect is no
different from that of any other technological innovation we have made.

The definition of AI has also changed plenty over the sixty-something years
we've been developing it.

A couple of good Wiki links on the subject:
[https://en.wikipedia.org/wiki/History_of_artificial_intellig...](https://en.wikipedia.org/wiki/History_of_artificial_intelligence)
[https://en.wikipedia.org/wiki/AI_winter](https://en.wikipedia.org/wiki/AI_winter)

------
peter303
Some concepts carry an implied prefix - "digital, computer-aided" - to
distinguish them from related concepts. We all know it's the computer version
we're intensely interested in.

Another example is "virtual reality", which has existed since cavemen made up
campfire stories and drew pictures on walls.

------
Moshe_Silnorin
What people are worried about is artificial systems that are of higher
capability than all, or the vast majority of, humans. If the principal-agent
problem is solved by those that create them, and they can be cheaply
reproduced, then the value of human wages falls below subsistence. This
likely isn't a big problem, as economic growth rates would be so unimaginably
high in this scenario (with doubling times on the order of months) that even
significantly less wealth distribution than occurs today could easily cover a
basic income. This is why I'm not nearly as worried about technological
unemployment. And since most people have at least some capital, even without
redistribution many people would be fine.
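(A back-of-the-envelope sketch of what "doubling times on the order of
months" would imply, purely illustrative and not from the comment itself; the
helper name below is hypothetical: an economy that doubles every m months
grows by a factor of 2^(12/m) per year.)

```python
# Annual growth implied by a given doubling time.
# An economy that doubles every `doubling_months` months grows by a
# factor of 2 ** (12 / doubling_months) each year.

def annual_growth_factor(doubling_months: float) -> float:
    return 2 ** (12 / doubling_months)

print(annual_growth_factor(12.0))  # doubling yearly -> 2x per year
print(annual_growth_factor(3.0))   # doubling quarterly -> 16x per year
```

Even quarterly doubling means a 16x economy every year, which is the scale
the scenario is gesturing at.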

If we can't solve the principal-agent problem, we will have introduced self-
replicating entities of much higher intelligence into our environment. As
most possible utility functions require resources to pursue, we will be
competing for resources with more intelligent entities - a competition we
will lose.

So solving the principal-agent problem is a big issue. Comparing human-level
AI to markets is like megafauna saying, "We've been competing with other
mammals for millions of years; man is only a difference of degree rather than
of kind. We'll be fine."

~~~
pygy_
For your first scenario, economic growth would have to be fueled by a
proportional growth in energy consumption.

Fossil and nuclear fuels exist in finite quantities, and the rate at which
sunlight hits the Earth is more or less constant.

Most people would still starve, as fields would be repurposed to produce fuel.

~~~
Moshe_Silnorin
Maybe it would only last a few years. But once we can convert capital directly
into labour in a manner that scales, we will get insane amounts of economic
growth.

~~~
ffwd
I understand what you're saying, but I think it's misleading to use the term
"economic" growth. There would be potential productivity growth, but not
necessarily economic growth, since the economy is a human endeavour. AIs
don't need money and won't contribute to it, and the machines would only be
"allowed" to produce what customers are able to buy (so, essentially, the
current economy) - unless they produced lots of extra stuff just for the hell
of it.

To have all that growth I think you'd need to decouple the machines from the
economy, but then distributing natural resources would become a problem, and
you're back to central planning - and that's a whole other can of worms.

------
dd36
This is similar to Kevin Kelly's technium theory:
[http://kk.org/thetechnium/](http://kk.org/thetechnium/)

Technological innovation is an extension of evolution. Whether it's the
invention of the alphabet or of the computer, we are part of this system's
continuing evolution.

------
Noseshine
While taking a neuroscience course it occurred to me that one could draw an
analogy between neurons and humans (depending on what one wants to show - the
usefulness of a model is always limited by its intended use): the system's
outcome is not the sum of what each neuron "knows", and each individual
neuron is really "ignorant" and "stupid".

For example, why are people bothered that some people are into conspiracy
theories? Or that some see dangers everywhere, while for others everything
(and everyone) is good? Maybe that's their role in the "humanity computer"!
Some neurons' (humans') task is to be extra-paranoid so that the majority
don't have to be, and others are the opposite: nothing bothers them. What
seems "crazy" is, when looked at from a higher plane, quite possibly a very
reasonable organisation. Maybe "humanity" does not - _should not_ - make
sense on an individual level (i.e. with every single human being "sensible"
and "reasonable"), but only on the level of "humanity" itself. Why this
obsession that everybody has to agree, where people who don't are vilified?
If all the neurons in a brain were to agree, you'd have a very dysfunctional
brain. What it needs are complex connections and (feedback) loops that
enhance and suppress output depending on the _overall_ input. "Overall" is
important - not "what an individual (human or neuron) sees", but the sum of
all inputs into the system. It does not have to make sense on an individual
level.

I like how Sherlock Holmes says it (short, 30 seconds):
[https://www.youtube.com/watch?v=HuIMmwJbnco](https://www.youtube.com/watch?v=HuIMmwJbnco)

The attempt to understand "humanity", and what's going on on this planet, at
the level of an individual human is doomed to fail. The most you can get is a
"feeling" that you get it - but if you do, it's wrong, and that's really bad.
If you also happen to have some "power" (individuals having too much power is
a bad construct), the outcome can be disastrous.

The things the linked article talks about, I used to mention in the context
of "magic" - you know, what fantasy books and movies are all about. They have
"magic items" whose main property is that nobody knows what they actually
are, how they work, or where they come from. Sounds familiar? I don't even
have to look at an iPhone.

This wonderful story sums it up very well, I think (and please ignore the
object it uses - here, Coke. It's _not_ about Coke, so there's no need to
discuss the merits of overpriced, unhealthy sugar-water):
[https://medium.com/@kevin_ashton/what-coke-
contains-221d4499...](https://medium.com/@kevin_ashton/what-coke-
contains-221d449929ef)

Quote:

> _The number of individuals who know how to make a can of Coke is zero. The
> number of individual nations that could produce a can of Coke is zero. This
> famously American product is not American at all. Invention and creation is
> something we are all in together. Modern tool chains are so long and complex
> that they bind us into one people and one planet. They are not only chains
> of tools, they are also chains of minds: local and foreign, ancient and
> modern, living and dead — the result of disparate invention and intelligence
> distributed over time and space._

And look, I don't even have to explain my thoughts myself! Thoughts which, if
I had been born in a forest, away from thousands of years of human experience
and exchange with other humans across time and space, I would probably never
have developed in the first place. Instead I can use a few words of "glue" to
link to pieces written by others - pieces that they themselves owe to others.

Think about that the next time a discussion about high-earning people claims
"they earn it"! Do they? Back to the example of being born alone in the
forest: if someone developed a Facebook or a Tesla or a Dell computer from
such roots, _then_ I'd agree they deserve billions. For clarification: I'm
not talking about the 1%, I'm talking about the 0.01% (The Economist:
[http://www.economist.com/news/finance-and-
economics/21631129...](http://www.economist.com/news/finance-and-
economics/21631129-it-001-who-are-really-getting-ahead-america-forget-1;) Some
charts in a short video:
[https://www.youtube.com/watch?v=QPKKQnijnsM](https://www.youtube.com/watch?v=QPKKQnijnsM))

~~~
tomrod
> For example, why are people bothered that there are people who are into
> conspiracy theories?

I upvoted, because you raise a good general idea if we consider all of
humanity as holonic parts of a larger system. However, in the case of
conspiracy theories, most individuals just find their time wasted.

~~~
Noseshine
I refer you to Nassim Nicholas Taleb and Black Swan events. Or to fire
insurance. Yes, it probably _is_ wasted. But that is the point: employ a few
(few!) neurons on looking out for the "crazy", the unlikely, the improbable,
and hope that it's all wasted.

But just as with insurance, when it _is_ wasted you say "thank god" (I'm an
atheist, but I can't think of an atheist phrase :-) ); you don't say "what a
waste" (that I paid for insurance). You can be sure you have quite a number
of neurons that, if you knew what they did, you would consider "waste".

_Also_, the point is for you, as someone not into that stuff, to _ignore
them_. Not like some of my Facebook friends (former colleagues) who seem to
spend most of their day hunting for what they think are the most stupid
examples humanity can create. I'm not so sure it isn't _them_ who are really
being stupid. If they just ignored the "crazy", nobody would even know such
people exist. Instead, more people seem intent on bringing the most obscure
ideas and idiocies some human somewhere developed into a light they would
otherwise never have had.

------
bogomipz
I love Eno. Such an influential composer, and yet he doesn't consider himself
a musician in the least. He's used his own ignorance of music to spectacular
effect. This is a piece the Telegraph did on him a while ago; it's worth
reading:

[http://www.telegraph.co.uk/music/artists/how-brian-eno-
creat...](http://www.telegraph.co.uk/music/artists/how-brian-eno-created-a-
quiet-revolution-in-music/)

~~~
fuzzfactor
Also the author of "The Microsoft Sound", the WAV that played on Windows
startup beginning in 1995.

At the time it was probably the tune played most often per day, and it
remained so for a number of years.

------
dredmorbius
For those interested in what people who actually know whereof they speak have
to say on the topic of technological unemployment, and the history of
economic discussion of it, a solid outline of discussions from Ricardo, Mill,
McCulloch, and Neisser is included here:

[https://econospeak.blogspot.com/2014/04/the-technology-
trap-...](https://econospeak.blogspot.com/2014/04/the-technology-trap-
permanent.html)

~~~
ZenoArrow
> "For those interested in what people who actually know whereof they speak
> have to say on the topic"

[https://en.wikipedia.org/wiki/Argument_from_authority](https://en.wikipedia.org/wiki/Argument_from_authority)

I've no problem with learning from others, but pointing to a select handful of
people who 'actually know whereof they speak' is not going to help explore the
field fully.

~~~
dredmorbius
Pointing to expertise is not argument from authority.

~~~
ZenoArrow
It depends. In this case, it was the framing that made it an argument from
authority, namely "in what people who actually know whereof they speak". The
implication is that the sources of truth are limited to a select few people.

~~~
dredmorbius
Argument from authority is "X is true because Y says it is".

That's not what I claimed.

This is tedious.

------
benkarst
Thoughtful article. Eno refers to hidden processes in systems as AI, hinting
at their similarity.

I assume there are already digital systems in place that monitor the "hidden"
processes that make, for example, a chicken sandwich
([https://www.youtube.com/watch?v=URvWSsAgtJE](https://www.youtube.com/watch?v=URvWSsAgtJE)).

Such systems could monitor data points and possibly forecast the price of a
chicken sandwich (or of a great many other things).

------
nthcolumn
The point is that this AI is not separate from humanity. He accepts that we
ask, we get: it serves us. People are worried about the non-we. Global
civilisation cannot destroy humanity without destroying itself - unless it
creates another, human-autonomous AI. The non-we AI.

------
grondilu
To me that's a stretch. AI is not a carpet under which you can sweep
everything you don't know about how the world works.

------
esalman
True AI would learn and teach itself new things - something a central heating
system, a burner, a car, or a wifi router cannot do.

------
orblivion
I would call that a natural intelligence.

------
cobbzilla
Seems like an issue of semantics - he's taking something most of us would
just call "human culture" and naming it "AI", which is an interesting idea.
But true artificial intelligence involves some amount of self-awareness on
the part of the system.

~~~
RIMR
To be further semantic, one could say you're referring to AI automata, not
just AI.

~~~
ZenoArrow
It's interesting how we conflate self-awareness with true intelligence. It
implies that something cannot be truly intelligent until it creates a
personality to identify with. I'd argue that a better indication of
intelligence is self-directed learning, even if the seed of that desire to
learn is programmed in from an outside source. I don't think it's necessary
for an AI to form a narrative of self.

------
zitterbewegung
I think the article is missing the point of what AI will do by eliminating
jobs in areas like transportation and fast-food preparation. I think once we
get to that point we will have to implement a basic income.

~~~
PeCaN
AI and basic income in the same comment. I almost forgot I was on HN for a
second there.

~~~
timboisvert
A Haskell mention would've completed the HN trifecta.

~~~
brobinson
I think Phoenix/Elixir has dethroned Haskell.

~~~
PeCaN
Or Go or Rust.

But Phoenix/Elixir does seem to be the new thing. Not that I mind, always
happy for the Erlang platform to get more attention.

