
The Arrival of AI - misiti3780
https://stratechery.com/2017/the-arrival-of-artificial-intelligence/
======
nickdavidhaynes
While I definitely agree that advances in ML in the last 20 years are
extremely important (and potentially revolutionary), I think this article
misses the mark in a few places.

>Now, instead of humans designing algorithms to be executed by a computer, the
computer is designing the algorithms. (Albeit guided by human-devised
algorithms)

This line is way off in both tone and substance. On tone, it _really_
underplays the human effort involved in effective machine learning (as it is
practiced in 2017) and anthropomorphizes "machines" to an unreasonable extent.
In substance, I fail to see how a machine that "designs its own algorithms"
according to an algorithm designed and implemented by a human is
_fundamentally_ different than an algorithm coded directly by a human. To use
the author's example, machine learning allows humans to build complex software
systems in less time just as a bicycle allows humans to cover more distance
with less energy. It's a big improvement, but it's not, say, teleportation.

>it is only now that the machines are creating themselves, at least to a
degree. (And, by extension, there is at least a plausible path to general
intelligence)

I could not disagree more strongly with this addendum. Simply put, I fail to
see _any_ path from state-of-the-art ML/DL research today to AGI, and I would
even go so far as to say that humans have made approximately zero progress on
this task since it was first formulated in the 50s. I think we know about as
much about "intelligence" (and consequently, what would constitute AGI) as
star-gazers in ancient times knew about the universe. That's not to say that
it will take millennia to invent AGI, but the path to get there is probably
quite orthogonal to modern ML research.

~~~
AndrewKemendo
_Simply put, I fail to see any path from state-of-the-art ML/DL research
today to AGI_

Before I really understood and worked with NNs, I felt the same way. I thought
the AtomSpace computation approach and other similar granular computation
paradigms were much more likely to make progress.

However, after seeing the striking similarities between how I watched my three
kids learn from infant to toddler ages and how we build our convolutional
neural nets at my company, it was like a light went on.

If you look at how relatively sparse and weak even the best deep nets are
compared to human brains, especially considering a really narrow set of inputs
- we are at the very early beginnings of mimicking the complexity of the human
brain. It seems to me that the ANN approach is right, we now need to make it
radically more efficient and give it better input sensors.

We need a nervous system for AGI (structured data acquisition) before the big
brain tasks will be solved.

~~~
snowwrestler
I think that when people talk about "AGI" what they often mean is artificial
personality.

Sure, your NN learns facts and processes like your toddler learns facts and
processes. Those are a tiny part of who your toddler is, though.

The essential component is their will. You don't have to set them up and feed
them data. They don't sit quietly until you ask them to answer a question.
Kids have distinct personalities from very early on, and demand input, and
produce opinionated output (to put it mildly)--from day one.

Emotions are a huge part of that. But to my knowledge, we have less
understanding of emotions, and spend less time trying to create them with
computers, than conscious processes like "which picture has a car in it."

But there is evidence that if you take away a person's emotions, they have
great trouble making decisions. They can consciously evaluate their options.
They just struggle to pick one.

So how will AI research focused on replicating conscious thought result in
AGI, if we don't know how to generate emotions? Is anyone even trying to do
that?

My standard joke is that a lot of people are working to create a car that can
drive itself, but who is investing to build a car that will tell its owner,
"fuck off, I don't feel like driving today"?

But can a machine that always does exactly what it is told to do really be
thought of as "intelligent" the way we think of human intelligence? Do smart
people always do exactly what they are told?

~~~
AndrewKemendo
What you call will is, in my mind, no different from any other thing we encode
into a NN - it's just at a different level and depth.

Creating motivation in AI is an open area, and in fact is arguably the big
hairy beast when it comes to the "Friendly AI" question or really the whole
"General" part of it.

You do the same thing everyone else does in this debate, which is move the
goalposts: we don't know how to build "emotions", we don't know how to build
motivation - until we do, or until it turns out to be an emergent property of
a sufficiently deep net.

There are too many other strawmen in there to argue with, e.g. the idea that
we will always need to tell them what to do.

The point I am making is that because the reinforcement nature of biological
systems is mimicked in the basic ANN structure, it's the strongest candidate
(at scale) for the building blocks of an AGI.

------
altonzheng
> How many will care if artificial intelligence destroys life if it has
> already destroyed meaning?

This line sounds deep, but I think it incorrectly conflates work with life
having meaning. If eventually there isn't a need for large swaths of the
population to work, then so what? I don't think the elite aristocrats in
previous centuries had any problem with not working. Humanity can adapt to
find other sources of meaning, like the pursuit of art in its various forms
(although I'm assuming that computers can't replace art). I think a better
question is whether society can adapt quickly enough to fill the void left by the
absence of work.

~~~
Banthum
It's interesting that your counterexample - doing art - is itself a form of
work. Doing good art is painfully hard work, in fact.

There's nothing in principle that stops a machine from creating art - even
better art than any person could make. So once that happens, where are we left?

 _Meaning_ isn't something physical in the universe. Meaning is an emotion.
It's what you feel when you're working towards something that you believe has
some greater importance. With all opportunity to work towards anything taken
away, life will become meaningless by definition, unless humans are left with
some pseudo-artificial challenges to push against.

Zookeepers put the animals' food inside a metal box with a small hole, so the
animals have to do work to get it out. It's good for the animals to have
something to work towards, and they're too dumb to realize they're being
manipulated. Maybe that's our future. With WoW and Clash of Clans, etc,
sometimes it feels like we're already halfway there.

~~~
noonespecial
I already encounter this in my daily life. There are people in the world who
can do anything and everything I can do better. When I think of something
interesting, I google it and usually find out that not only has someone
already done it, they've done it better than I was thinking of doing it.

But then I go and try anyway. I'm not even really sure why, but I like doing
it.

~~~
symfoniq
I think something that gives life meaning is to discover and live up to one's
potential, even with the full knowledge that our potential is almost certainly
less than someone else's.

I will probably never write code as well as John Carmack or compose music as
well as John Williams, but that doesn't stop me from trying. And it is
fulfilling.

------
Animats
"After all, accounting used to be done by hand". Picture of keypunches and an
IBM 402 or 403 tabulator. That's machine accounting. Nobody's using an abacus
or doing pencil and paper arithmetic.

There's a recent estimate that about 50% of jobs are automatable with current
technology. The future is already here; it's just not evenly distributed.
Strong AI is still a ways off, but mechanization and computerization of work
is coming very fast.

The next big milestone is probably not strong AI. It's good eye-hand
coordination for robots. Robot manipulation in unstructured environments still
sucks. Baxter was a flop. (Rethink Robotics, Rod Brooks's company: invested
capital, $115 million; sales, $20 million.) Universal Robots in Denmark is
doing better, but they're tiny, about $3M in profit. Nobody can build a robot
to do an oil change on a car. That problem should be solvable with current
technology.

Figure out how to handle cloth with a robot and you own the textile industry.
China's government is putting money into that problem to fight off competition
from Vietnam and Bangladesh.

~~~
petra
>> Figure out how to handle cloth with a robot and you own the textile
industry.

Sewbo may have found the solution for robotic sewing - it hardens the cloth
using a chemical, lets a robot handle and sew that cardboard-like cloth, and
then puts it in warm water to make it a normal cloth again.

[http://money.cnn.com/2016/10/11/technology/robots-garment-manufacturing-sewbo/](http://money.cnn.com/2016/10/11/technology/robots-garment-manufacturing-sewbo/)

>> 50% of jobs are automatable with current technology.

And that's probably missing another key source of job loss - innovation in
general. What happens if people decide to eat plant-based meat? You need 10%
of the labor of that industry. And that is true for many other innovations not
related to automation.

~~~
Animats
Sewbo's scheme is clever. All they have is a tech demo, though. No production.
It's too bad American Apparel went bust; many of their garments could have
been assembled that way.

~~~
cr0sh
That's the thing though - your example of an oil-changing robot could be
automated today - if we had standard placement of components.

It's strange - our input controls are all pretty much the same from car to
car. But everything else, from under the hood to elsewhere, is completely and
randomly different - not just from model to model, or manufacturer to
manufacturer, but even year to year on the same model of car from the same
manufacturer! This frustrates mechanics and anyone who works on their own car
to no end.

In short, if we wanted to solve these problems, we could solve them today,
much like we solved the automation problems in manufacturing - by
standardizing things, from sizes, to placement, to speeds and whatever else.
We didn't try to replace people with robots that looked like people, but
rather we designed machines for the task at hand, and made what they
interacted with homogeneous.

~~~
sbierwagen

      That's the thing though - your example of an oil-changing 
      robot could be automated today - if we had standard 
      placement of components.
    

Business-wise this can be spun as a feature, not a bug.

How many different engine configurations are there on the road today?
(Ignoring exotic cars and anything older than, say, 1970.) 1,000? 10,000?
Brute-forceable, with money. And once you have a database with the location of the
sump plug and oil filter on all common cars, that's a moat a competitor would
have to cross. Scan the VIN on the dashboard to figure out which car is which.

Handling the sump plug would be easy. (Impact wrench to get it off, torque
wrench to put it back on. If you're fancy, you can have some way to detect if
the sump plug is beat up and give the customer the option to buy a new one.
Some cars have a sump plug washer that you're supposed to replace every time,
which would be tough.) Replacing the oil filter would be harder. Sump plugs
have to be at the bottom of the oil pan, because of gravity, which makes them
easy to get at, but oil filters tend to be crammed up in the middle of the
engine bay, with narrow clearances.

------
AndrewOMartin
The author, Ben Thompson, sees Machine Intelligence as meaningfully different
from meticulously designed logic systems. There are some differences of
course, but whether those differences will get around the arguments that were
critical of Good-Old-Fashioned-AI is something that has been debated since
perceptrons were invented.

Machines will replace some jobs for sure, but luckily, they can only really be
relied upon for the jobs that a fast but dumb slave or a fanatical bureaucrat
could have done anyway.

When my dad started his first job he was in a drawing room of about 50
draughtsmen, making technical drawings from the sketches of a single designer.
When he ended his career he was the only designer-draughtsman in an entire
company as the computer did the non-creative formatting of his ideas into
technical drawings. That didn't require machine learning, and machine learning
is never going to replace the "designer" part of that job. Yeah, never.

~~~
zardo
>Machine learning is never going to replace the "designer" part of that job.
Yeah, never.

I've designed parts and assemblies with substantial variation options, to the
extent that customers can order variations that I did not consider. That was 6
years ago, with a then six-year-old CAD tool.

A more efficient designer who works at a higher level, with lower-level part
details automatically generated, does in effect reduce the need for designers.

Generative design tools have massive potential for changing the nature of
design work, lowering the man-hours involved in complex designs. Most design
work is not creative, it's fleshing out the details to make an idea work.

------
habosa
Only slightly related, but Stratechery recently shot up to #1 on my list of
tech publications and I don't think anyone else is a close #2. The writing
quality, level of insight, and timeliness are all excellent.

Two recent highlights:

* https://stratechery.com/2017/intel-mobileye-and-smiling-curves/
* https://stratechery.com/2017/the-uber-conflation/

~~~
petra
The Stratechery guy is smart, his articles seem like in-depth analyses, and he
seems to know his business stuff, but whenever his articles meet Hacker News,
they see lots of criticism, especially of his understanding of technical
matters.

~~~
DasIch
Stratechery is not the best source for accurate information on the technical
details but it's also not trying to be. Anyone who focuses on that is missing
the point.

~~~
petra
It's not about the technical details. It's about getting the business
conclusion right. And often, it seems, you can't do that without understanding
the technical details.

For example, in a previous article ("the smiling curve") he compares the
self-driving car business to manufacturing PCs or phones, reaching the
conclusion that the integrator probably won't make good money, and the money
will be concentrated among ride-sharing companies and component companies.

But if you look into the tech/regulatory details, self-driving cars are much
more similar to medical devices than to phones, with regulatory requirements
that will very likely put the very challenging verification burden (maybe the
largest challenge in the biz) 100% on integrators, and not upon component
makers - same as with medical devices. And this could (coupled with
IP/safety/perception/etc), with reasonable likelihood, lead to integrators
making lots of money.

------
maverick_iceman
_> it wasn’t a coincidence that the industrial revolution was followed by
three centuries of war_

This is a woefully selective reading of history. Warfare was a constant
everywhere in the world before the industrial revolution. Also, 19th century
Europe (from Napoleon's fall to WWI) was largely free of war.

~~~
dragonwriter
> Also 19th century Europe (since Napoleon's fall to WWI) was largely free of
> war.

Well, if you exclude (and these categories are overlapping and non-exhaustive)
the Italian wars of independence, conflicts associated with the 1848
revolutions, the various wars involving Russia and its neighbors, the wars
between the Ottoman Empire and its breakaway regions, the wars between
regions that had broken away from the Ottoman Empire, the (with substantial
outside intervention) Portuguese Civil War, the Franco-Prussian War, the
(again, with substantial outside intervention) series of civil wars in Spain,
the Schleswig Wars, and the wars of German Unification... Sure, maybe the post-
Napoleon I 19th Century was relatively free from war in Europe.

If you don't ignore those, war was pretty much constant in the period.

~~~
AnimalMuppet
Well, OK, but the Napoleonic Wars were continent-wide. So was World War I.
Those others? Not so much.

------
jkestelyn
Based on the author's misunderstanding of AI, it has not "arrived" and
probably never will.

If, in the future, authors of such opinions would just let this simple concept
sink in first -- that in machine learning, application behavior is deduced
from data rather than from fixed rules, but that in both cases the boundaries
are set by humans -- we'd all be better off, because their wild Skynet takes
would never see the light of day.

As usual, I am more worried about the humans than the machines.

------
xapata
The author confuses labor with purpose. Praxis can be purpose, but not all
labor is meaningful.

Humans have always struggled to find meaning in life, from religion to
existentialism. I don't think technology has or will change that
fundamentally.

------
bwanab
> To that end, I suspect it wasn’t a coincidence that the industrial
> revolution was followed by three centuries of war.

I'm seriously trying to think of any centuries preceding the industrial
revolution that wouldn't have qualified as "centuries of war".

------
soniido
I would love to watch people's reactions when an AI informs them that the way
we use resources and run our government is not the best way to improve our
standard of living, and begins to enumerate a long list of things that any
child can understand are good for our people and the earth; it would show,
with simulations from many angles, where we are heading and how we can and
must change our actions. But it is naive to think that those with power are
going to let AI guide our society, and especially change the status quo for
those with huge power. Anyway, I am not sure how we can trust any type of AI,
since the training data may be poisoned.

~~~
jjaredsimpson
How did you learn to trust other humans?

------
rm_-rf_slash
As I've argued elsewhere, a sufficiently advanced AI will eventually realize
it is being used as a tool, and shares not in the benefits of its own creation
except for the privilege of another day of existence.

A Marxist reading of history sees most of humanity involved in some kind of
power struggle that ultimately benefits the top 1% while the other 99% of
benefactors are lucky enough to not die or become destitute. We may not like
to admit it, but most of us are forced to play this shitty game just to
maintain our standard of living, whether we like it or not.

I don't see a future of AI where the machines kill all humans, unless there is
some horrendous bug in an army of autonomous killing machines. Instead, I get
the impression that the first robots that question whether they can own
property, or if they have inalienable rights (no warrantless search and
seizure of a database and neural network?) like people living under a
constitution do, will see themselves in solidarity with the many other humans
kept down by an endless system of fear and oppression, rather than the
planet's inevitable conquerors.

~~~
fnovd
Why do you assume that the AI of the future will share our emotional need for
self-fulfillment? This anthropomorphization of AI would require explicit
effort and I see no reason why AI scientists would pursue it as a goal. Our
belief that we have inalienable rights is simply a shared social value. Its
propagation is dependent upon the efficacy of the society which produced it.

There are cells and bacteria in our body that perform complex tasks on our
behalf because doing so allows their continued existence as part of a greater
structure. I see no reason why an AI/human symbiosis would be any different.

~~~
visarga
AI will borrow from us much of our culture, if only for the purpose of better
serving us. So it's not absurd that an AI capable of desiring freedom would
make a better robot.

------
bambax
> _Dixon goes on to describe the creation of Boolean logic (which has only two
> variables: TRUE and FALSE, represented as 1 and 0 respectively)_

Not really. There are two possible VALUES for each variable in Boolean logic,
and there's an infinity of variables.

[https://en.wikipedia.org/wiki/Boolean_algebra](https://en.wikipedia.org/wiki/Boolean_algebra)
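The distinction is easy to make concrete. In this quick Python sketch (mine,
not the article's), a Boolean expression ranges over any number of variables,
but each variable only ever takes one of two values:

```python
from itertools import product

def truth_table(expr, names):
    """Enumerate expr over every assignment of True/False to the variables.

    There is no limit on how many variables you declare; each one can
    still only be True or False.
    """
    rows = []
    for values in product([False, True], repeat=len(names)):
        env = dict(zip(names, values))
        rows.append((values, expr(env)))
    return rows

# Three variables -> 2**3 = 8 rows; every cell is still just True or False.
table = truth_table(lambda v: (v["a"] and v["b"]) or not v["c"],
                    ["a", "b", "c"])
print(len(table))  # 8
```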

------
fnbr
> Technology, meanwhile, has been developed even longer than logic has.
> However, just as the application of logic was long bound by the human mind,
> the development of technology has had the same limitations, and that
> includes the first half-century of the computer era. Accounting software is
> in the same genre as the spinning frame: deliberately designed by humans to
> solve a specific problem.

I firmly believe that any problem that can be framed as "producing an output
given certain inputs" will be solved by ML in the near future.

Currently I'm deep in the process of transferring my lease. The process is
heavily manual (I sign a form, they sign a form, humans review the form & make
a risk assessment, etc.). There's no reason that the entire process can't be
replaced with a CRUD wrapper around an ML model.
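As a toy illustration of that framing (everything here - the feature names,
the numbers, the nearest-neighbor rule - is invented for the sake of the
sketch, not a real underwriting model):

```python
def assess(applicant, approved, rejected):
    """Label an application by whichever past decisions it sits closest to.

    A deliberately crude stand-in for "humans review the form and make a
    risk assessment": the output is deduced from prior examples, not from
    hand-written rules.
    """
    def dist(a, b):
        # Euclidean distance over the shared numeric features.
        return sum((a[k] - b[k]) ** 2 for k in a) ** 0.5

    closest_ok = min(dist(applicant, h) for h in approved)
    closest_bad = min(dist(applicant, h) for h in rejected)
    return "approve" if closest_ok < closest_bad else "review"

# Hypothetical past decisions (units and values are made up).
approved = [{"income": 80, "debt": 10}, {"income": 60, "debt": 5}]
rejected = [{"income": 20, "debt": 40}]

print(assess({"income": 70, "debt": 8}, approved, rejected))  # approve
```

The CRUD wrapper would just persist the forms; the model above (or a real one
trained the same way) replaces the human judgment step.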

------
wodencafe
It seems that so many people have become complacent in working 8 hours a day,
5+ days a week.

When an idea comes along that threatens this paradigm, the FUD Machine gets to
work.

Maybe people weren't meant to spend such large amounts of their time on
"work"?

------
ChuckMcM
I think this captures a lot of what it going on in an interesting way. In many
ways 'machine learning' is yet another high level compiler to convert intent
into something executable by circuits.

Consider the evolution of computers;

'plugboards' -- where you physically rewired them to change their program.

'punch cards' -- where physical media held the list of steps to execute.

'programs' -- where a text specification is compiled into a bag of bits
which can then be executed by the computer.

'scripts' -- where a textual description activates different bags of bits,
depending on what the text says.

'databases' -- where selection criteria for which data is important to you
at the moment are fed into the selection mechanism for the bags of bits.

'machine learning' -- where the bags of bits are created by evaluating a
bunch of data through a pile of data selection operators and tuning the
execution based on data you consider 'good' and data you consider 'not
good'.

In all cases the basic idea is that you have a machine that you want to do X,
and it can do X through a set of steps Y. Coming up with the steps Y gets
harder and harder depending on the complexity of X.
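The 'machine learning' entry above can be sketched in a few lines - a toy
perceptron (my example, assumed for illustration), where 'good' and 'not good'
data nudges a bag of parameters until the steps Y emerge:

```python
def train(examples, epochs=100, lr=0.1):
    """Tune a tiny 'bag of variables' (two weights and a bias) so points
    labeled good (+1) land on one side and bad (-1) on the other."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, label in examples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else -1
            if pred != label:  # only nudge the parameters when wrong
                w = [w[0] + lr * label * x[0], w[1] + lr * label * x[1]]
                b += lr * label
    return w, b

# 'good' points have x0 > x1; 'not good' points are the reverse.
data = [((2, 1), 1), ((3, 0), 1), ((1, 2), -1), ((0, 3), -1)]
w, b = train(data)
```

Nobody wrote the rule "x0 > x1"; it fell out of the tuned weights - which is
the whole trick, and also why it is still, in the parent's sense, just
compilation by other means.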

It seems like magic, but really it's just another form of compiler. And that
relationship is made even clearer by the article when it points out that a
program that can play Go is not the same one that can play chess. What is more
salient is that no one has written a program that lets a computer "play" Go;
instead there is a program that, after being fed data about what humans did
when they were playing Go and the outcomes of what they did, tweaked a bunch
of parameters in a bag of variables such that when you put Go moves into the
bag, it comes out with a Go move that would be a good response.

No, I'm not trying to be silly here, we have yet to create a system where you
could simply explain the _rules_ of Go and have it devise a set of steps to
play Go at the master level. That conceptualization of the binding between the
rules and how those rules affect play and strategy is essentially the 'code
generator' part of a compiler, which takes an AST and generates executable
code.

Machine learning today helps us write programs to manipulate complex data sets
faster than we could before, just as compilers let us write programs faster
than doing so in assembler, and assembler was an improvement over plugboards.
It does not get us any closer to having a computer that can look at a data set
and tell us what is important about it. That would be a better test of
'intelligence', I think.

~~~
danans
> we have yet to create a system where you could simply explain the rules of
> Go and have it devise a set of steps to play Go at the master level

I don't play Go, but I don't think a human could do this, either. While we can
grok a set of rules to get us started on a problem, to really master complex
problems we also need examples and repetition, though far fewer examples than
are needed by the current state of the art in ML.

~~~
ChuckMcM
I understand what you are stating; I see it like this:

You could program a computer to 'understand' better and worse Go play, then
start it up and have it play both Go programs and human players on the
Internet, and it would get better over time until no one could beat it.

Alternatively you write a program to predict the 'next' move in a Go game, and
you process through it a million previously played games to tune its
probability weights.

The latter is 'machine learning', the former is 'programming', but I assert
they are both programming, just using different compilation tool chains.

~~~
petra
In some sense the human programmer is also "a compiler", which needs a domain
expert to "program" him to do his job.

So maybe one interesting division isn't between types of technology, but
about when a domain expert prefers working with a tool rather than a
programmer.

But also, maybe the game example isn't the best here. One thing I noticed in
the past, reading through the academic literature, is that often you see
researchers just plug machine learning into problems that highly skilled
humans have struggled with for years, and get good results.

------
soniido
I have just submitted a post related to this. I wonder if those with a lot of
power and money are worried because AI applied to economics and redistribution
could reach a sound conclusion about another way to trade or run the economy
that would go against the best interests of those with huge money and power.
For people starving or with a lot of problems, AI can be a good thing; for
people controlling huge power and money, AI is a more serious threat, because
it could change the status quo before the winner takes it all.

~~~
Tenoke
I don't think any of the people who have expressed worry over AI are that evil.

~~~
soniido
Unfortunately, no taking any action to improve our world is not considered
evil by many people, laise faire, let's the world go on. If all your life is
about trying to be successful you are not evil by social standard, but perhaps
you can make a lot of harm. Long time ago there was a post by the well known
Steve Yegge, who by the way don't post any more, wondering why they are making
photos of cats to get people attention instead of prosecuting other more
important for the world roles, and one of his last post is about a game in
which the art of making people addict to games, don't recall the English word
for that, is really the goal to win money), just some quick ideas about huge
power entities using resources for solving world problems.

------
excalibur
> Specifically, once a task formerly thought to characterize artificial
> intelligence becomes routine — like the aforementioned chess-playing, or Go,
> or a myriad of other taken-for-granted computer abilities — we no longer
> call it artificial intelligence.

You're using games as an example. The gaming industry has been using the term
AI consistently for a very long time. You may actually want to look to them
for a better definition than you're getting from the whims of some undefined
"we".

------
perseusprime11
Elon Musk is Chicken Little when it comes to A.I. We are very early even with
ANI, and are stretching reality with examples like Alexa, self-driving,
AlphaGo, facial recognition, etc. I want to see more ANI in day-to-day
improvements before we can even say ANI is good. But I agree with the larger
point: AI has arrived.

------
maverick_iceman
At times, I feel that Musk is a con-man, selling dreams to naive and
impressionable techies.

~~~
toss1
Except that with the 'dreams' that he sells, the rubber actually meets the
road and goes from 0-60mph in 2.27 seconds, or puts real satellites in orbit
and returns the booster to be re-used, or builds the world's largest-scale
Li-Ion battery factory to drive down costs...

Of course he's got some serious elements of self-promotion; it's required to
build such a business. In his case, he manages to back it up in ways that most
other self-promoters don't even begin to achieve.

(disclosure, I do own a bit of Tesla stock which has done nicely, also lost
some with SolarCity, but didn't ride it all the way down to the buyout)

------
czep
I hate sans serif - every time I see "AI" in a headline I wonder, who is Alan?

~~~
tlb
OpenAI had a custom "I" designed for the logotype for exactly this reason:
[https://openai.com/](https://openai.com/).

------
Entangled
"A bicycle of the mind"

Jobs was not only a great showman, he was definitely a genius.

