
To Understand the Future of AI, Study Its Past - lelf
https://www.forbes.com/sites/robtoews/2019/11/17/to-understand-the-future-of-ai-study-its-past/
======
Animats
We still have a huge gap in the middle between symbolic AI and machine
learning. Basically, what the cerebellum does - manage short-term activity.
"Common sense" at the low level.

Not much going on in this area. Which is why robot manipulation in
unstructured situations still sucks, after 50 years.

~~~
lisper
The biggest breakthrough in AI will come when someone builds a robot that can
fold a shirt.

~~~
nl
Yeah, someone made a big claim about it being incredibly hard for robots to
deal with cloth. It's true that it's tricky, but the shirt-folding problem has
been solved for a while.

[https://foldimate.com/](https://foldimate.com/)

[https://www.theverge.com/2018/1/10/16865506/laundroid-laundr...](https://www.theverge.com/2018/1/10/16865506/laundroid-laundry-folding-machine-foldimate-ces-2018)

~~~
lisper
"The Foldimate demo I saw wasn’t actually a fully working prototype, meaning
clothes-folding technology wasn’t all there yet."

So not quite solved just yet. And I'll give you long odds against them working
out the kinks in the next five years.

~~~
friendlybus
A general purpose cloth folder is difficult, but standardized pieces are
covered.

[https://youtu.be/OpiFp9i6owY?t=209](https://youtu.be/OpiFp9i6owY?t=209)

~~~
Animats
It's tough, but the industrial laundry business more or less has this working.
Not much AI, lots of compressed air.

------
gumby
This is a surprisingly well done introduction for people who weren't exposed
to the first 50 years of AI research.

------
roenxi
With AlphaGo in 2015 we crossed an important bridge in the Turing Test:
before AlphaGo, we could be pretty certain that a high-level Go game was
played by a human. After AlphaGo, we can't be sure.

At this point a display of great competence no longer means that a human did
it. Sure, all the parts haven't yet been connected to make self-driving
cars, general game-playing AIs, or conversation bots. But on _any_ discrete
task it is no longer reasonable to say "computers can't possibly do that" the
way I could about a Go game between two 9-dan players in 2010.

I can still say "not commercially viable" and I can point to specific attempts
that don't work, but computers are now on the same threshold as humans - [data
+ time = results]. It may be more data and more time than a human, but that is
a big change from [logically modelled domain = results] which is where we were
before in AI.

> ...AI systems whose actions cannot be closely scrutinized and explained...

Author is one of the large group of people who are in for a shock when they
try to scrutinize and explain a human's crazy actions. I can't even explain
why I sometimes get the wrong result when I add numbers in my head.

~~~
pmoriarty

>> ...AI systems whose actions cannot be closely scrutinized and explained...

> Author is one of the large group of people who are in for a shock when they
> try to scrutinize and explain a human's crazy actions. I can't even explain
> why I get the wrong result sometimes when I add numbers in my head.

I was just rereading Asimov's _I, Robot_, and was struck by this passage:

 _" The cotton industry engages experienced buyers who purchase cotton. Their
procedure is to pull a tuft of cotton out of a random bale of a lot. They will
look at that tuft and feel it, tease it out, listen to the crackling, perhaps,
as they do so, touch it with their tongue, and through this procedure they
will determine the class of cotton the bales represent. There are about a
dozen such classes. As a result of their decisions purchases are made at
certain prices, blends are made in certain proportions. Now these buyers can
not yet be replaced by the machine._

 _" Why not? Surely the data is not too complicated for it?_

 _" Probably not, but what data is this you refer to? No textile chemist knows
exactly what it is that the buyer tests when he feels a tuft of cotton.
Presumably there's the average length of a thread, their feel, the extent and
nature of their slickness, the way they hang together and so on. Several dozen
items, subconsciously weighed out of years of experience. But the quantitative
nature of these tests is not known. Maybe even the very nature of some of them
is not known. So we have nothing to feed the machine. Nor can the buyers
explain their own judgment. They can only say, 'Well, look at it. Can't you
tell it's class such and such?'"_

Asimov wrote this in the 1940s, and in this passage he tried to illustrate how
unquantifiable and impenetrable human judgment was compared to artificially
intelligent robots, which he saw as ultimately rational calculating machines.
Ironically, today AI techniques such as neural networks are criticized for
making much the same sort of impenetrable judgments.

~~~
aidenn0
Asimov also assumed we wouldn't have AI at all if we couldn't mathematically
prove certain things about it, and instead we are handing over deadly tasks to
AI without any knowledge of how it works.

~~~
galaxyLogic
Yes, Asimov assumed AI would be based on logic; he was not aware of neural
networks, which I think had not been invented yet. It should have been clear,
though, that if we grow brains in vats they might greatly resemble human
brains.

------
fredguth
Excellent historical overview. However, I am astounded by the idea that
human intelligence is explainable. It is not. We do not know why people decide
the way they decide. The explanation is always a narrative created subsequently.

~~~
TeMPOraL
> _The explanation is always a narrative created subsequently._

Only in the most tautological sense: an explanation has to take the form of a
narrative in order to be communicated. As for the correctness aspect, I recall
reading on HN recently that the study which claimed to demonstrate that such
explanations are wrong had serious issues with its data science.

------
account73466
We should view AI in the context of the technological revolution. You all have
a good approximation of what might happen. The question is what to do.

I believe that since the population of intelligent agents will grow
exponentially, and soon, one should consider one of the following
options:

i) Do pretty much the same as before.

ii) Join a FAANG company.

iii) Do PhD+ level research in ML/CS/AI.

iv) Make money as fast as you can.

v) Some option I didn't think about.

If you pick (i), it seems likely that your kids (if any) will have no money
and/or power to remain relevant. Pick (ii)+(iv) or (iii)+(iv). The latter
seems preferable, but it is harder.

~~~
CuriouslyC
The thing most people don't realize is that advances in AI are going to (at
least for the medium term future) produce high powered tools rather than true
autonomous agents. The people who rise to the top of the new economy enabled
by AI will be those that learn to take full advantage of the tools, and who
have those traits that machines lack, i.e. creativity and vision.

~~~
account73466
>> who have those traits that machines lack, i.e. creativity and vision

I would go further and say that once machines are able to lie better than
humans, we are done.

>> produce high powered tools rather than true autonomous agents

including tools which will help tell a compelling story and/or lie to
people. We have seen precursors of these tools appearing over the last few
years.

------
bwang29
2020 is perhaps one of the best years in which to study the origin story of
A.I. What a time we are living in!

------
Veedrac
> Connectionism is at heart a correlative methodology: it recognizes patterns
> in historical data and makes predictions accordingly, nothing more. Neural
> networks do not develop semantic models about their environment; they cannot
> reason or think abstractly; they do not have any meaningful understanding of
> their inputs and outputs.

And the symbolists make the same mistake they did the first time. Imbuing
programs with nicely-named symbols and hard-coded logic does not endow them
with understanding; it arrests their ability to learn it.

Simple programs do not _understand themselves_; they have no more awareness
of the logic they are running than a mouse has of its neurons. Symbols and
their programs represent understanding only insofar as they map to concepts
in the programmer's mind, not their own. Understanding must be something that
partakes in the computation, not in the definition of the program.

Classical chess AIs, built with perfect chess simulators, idealized search,
and expert heuristics, are entirely isolated from the semantics of their
programs despite flowing through interpretable and semantically sensible
code that humans have built—for the names and the layout of the data
structures are not properties that the program itself has any access to.
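To make that concrete, here is a toy sketch of the classical style (the
heuristic, the piece values, and the simplified "game" are all hypothetical
illustrations, not a real engine). Names like `material_score` carry meaning
for the programmer, but the running program never inspects them:

```python
# Classical, symbolic-style game search: every identifier below is
# meaningful to a human reader, but the program has no access to its
# own names or structure while it runs.

PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}  # hand-coded expert knowledge

def material_score(board):
    """Expert heuristic: material balance; positive favors uppercase (white)."""
    score = 0
    for piece in board:
        value = PIECE_VALUES.get(piece.upper(), 0)
        score += value if piece.isupper() else -value
    return score

def minimax(board, moves_fn, apply_fn, depth, maximizing):
    """Idealized search: exhaustively evaluate the move tree to `depth`."""
    moves = moves_fn(board)
    if depth == 0 or not moves:
        return material_score(board)
    values = [minimax(apply_fn(board, m), moves_fn, apply_fn,
                      depth - 1, not maximizing) for m in moves]
    return max(values) if maximizing else min(values)

# Toy "game": a board is a string of pieces; a move captures one black piece.
moves_fn = lambda b: [i for i, p in enumerate(b) if p.islower()]
apply_fn = lambda b, i: b[:i] + b[i + 1:]
print(minimax("QPp", moves_fn, apply_fn, 1, True))  # prints 10
```

The point of the sketch: `PIECE_VALUES` encodes the programmer's
understanding, and nothing in the computation lets the program examine or
revise those symbols.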

Despite the limitations of machine learning, this cannot unreservedly be said
of neural networks, which are demonstrably extracting semantically meaningful
latent spaces from highly complex inputs _as part_ of their computation. ML is
still not self-reflective in any useful sense (it cannot hear itself think; it
does not see itself learn), but at least it is handling the first level of the
task on top of which we might conceivably build understanding. To MuZero, a
game of chess is _an aspect of its network that it could perceive_, at least
within a given branch of its search. And we know these networks must be
building something at least knowledge-analogous, since how else could a
network like GPT-2 answer questions (however imperfectly) across a range of
out-of-domain tasks, like knowledge retrieval and translation?

And this is why Gary Marcus' position (besides his repeated telling of false
claims) misses the point. Yes, we should embed programs with priors and
reasoning beyond brute connectionism—and to that most people agree—but this
understanding _cannot_ live in the symbols; it must by necessity live in _the
structure of the computation_, and this structure must itself be accessible
to the AI. It is this latter thing that the ML community is already doing, in
likely the majority of ML papers, of which the convolutions Gary likes so much
are just the tip of the mountain. It is this latter thing that explains, much
against the grain of Gary's claims, why MuZero is a better network than
AlphaZero.

~~~
joe_the_user
It seems like everyone says the problem is a "lack of understanding," but it
doesn't seem like anyone "understands understanding" at the level this
seems to require.

>" _Yes, we should embed programs with priors and reasoning beyond brute
connectionism—and to that, most people agree;—but this understanding cannot
live in the symbols, it must by necessity live in the structure of the
computation._ "

I'd argue it needs to live in the computation _and_ in the symbols, and be
able to move transparently between the two. But how to make that work is still
unknown.

~~~
galaxyLogic
I'd argue neural networks need to evolve to a stage where they create symbolic
representations and invent a language that allows neural nets to communicate
and learn from each other. That would probably require that they also develop
a symbolic representation of 'self': my knowledge versus what is communicated
to me by other neural beings, via some language we can all use to exchange
symbolic representations.

------
freddealmeida
Daniel Kahneman has an interesting thought on this: “I am puzzled by the
number of references to what AI ‘is’ and what it ‘cannot do’ when in fact the
new AI is less than ten years old and is moving so fast that references to it
in the present tense are dated almost before they are uttered. The statements
that AI doesn’t know what it’s talking about or is not enjoying itself are
trivial if they refer to the present and undefended if they refer to the
medium-range future—say 30 years.” See edge.org for more.

~~~
haspok
I think a lot of the confusion comes from the fact that everybody defines "AI"
differently.

To me, AI = self consciousness, anything less is just fancy ML.

If you ask the question "are we going to have self-conscious machines in 30
years" I would bet against it. If anyone has any reason to bet _for_ it, I'd
like to hear those reasons :)

------
DanielleMolloy
Can symbolism be defined as a _reductionist_ approach? Did symbolism fail
because it did not acknowledge computational irreducibility of intelligent
systems?

------
6gvONxR4sf7o
This article downplays some benefits of symbolic approaches. For example, a
set of logical statements can be handed to a SAT solver, concrete objects can
be stored in a relational database or traversed with a graph search, and
theorems can be proved symbolically.
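As a minimal illustration of the first point, logical statements in CNF can
be checked mechanically. This is a toy brute-force sketch only (the function
name and clause encoding are my own; production code would call a real solver
such as MiniSat or Z3):

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Find a satisfying assignment for CNF clauses, or None.
    Clauses are lists of nonzero ints: 3 means x3 is true, -3 means x3 is false."""
    for bits in product([False, True], repeat=n_vars):
        # A clause is satisfied if any one of its literals is true.
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return bits
    return None

# (x1 OR x2) AND (NOT x1 OR x2) AND (NOT x2 OR x3)
print(brute_force_sat([[1, 2], [-1, 2], [-2, 3]], 3))  # prints (False, True, True)
```

Real solvers replace the exponential enumeration with clause learning and
unit propagation, but the interface (statements in, model or UNSAT out) is
the same.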

------
williamDafoe
Neural network fads come around every 25 years (1960s: killed on purpose by
Minsky with his book _Perceptrons_; Minsky wanted everybody to work on his
type of AI, symbolic AI, which went nowhere. Late 1980s. 2010s.) They usually
die out, leaving no trace behind. Maybe this time it's different, imho because
of Google Translate. Most neural networks don't actually earn any money. But
in general they make super slick demos, that's for sure.

~~~
Der_Einzige
Yeah, this is a totally wrong take.

It's like you haven't seen the advancements in CV or NLP (not involving
translation) made possible entirely by fancy neural network architectures.

------
jacobsenscott
> Deep learning's recent accomplishments have been nothing short of
> astonishing.

For example...

~~~
thfuran
You wouldn't believe it anyways.

~~~
pts_
Try us.

------
DanielleMolloy
Are connectionist / neural network models commercially more successful than
symbolist approaches have ever been? Or is the current AI decade comparable to
the 80s culminating in expert systems?

~~~
dgacmu
Much more successful. (and still perhaps just as over-hyped. :-). Most Google
searches you do were influenced by a neural network prediction. The ads you
see from Google and Bing were predicted by statistical machine learning
models. When you talk to your phone or Google home or Alexa or Siri, the
speech to text is performed by a deep neural network. When you (don't get as
much) spam in your inbox? DNN. When you search your iPhoto album for cats?
DNN. List goes on and on.

DNNs are one of the best tools we have for bringing uncertain, hugely complex
real world audio, text, and visual data into a form where we can manipulate it
symbolically or mathematically with traditional programming. In other words,
they bring more domains into the scope of automation. (Observe that none of
what I just said sounded like "AI" as popularly imagined).

------
imvetri
To understand yourself, Study your childhood.

------
purplezooey
I mean... symbolic AI? Isn't that like Prolog and all that stuff... might as
well be VAX/VMS

------
vonnik
This article basically makes Gary Marcus's argument, and the argument is
getting a bit tired. Geoff Hinton puts it more succinctly than I could:

[https://twitter.com/tabithagold/status/1070736319901519876](https://twitter.com/tabithagold/status/1070736319901519876)
[https://twitter.com/tabithagold/status/1071189769499996161](https://twitter.com/tabithagold/status/1071189769499996161)

"....The old-fashioned car manufacturers said 'We believe in electric motors
too, and we can derive electric motors by injecting petrol into the engine."

There is a persistent lure to symbolic AI, and it is the lure of
anthropomorphization, of thinking that AI must be smart in the way humans are
smart, by manipulating symbols.

But natural language is not so much a central expression of machine
intelligence as it is of human intelligence. Humans confuse linguistic
aptitude with intelligence. Symbolic manipulation will be at best an API by
which machines relate to humans as the machines get smarter and smarter:
symbolic tools conform to the bandwidth limitations of humans, constraints
that our current machines don't face. (Historically they did face them, and
symbolic AI made more sense then, since neural-net training was infeasible.)

The lure of symbolic AI puts us in the bondage of old ideas, as Keynes would
say. Symbolic AI is the equivalent of replacing our economy's fiat currency
with an arbitrary supply of a yellow mineral.

I could go on, but I'll just link to this page, for those who are interested:
[https://pathmind.com/wiki/symbolic-reasoning](https://pathmind.com/wiki/symbolic-reasoning)

~~~
naveen99
Natural language is high-dimensional and hard to do convolutions with. That's
why you need dimensionality-reducing embedding layers. There isn't anything
magical about symbols; their space just has less structure than simple
numbers.
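The embedding-layer point can be sketched in a few lines (the sizes are
arbitrary illustrations, and a real layer's table is learned by gradient
descent rather than drawn at random):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, embed_dim = 50_000, 128   # assumed sizes, for illustration

# An embedding layer is just a lookup table: each token id indexes one
# row of a (vocab_size x embed_dim) matrix, replacing a sparse
# 50,000-dim one-hot vector with a dense 128-dim one.
embedding = rng.normal(scale=0.02, size=(vocab_size, embed_dim))

token_ids = np.array([4, 17, 4, 9032])      # a toy "sentence"
dense = embedding[token_ids]
print(dense.shape)                          # prints (4, 128)

# Equivalent (but wasteful) one-hot formulation:
one_hot = np.zeros((len(token_ids), vocab_size))
one_hot[np.arange(len(token_ids)), token_ids] = 1.0
assert np.allclose(one_hot @ embedding, dense)
```

The row lookup and the one-hot matrix product are the same operation; the
lookup simply skips multiplying by 49,999 zeros per token.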

