
Human-Level Intelligence or Animal-Like Abilities? (2018) - Anon84
https://cacm.acm.org/magazines/2018/10/231373-human-level-intelligence-or-animal-like-abilities/fulltext
======
mensetmanusman
Surprised I haven’t come across this analogy before. Thinking about it now,
‘Animal-like abilities’ is a much better framing for the general public. It
also helps ground scientists in thinking deeply about the difference between
human and animal intelligence (some animals can use tools).

~~~
throwaway_pdp09
This is vague, but it was a while ago... some researcher was trying to make AI
more accessible to the public by downplaying high expectations. She
represented it as a cartoon dog: amiable, willing, but evidently none too
bright. I believe it was no more than an experiment, though a clever one. Ring
any bells?

------
andra1
Francois Chollet tries to address some of these problems by tackling how
intelligence is measured. Most AI systems are measured by task performance,
but he argues that intelligence is about "skill-acquisition efficiency".

[https://arxiv.org/abs/1911.01547](https://arxiv.org/abs/1911.01547)
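
As a rough sketch of the distinction he draws (my own toy illustration with
made-up numbers, not Chollet's actual formalism, which is defined via
algorithmic information theory over priors, experience, and generalization
difficulty): two systems can end at identical task performance while differing
by orders of magnitude in how efficiently they acquired the skill.

```python
# Toy metric for intuition only; hypothetical numbers, not from the paper.

def skill_acquisition_efficiency(initial_skill: float,
                                 final_skill: float,
                                 examples_seen: int) -> float:
    """Skill gained per training example consumed."""
    return (final_skill - initial_skill) / examples_seen

# Both learners reach the same final task performance (0.95)...
sample_efficient = skill_acquisition_efficiency(0.0, 0.95, examples_seen=100)
data_hungry = skill_acquisition_efficiency(0.0, 0.95, examples_seen=10_000_000)

# ...but differ by five orders of magnitude in acquisition efficiency.
print(f"sample-efficient learner: {sample_efficient:.2e} skill/example")
print(f"data-hungry learner:      {data_hungry:.2e} skill/example")
```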

------
op03
Ant-level intelligence I'd be happy with. "Colony, you may now clean up my
kitchen table and do the dishes. Thanks"

~~~
TeMPOraL
And then you keep thinking about how fast cereal is disappearing in your
house, wondering whether you're just so sleepy in the morning you don't
remember eating more and more of it - until one night you spot a string of
AInts stealing your cereal, piece by piece. By the time you've realized your
mistake, the colony has grown 100x in size and already spread to the two
neighboring houses, held back only by the front line of an ongoing war with
your friend's AInt colony one street down.

~~~
op03
:) that would entertain me, having witnessed a couple of ant-termite wars in
the garden.

------
YeGoblynQueenne
The title of the article does an awful job of summarising it. The main subject
of the article is not about the level of intelligence of current or future AI,
as the title suggests. Instead, the article is a reflection on the progress in
the field in the last few years and a discussion of the degree to which
progress in deep learning has benefited, or harmed, AI research in general.

This is best summarised by the "Key Insights" box on top of the article:

 _> the recent successes of deep learning have revealed something very
interesting about the structure of our world, yet this seems to be the least
pursued and talked about topic today_

 _> In AI, the key question today is not whether we should use model-based or
function-based approaches but how to integrate and fuse them so we can realise
their collective benefits_

 _> We need a new generation of AI researchers who are well versed in and
appreciate classical AI, machine learning, and computer science more broadly
while also being informed about AI history._

The first "key point" refers to classes of functions that can be seen as
"cognitive functions". For example, mapping a set of inputs to outputs can
reasonably be considered as approximating some aspect of cognition when the
inputs are regions of images and the subjects their labels, so that the
function performs object recognition, a task that AI research has long
considered an aspect of cognition. Understanding how such functions work has
the potential to contribute to our understanding of human cognition, that has
long been a major goal of AI research. Yet, in recent years, interest has
shifted from understanding such results to applying them at the level of phone
apps, etc.
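
To make the "function" framing concrete, here is a minimal sketch (my own toy
example with synthetic data, not anything from the article): object
recognition reduced to fitting a mapping from inputs to labels with the
simplest possible function approximator.

```python
# Object recognition as a learned input -> label mapping. The "images"
# here are synthetic 16-dimensional feature vectors; real systems learn
# far richer functions with deep networks, but the framing is the same.

import numpy as np

rng = np.random.default_rng(0)

# Two classes of stand-in "images", drawn around different means.
cats = rng.normal(loc=0.0, scale=1.0, size=(50, 16))
dogs = rng.normal(loc=2.0, scale=1.0, size=(50, 16))
X = np.vstack([cats, dogs])
y = np.array([0] * 50 + [1] * 50)  # 0 = "cat", 1 = "dog"

# Fit a linear classifier with the classic perceptron update rule.
w, b = np.zeros(16), 0.0
for _ in range(20):
    for xi, yi in zip(X, y):
        pred = int(w @ xi + b > 0)
        w += (yi - pred) * xi
        b += yi - pred

# The learned function f: image -> label is the "cognitive function";
# *why* such fitted functions generalize is the underexplored question.
test_image = rng.normal(loc=2.0, scale=1.0, size=16)
print("predicted label:", "dog" if w @ test_image + b > 0 else "cat")
```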

The final key point is a call to arms. We can't make progress as a field by
throwing out everything we've done before, every time we achieve some success
in a narrow range of tasks. The author witnessed this happening in the 1980s
with the rise and fall of expert systems, and the AI winter that followed. In
modern times, the success of deep learning has all but eclipsed the deep
knowledge that researchers in the field once possessed about symbolic logic,
and important avenues of research are impossible to follow because the younger
generation of researchers simply don't have the necessary background - and are
"bullied by the success" of neural networks into directing their careers
towards neural network research, whatever their true interests.

On a personal level, not summarising the article anymore, the latter is the
most disturbing development. Neural networks can perform "perceptual" tasks,
but are wholly incapable of reasoning. Symbolic AI had reasoning down pat, and
not in approximate fashion (as a recent trend in deep learning research
attempts to perform it). Yet, we seem to have regressed and lost the ability
to perform one set of cognitive tasks in the process of figuring out how to
perform another.
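
For contrast, the flavor of exact reasoning that symbolic AI handled routinely
fits in a few lines - a minimal forward-chaining sketch over a made-up rule
base, where every conclusion is a strict derivation rather than an
approximation.

```python
# Minimal forward chaining over Horn-style rules (made-up example).
# Each rule is (body, head): if every atom in body is a known fact,
# head can be derived. Iterate until a fixed point is reached.

def forward_chain(facts, rules):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= derived and head not in derived:
                derived.add(head)
                changed = True
    return derived

rules = [
    ({"human"}, "mortal"),
    ({"mortal", "philosopher"}, "ponders_mortality"),
]

print(forward_chain({"human", "philosopher"}, rules))
# {'human', 'philosopher', 'mortal', 'ponders_mortality'}
# Every derived fact follows necessarily from the rules: no confidence
# scores, no approximation, and the full derivation chain is auditable.
```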

In the past, AI researchers were well-rounded polymaths, versed in CS but also
(continuous) mathematics, physics, psychology, linguistics... Nowadays,
researchers seem to be optimising for a narrow band of knowledge and ignoring
everything else. This cannot end well.

------
api
Careful... you're on the verge here of realizing that intelligence is not a
single-dimensional thing and that 'g' is bullshit.

~~~
natch
Trollish comment above, but I do wonder about this. I always hear people who
think ‘g’ is bullshit giving some form of this argument:

“AI can’t even x, how would you expect it to ever possibly do y?”

It’s like saying:

“How could a baby, which can not even tie its own shoes, spell a simple word,
or fill a cup with water, ever possibly grow up to write a book, manage a
corporation, or formulate a strategy for fighting a war? Impossible!”

It’s almost as if they think babies can’t learn. The AI is not a baby though,
they say. Yeah, it’s not. But it’s a constantly learning body of work that is
not regressing and not slowing down. Why would it not get to ‘general’
eventually?

~~~
GuiA
We have billions of examples of babies growing up to learn to spell and tie
their shoes, so it's very reasonable to assume that those are skills most
babies will acquire.

There are 0 examples of a man-made construct reaching "general" intelligence
(whatever fuzzy meaning you may attach to it); there is no reason to believe
that because we can reach some arbitrary human milestone today ("be a world
class Go player"), any other human milestone ("start and run a company, and be
a mentor and inspiration to others") is plausible.

This is a very different case from e.g. "we built a prototype plane that can
lift 15 ft off the ground for 10 seconds, we should be able to build one that
can lift 10,000 feet for 10 hours", because in the plane case we were
beginning to have an understanding of aerodynamics that made that path
plausible. We do not have such an understanding backing current ML work.
That's well illustrated by the fact that only 16 years elapsed between the
first powered flight (1903) and the first nonstop Atlantic crossing (1919) -
practice caught up to theory really fast, as it tends to do in those cases. We
are about 6 years into the current ML revolution - what is its equivalent of
crossing the Atlantic?

The logic you're advocating for here is more along the lines of "I taught my
dog how to sit - shouldn't be too hard to get him to sit in front of a
computer and start writing Python code now that I've got the first half
figured out".

(or - I taught my dog how to fetch - shouldn't be too hard to train him to be
a full time food delivery employee, etc).

~~~
luckylion
> There are 0 examples of a man-made construct reaching "general" intelligence
> (whatever fuzzy meaning you may attach to it); there is no reason to believe
> that because we can reach some arbitrary human milestone today ("be a world
> class Go player"), any other human milestone ("start and run a company, and
> be a mentor and inspiration to others") is plausible.

 _No_ reason? If you were an outside observer looking at the very first
individuals of what you decided to call human, maybe. But after so many steps
were made? You can argue that, before there was a plane, there was zero reason
to think that humans could ever create one. And you could've claimed that for
pretty much anything else. And you'd generally see that they did get created.
Why would artificial intelligence be _fundamentally_ different? I'm not saying
"we can do it in the next five years", but to say that there's no reason to
believe that it will _ever_ happen?

> We do not have such an understanding backing current ML work.

And we did not have knowledge of aerodynamics before we started looking into
throwing things, then keeping things in the air longer than a thrown object
would stay up on its own, and eventually flying them (yes, a bit simplified).

