
The Concept of ‘Cat Face’ - campbellmorgan
http://www.lrb.co.uk/v38/n16/paul-taylor/the-concept-of-cat-face
======
gomox
A friend of mine did his PhD thesis on automated music synthesis and as a
subproject came up with a model to study music transcriptions that he got from
the Peachnote corpus [1].

One of his preliminary results was that his algorithms successfully
"discovered" 4 big classical music movements on their own, i.e. without any
prior labelling or classification, by using clustering algorithms. He posted
about it on his blog with a link to his paper [2].

I always had a hard time explaining to non-computer people how amazing that
seems.

[1] [http://www.peachnote.com/](http://www.peachnote.com/)

[2] [http://pablozivic.com.ar/post/51774763596/perceptual-basis-of-evolving-western-musical](http://pablozivic.com.ar/post/51774763596/perceptual-basis-of-evolving-western-musical)
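For a flavor of how unsupervised clustering can separate styles without labels, here is a minimal, purely illustrative sketch. The features (melodic-interval histograms), the toy "pieces", and the naive k-means are my own inventions for illustration; they have nothing to do with the actual model in the paper.

```python
# Toy sketch: cluster melodies by their interval histograms, no labels given.
import random
from collections import Counter

def interval_histogram(notes, span=12):
    """Normalized histogram of melodic intervals, clipped to +/- span semitones."""
    counts = Counter(max(-span, min(span, b - a)) for a, b in zip(notes, notes[1:]))
    total = sum(counts.values())
    return [counts.get(i, 0) / total for i in range(-span, span + 1)]

def kmeans(vectors, k, iters=20):
    # deterministic init: evenly spaced seed vectors
    step = max(1, len(vectors) // k)
    centers = [vectors[i * step] for i in range(k)]
    def nearest(v):
        return min(range(k),
                   key=lambda c: sum((a - b) ** 2 for a, b in zip(v, centers[c])))
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in vectors:
            groups[nearest(v)].append(v)
        for j, g in enumerate(groups):
            if g:  # recompute each center as the mean of its group
                centers[j] = [sum(col) / len(g) for col in zip(*g)]
    return [nearest(v) for v in vectors]

rng = random.Random(1)
def walk(choices, n=200, start=60):
    m = [start]
    for _ in range(n):
        m.append(m[-1] + rng.choice(choices))
    return m

stepwise = [walk([-2, -1, 1, 2]) for _ in range(5)]  # smooth, scale-like motion
leapy = [walk([-7, -5, 4, 7]) for _ in range(5)]     # wide, arpeggio-like leaps

vectors = [interval_histogram(m) for m in stepwise + leapy]
labels = kmeans(vectors, k=2)
print(labels)  # the two families should land in two distinct clusters
```

With no prior labelling, the clustering separates the two melodic "styles" purely from their statistical fingerprints, which is the same kind of result, in miniature.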

~~~
derefr
It doesn't seem all _that_ amazing to me.

Humans have to be trained to perceive the differences in complex streams of
information; by default, they just perceive the low-level features—a "wall of
noise."

Machine-learning algorithms, meanwhile, can use general techniques to notice
_information-theoretic_ properties of various pieces of data. Effectively,
computers can "do statistical aggregation" about as effortlessly as humans "do
hierarchical knowledge representation."

And, in an information-theoretic sense, the different "trends" throughout the
history of music _look_ different under statistical analysis. They're complex
in different ways; they have different "lumps"; different aspects of them can
or cannot be compressed together as redundancies.

If you would like to see this for yourself, simply feed a raw melodic note-
structure (not embedded in XML or anything) to any modern dumb compression
algorithm, and then look at the result in a hex editor, while also having the
source still open. You should quickly be able to recognize the
"transformational signature" that characterizes something like a chaconne, vs.
something like a sonata.
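A rough way to make that experiment concrete (the two note generators below are synthetic stand-ins of my own devising, not real transcriptions): run the raw note bytes through a generic compressor and compare the footprints.

```python
# Sketch: compression ratio as a crude "transformational signature".
# A repetitive form (chaconne-like ground bass) vs. a meandering,
# through-composed line; both are made-up data for illustration.
import random
import zlib

def compression_ratio(notes: bytes) -> float:
    return len(zlib.compress(notes, 9)) / len(notes)

rng = random.Random(0)

def chaconne_like(repeats=64):
    """A fixed 8-note ground bass, repeated with tiny ornamental changes."""
    ground = bytes(rng.randrange(40, 80) for _ in range(8))
    out = bytearray()
    for _ in range(repeats):
        var = bytearray(ground)
        var[rng.randrange(8)] += rng.choice([-1, 1])  # one-note variation
        out += var
    return bytes(out)

def through_composed(length=512):
    """A random walk with little large-scale repetition."""
    pitch, out = 60, bytearray()
    for _ in range(length):
        pitch = max(40, min(80, pitch + rng.choice([-4, -2, -1, 1, 2, 4])))
        out.append(pitch)
    return bytes(out)

a, c = chaconne_like(), through_composed()
# The repetitive form leaves a much smaller compressed footprint:
print(compression_ratio(a), compression_ratio(c))
```

The compressor has no concept of "chaconne", but the repeating ground bass shows up directly as redundancy it can exploit, while the through-composed line mostly cannot be compressed away.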

------
monkmartinez
Are the computers/AI already smarter than us?

My eight year old son is enamored with "Ok Google" on my phone. He can ask it
questions until we tell him "that is enough, let google rest"... and it is
very interesting to see where it takes him.

He has learned to tailor his questions to elicit a voice response in addition
to the actual google search. The questions must use keywords to achieve this
desired response. It is like a new form of boolean search logic, just used
verbally. Not only that, but "Ok, Google" according to him "knows
everything"...

We call it searching the internet to learn something; an eight-year-old has decided that "Ok, Google" already knows everything. He just has to ask to see a video of a Puff adder eating its prey and it will show him. In fact, there are not many things that have stumped "Ok, Google", and my son assumes the problem was with his question, not with the machine.

So to circle back to my original question: I didn't know about the Taipan snake or the smallest person in the world, and hadn't seen videos of Puff adders eating prey... If all I have to do is ask, isn't the machine already smarter than I am?

~~~
khedoros
Is an extremely large and well-indexed encyclopaedia smarter than you? It
certainly contains more information than your head does, so you could say it
"knows" more. Look at it from the other direction, though: If the machine were
smarter, would your son have to learn how to structure queries, or would the
machine learn how your son asks questions?

Put another way, if I can tell you (verbatim) what some person wrote about
quantum mechanics, but I can't rephrase it into some other form to help you
understand the information, then how smart am I? I can repeat rote information
without any analysis (just like a sheet of paper could), but I can't tell you
what it means. On the other hand, if I tell you enough information, you're
likely to connect it together meaningfully, and come to your own conclusions
and explanations about it. That seems like a fundamentally different kind of
"smart" than what OK, Google can do.

~~~
monkmartinez
> Is an extremely large and well-indexed encyclopaedia smarter than you?

Kind of... I mean, is it useless trivia or is it information that can be used? I can't use information that I don't know exists. The only way I learn what I don't know is by spending energy to search, read, and do, in the hope of comprehending.

What if "Ok, Google" had some kind of RNN or other ML technique that learned my son's questions and thought process based on how he uses the service? Who is to say they are not already doing that? The ads from Google are downright frightening with respect to the accuracy of what I am currently contemplating/researching.

To take this a bit further into "silly, not so silly" land...

With ML techniques, Deepmind, Watson, and others are showing that computers connected to this "encyclopedia" are besting their human counterparts. Is it a giant stretch to say that everything requires some input or energy to learn? Therefore, the only thing we are really missing is the wiring... that is, and I hate to use the "skynet" trope, but one day "it" will just turn on and there will be no looking back.

-------------------

>On the other hand, if I tell you enough information, you're likely to connect
it together meaningfully, and come to your own conclusions and explanations
about it. That seems like a fundamentally different kind of "smart" than what
OK, Google can do.

We still have a problem of accuracy. Why should I trust your perception and/or
your explanation of the concept? I realize this is a problem with computers as
well. However, anything outside of emotional arbitrage should be easy to
verify.

My own conclusions are biased as hell. I fully admit that is a problem, but it
is a problem shared by humanity. I am not saying Wikipedia isn't biased... but
it is 100x better in most cases than one single person's perception simply due
to scale.

I am rooting for advanced ML/AI... I also hope that "going analog" is always a
viable option.

~~~
khedoros
In my opinion, information storage and retrieval isn't "intelligence" on its
own. A system that implements it can't be said to be "smart". You could argue
"knowledgeable", but that's not the same thing.

Imagine that no web page on the internet says what happens when you mix red
and blue paint, but there are results for classic color wheels, the results of
mixing red+yellow, blue+white, blue+yellow, red+black, etc. Google will see
that the page has a bunch of color-related words and words about mixing and
paint, so you'll get a page result about that, if you ask it about the
red+blue combo. Google's awesome at that kind of thing. But it stinks at the
kind of reasoning that humans excel at: Opening that page, seeing
"blue+yellow=green", seeing that green is between blue and yellow on a color
wheel, and concluding that purple is between red and blue...then maybe
continuing on and learning about additive vs subtractive color, and such. The
human reader synthesized new information: red+blue=purple. Google organized
existing information.
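The inference step described above can be caricatured in a few lines. The color wheel and the midpoint rule here are entirely my simplification of what a human reader does; the point is that the unseen fact is derived, not retrieved.

```python
# Toy model of human-style inference: learn a rule from the observed
# mixes, then apply it to a pair no page mentions.
WHEEL = ["red", "orange", "yellow", "green", "blue", "purple"]

def mix(a: str, b: str) -> str:
    """Predict a paint mix as the midpoint of the shorter arc between
    the two colors on the wheel."""
    n = len(WHEEL)
    i, j = WHEEL.index(a), WHEEL.index(b)
    forward = (j - i) % n          # arc length going i -> j clockwise
    if forward > n - forward:      # take the shorter way around
        i, j = j, i
        forward = n - forward
    return WHEEL[(i + forward // 2) % n]

# The rule reproduces the examples the pages do contain...
assert mix("blue", "yellow") == "green"
assert mix("red", "yellow") == "orange"
# ...and generalizes to the combination no page states:
print(mix("red", "blue"))  # purple
```

A search engine ranks the pages that contain "blue+yellow=green"; it does not run this kind of rule to synthesize "red+blue=purple" on its own.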

> We still have a problem of accuracy. Why should I trust your perception
> and/or your explanation of the concept?

That's not the point. The point is that a thinking person can process
information in ways that we don't know how to make computers do (yet). We're
still distilling the concept of human intelligence down to its core.

I think Google and Watson are still at the level of using and presenting
already-generated information effectively, but not generating and organizing
their own new information, and not really "understanding" the things that
they're retrieving.

------
codeulike
Here's the bit about cats:

 _If this [huge google] network had been fed thousands of images labelled as
‘contains cats’ or ‘doesn’t contain cats’ and trained to work out the
difference for itself by iteratively tweaking its 1.7 billion parameters until
it had found a classification rule, that would have been impressive enough,
given the scale of the task involved in mapping from pixels to low-level image
features and then to something as varied and complex as a cat’s face. What
Google actually achieved is much more extraordinary, and slightly chilling.
The input images weren’t labelled in any way: the network distilled the
concept of ‘cat face’ out of the data without any guidance._

~~~
pavlov
The problem is that this network-contained concept of "cat face" is still a
symbolic representation. It's a much more complex algorithmic symbol than the
rules found in something like 1960s Eliza, but its understanding of the world
is on the same level.

You can't ask the "cat face" neural network anything about cats. It has no
idea what they actually are in relation to the world. A two-year-old human can
usually tell you more about cats than you'd care to hear.

~~~
empath75
You can't ask a child's visual cortex to tell you anything about cats, either.
But connect that visual recognition ai with something like Watson, and you
have what you're looking for, no?

~~~
pavlov
I just don't know. I'd be happy to see that be the case...

But I'm afraid it's going to be the equivalent of this cake recipe: "Take an
egg and a packet of sugar. Break the egg over the packet."

You certainly need eggs and sugar to make a cake, but on their own and
combined without understanding of the whole, you're not getting very far.

~~~
empath75
The human brain is mostly a bunch of ugly wetware hacks, not a single coherent
'intelligence' that does all the thinking. If you attach enough single purpose
AIs together you might get something much more human like than trying to
create a single neural network that does everything.

------
sosuke
There is a hope I have, one I see many others share, which is that our human intelligence is somehow special. That the leaps in logic we can perform make us unique. That we won't be outdone by a computer a million times faster or smarter than us, because we have something else that can't be replicated in transistors and circuits.

I love AI, but I have that hope too. That somehow we won't be made irrelevant
by our own creations. Makes me think of our autonomous vehicle fun taking over
the trucking industry. Millions of people made irrelevant through no fault of
their own.

~~~
ChuckMcM
I would guess a number of people put that in the 'fear' category: should it turn out not to be special, people will somehow feel diminished.

It would make a good punchline for a fictional story about people researching brain disorders and intelligence. It would work like this: the researchers in the story develop the means to 'cure' someone of all known neurological disorders. They try it on their test subject, and the result is someone who is perfectly happy just to be there, has no ambition or curiosity, and requires no entertainment or outside stimulus. The researchers recognize that the person is acting like an intelligent but non-sentient species, and they realize they have "undone" whatever happened to humans according to the Garden of Eden story in Genesis.

~~~
kwhitefoot
See Peter Watts' Blindsight.

~~~
ChuckMcM
Ok then, added that one to the queue. Thanks!

------
carapace
One of my favorite things to anticipate is "How are these hard-nosed rational
materialists going to cope when the AIs discern ghosts?"

What I mean is, some of the "cat faces" they identify will correspond to
things that are "real" but that also violate our assumptions about reality.
When this happens the typical reaction is to shut the door and burn the room.

~~~
Practicality
We're always bad at knowing what to do with unknown data, because, by
definition, when you don't know about it you don't know what to do with it.

Quite often it's just assumed to be error.

Those errors have traditionally led to great discoveries, though (such as the Mercury wobble: [https://en.wikipedia.org/wiki/Tests_of_general_relativity#Perihelion_precession_of_Mercury](https://en.wikipedia.org/wiki/Tests_of_general_relativity#Perihelion_precession_of_Mercury)), so if they are repeatable, then such "ghosts in the data" could turn up great fundamental truths.

Every bit of this is speculation though. I am not so sure that anything so
profound is going to turn up in AI datasets.

------
fegu
A great read. Well written and fascinating.

