
How Does Recent AI Progress Affect the Bostromian Paradigm? - r721
http://slatestarcodex.com/2016/10/30/how-does-recent-ai-progress-affect-the-bostromian-paradigm/
======
visarga
The author seems to draw a distinction between reinforcement learning (goal-
driven intelligent behavior) and human behavior, as if humans were beyond
optimizing a simple reward. We're not.

I don't think human cognition is open-ended; it is just maximizing a bunch of
reward channels (food, shelter, sex, avoiding pain, curiosity, creativity).
These reward channels were bred into us by evolution. They are what allow us
to continue existing, and that is the only criterion they had to meet.

Even the way we perceive time and space is very limited. Our concepts are not
universal, they are just good enough for us. AIs could surpass humans by
optimizing over a small set of rewards, hopefully better chosen than those we
evolved over millions of years.
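
To make the "bundle of reward channels" picture concrete, here's a minimal
sketch; the channel names and weights below are made up for illustration, not
measurements of anything:

  # Hypothetical reward channels with made-up weights (evolution's "choices").
  CHANNELS = {"food": 1.0, "shelter": 0.8, "avoid_pain": 1.5,
              "sex": 1.2, "curiosity": 0.5, "creativity": 0.4}

  def reward(state):
      # state maps channel names to satisfaction levels in [0, 1];
      # cognition, on this view, is whatever maximizes this scalar.
      return sum(w * state.get(name, 0.0) for name, w in CHANNELS.items())

  print(reward({"food": 0.9, "avoid_pain": 1.0, "curiosity": 0.3}))  # 2.55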

------
maxander
>I think this [AI beating humans at facial recognition] is scarier than most
people give it credit for. It’s no big deal when computers beat humans at
chess – human brains haven’t been evolving specific chess modules. But face
recognition is an adaptive skill localized in a specific brain area that
underwent a lot of evolutionary work, and modern AI still beats it

This could give us a new lower bound on the computation required for human-
level AI. "Facial recognition," IIRC, is a well-localized cognitive function
occupying some reasonably consistent fraction of a human's grey matter. The
processing power required to recognize a stream of faces via ML techniques can
be quantified. Simply divide the latter quantity by the former fraction and
you have the processing power required to functionally emulate a human brain,
no?
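
As a toy version of that calculation (both input numbers below are
placeholders I'm assuming for illustration, not real measurements):

  # Back-of-envelope: scale the compute needed to match one brain module
  # up to the whole brain, assuming compute scales with grey-matter share.
  face_fraction_of_brain = 0.002   # assumed share of grey matter doing face recognition
  ml_face_flops = 1e15             # assumed FLOP/s for an ML face recognizer at human parity

  whole_brain_flops = ml_face_flops / face_fraction_of_brain
  print(f"Implied whole-brain compute: {whole_brain_flops:.0e} FLOP/s")  # 5e+17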

Regarding his larger points: I don't have time to track down the reference,
but there are fairly consistent results showing that, for any sort of
classification deep net, small adversarial changes to the input can cause
arbitrary misclassifications at arbitrarily high confidence. So even if you
give your godlike AI a paperclip-recognizing deep net, the grounding problem
could still bite you; your AI may learn to take over all economic production,
or it may learn that if it pokes its camera _just like this_, it suddenly
looks like the universe is made of paperclips.
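
The effect is easy to reproduce even on a toy linear "detector": an FGSM-style
perturbation (step each input component by the sign of the gradient, with a
tiny per-component budget) drives the confidence wherever you like. Everything
below is a self-contained illustration I made up, not the setup from any
particular paper:

  import numpy as np

  rng = np.random.default_rng(0)
  d = 1000                                # input dimension (think: pixels)
  w = rng.normal(size=d)                  # weights of a toy linear "paperclip" detector
  x = rng.normal(size=d) / np.sqrt(d)     # innocuous input, scaled so the clean logit is O(1)

  def paperclip_confidence(x):
      # sigmoid of the logit: the detector's P("paperclip")
      return 1.0 / (1.0 + np.exp(-w @ x))

  eps = 0.1                           # tiny, visually negligible per-component change
  x_adv = x + eps * np.sign(w)        # FGSM-style step; the logit's gradient w.r.t. x is w

  print(paperclip_confidence(x))      # some unremarkable value
  print(paperclip_confidence(x_adv))  # ~1.0: "the universe is made of paperclips"

The punchline is that the logit shifts by eps times the L1 norm of the
weights, which grows with dimension, so a per-pixel change too small to see
can swamp the clean signal entirely.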

Which matches the human case, since we certainly dissipate plenty of energy
towards ends orthogonal to what we're supposedly optimizing. Even to the point
of, _ahem_, directly subverting some of the classifiers that are key to our
reproductive cycle.

