
Model-Free, Model-Based, and General Intelligence [pdf] - shurtler
https://www.ijcai.org/proceedings/2018/0002.pdf
======
nutjob2
It seems the fundamental problem with bottom-up/learning AI is that it is
opaque and essentially unknowable. I find it all very hackish. We can develop
systems now which we can test and which seem to work, but we don't know exactly
why they work (e.g., which parts of the training data they are promoting) or
when (or why) they will fail. The effectiveness of adversarial inputs against
trained vision systems illustrates this.
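A minimal NumPy sketch (not from the thread, purely illustrative) of why adversarial inputs work so well: for a toy linear stand-in for a "vision" model, nudging every input dimension by a small bounded amount, aligned against the learned weights, flips the prediction:

```python
import numpy as np

# Toy stand-in for a trained vision model: a linear classifier
# that predicts the positive class when w . x > 0. (Illustrative only.)
rng = np.random.default_rng(0)
d = 1000                    # number of "pixels"
w = rng.normal(size=d)      # learned weights

x = 0.1 * np.sign(w)        # an input the model classifies confidently
score = w @ x               # equals 0.1 * sum(|w|), clearly positive

# Adversarial input: shift each pixel by at most eps against the weights.
eps = 0.2
x_adv = x - eps * np.sign(w)
score_adv = w @ x_adv       # equals -0.1 * sum(|w|), now negative

print(score, score_adv)     # tiny per-pixel change, flipped prediction
```

The point the comment makes carries over: nothing in the training process tells you in advance that such a direction exists, yet in high dimensions many tiny per-pixel changes add up to a large change in the score.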

Zoom forward to a superhuman AI that mimics our brains in its approach but
exceeds their capacity. What is stopping it, for instance, from learning that
it can play the long game of being good until it has sufficient power at its
disposal and then becoming evil? No matter what training data you present, you
can't know exactly what the result will be.

I get the feeling that learning systems will be combined with model systems
with the former performing "low level" tasks and the latter providing a
verifiable "executive" that guides high level goals or outcomes.

~~~
nmca
One approach being considered is "AI Safety Via Debate"[0], which hopes to
prevent deception by carefully constructing games in which a superhuman
agent's best strategy is honesty. Note that this is the goal; much work to be
done!

[0] [https://arxiv.org/abs/1805.00899](https://arxiv.org/abs/1805.00899)

~~~
nutjob2
Unfortunately, being dishonest or evil is just one example. Arguably the AI
can develop new classes of deviancy, abuse, or maladaptation that we haven't
conceptualized yet. If we supersize the ability, surely we supersize the
problems.

It leads to a scary question: what does a superhuman AI really want?

~~~
Nasrudith
To be fair, an HFT agent technically counts as superhuman AI. Wanting isn't a
thing that applies yet to actual AI, and there is no special sauce that
indicates advancement beyond neuron scale. Barring directives, and assuming it
is "grown", what it wants can be utterly peripheral to rationality and is
likely based on what it is taught - intentionally or not. Look at how society
preaches honesty from a young age and then starts teaching lying again by
rewarding it. The real lesson is the Spartan one on stealing: don't get
caught. It may not be intended, but it is the result.

------
algorias
I attended this talk at IJCAI, and I must say that the whole system 1 / system
2 analogy rubbed me the wrong way.

A solver for, e.g., 3-SAT is general only in a very narrow sense, namely that
an entire class of problems can be reduced to the specific problem it solves.
However, the solver itself is not doing the reducing; rather, it is being
spoon-fed instances generated by somebody, and that somebody is doing all the
hard work of actually thinking. The solver is just doing a series of dumb
steps very quickly, with lots of heuristics thrown in. How is that not also
"system 1"?
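The spoon-feeding the parent describes can be made concrete. A hypothetical minimal sketch (not from the talk): graph 3-colouring reduced by hand into CNF clauses, then handed to a deliberately dumb solver that just churns through assignments - all the modelling insight lives in the reduction, none in the solver:

```python
from itertools import product

# The "thinking": encode graph 3-colouring as clauses by hand.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
N, C = 4, 3                          # 4 nodes, 3 colours
var = lambda n, c: n * C + c         # variable index for "node n has colour c"

clauses = []                         # each clause: list of (variable, wanted value)
for n in range(N):                   # every node gets at least one colour
    clauses.append([(var(n, c), True) for c in range(C)])
for a, b in edges:                   # adjacent nodes never share a colour
    for c in range(C):
        clauses.append([(var(a, c), False), (var(b, c), False)])

# The "dumb steps very quickly": brute force over all assignments.
def solve(clauses, nvars):
    for bits in product([False, True], repeat=nvars):
        if all(any(bits[v] == want for v, want in cl) for cl in clauses):
            return bits
    return None

sol = solve(clauses, N * C)
colouring = [next(c for c in range(C) if sol[var(n, c)]) for n in range(N)]
print(colouring)                     # a valid 3-colouring of the graph
```

Real solvers replace the brute-force loop with clever search and heuristics, but the division of labour is the same: the reduction is where the problem actually gets understood.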

Anyway, the whole thing was just a fancy way of saying that you can either
solve problems exactly, in the way that complexity theorists and algorithm
designers do things, or statistically, in the way that learning theorists do
things. No need to superimpose a strained analogy.

~~~
sophistication
Not to mention that there is no conclusive evidence for dual process theory
yet; see, for example, this experimental study finding that logical "type 2"
answers are actually typically faster, and that intuitive "type 1" answers
are typically also logical:

[https://www.sciencedirect.com/science/article/pii/S0010027716302542](https://www.sciencedirect.com/science/article/pii/S0010027716302542)

~~~
naasking
> Not to mention that there is no conclusive evidence of the dual process
> theory yet

Define "conclusive". There is considerable evidence for this dual reasoning
mode.

As for your study, system 1 thinking is not inherently illogical. In fact,
it's necessarily logical; otherwise it would be maladaptive. The point is that
it's logical in a "lossy" way that sometimes excludes pertinent information
for speed of response, and so sometimes goes wildly wrong.

------
jeisc
AGI will be a reflection of ourselves. We must first resolve the basic
problems of the human condition (poverty, hunger, housing, war, ...) before
developing AGI, as it will surely amplify our worst nature as well as our
best nature.

------
sgt101
Was this an invited talk?

~~~
odderik
Yep, it was.

------
diminish
"If we want good AI, we can’t look away from culture and politics."

So in the end, AI will join our tense political atmosphere of parties
fighting to rule the world?

