
The Frame Problem (2016) - benbreen
https://plato.stanford.edu/entries/frame-problem/
======
erikpukinskis
Every time I see someone make an illogical prediction about AI, it's because
they don't understand the frame problem.

Immense processing power is of no value if you have no way of grounding and
regrounding it adaptively in reality. That's the job our bodies and our
societies provide, and AIs have no way to do it, except by taking on a human
body and joining human society, at which point an argument can be made that
they're not AIs any more, but human cyborgs.

~~~
yorwba
While I agree that AIs need lots of experience with all kinds of situations to
take them all into account correctly, I don't think that a human body is
necessary for that. It could be a network of surveillance cameras, a fleet of
sensor-stuffed cars, or any other source of large numbers of observations. To
predict the effect of an action, you don't need to be able to execute it
yourself; and even if you can, you don't need to have done it; simply
observing it is enough. There is no reason to think that AIs couldn't learn
similarly through observation.

~~~
erikpukinskis
I don't mean to be rude, but did you read the article? The reason AIs can't
learn just from observation is the frame problem.

You use your body to interpret what you see. Animals who are paralyzed from
birth are blind, not because they have no eyes, but because they have no
embodied experiences to allow them to interpret what they are seeing.

~~~
yorwba
_In spite of these subtleties, a number of solutions to the technical frame
problem now exist that are adequate for logic-based AI research. Although
improvements and extensions continue to be found, it is fair to say that the
dust has settled, and that the frame problem, in its technical guise, is more-
or-less solved (Shanahan 1997; Lifschitz 2015)._

Yes, I read the article.

> Animals who are paralyzed from birth are blind, not because they have no
> eyes, but because they have no embodied experiences to allow them to
> interpret what they are seeing.

That's interesting. Where can I learn more about that?

~~~
erikpukinskis
[https://io9.gizmodo.com/the-seriously-creepy-two-kitten-experiment-1442107174](https://io9.gizmodo.com/the-seriously-creepy-two-kitten-experiment-1442107174)

------
ianbicking
I was playing recently with logical planning [1], and heuristics to determine
what possible actions to explore. A heuristic that I happened upon, that
worked quite well, was to work backward from the desired solution and choose
actions that reduced the requirements on the state of the world.

The result was not optimal plans, but it found plans quite quickly. When I
changed it to find optimal plans (essentially performing a reverse
breadth-first search), it did so, but instead of "finding" a solution it
simply exhausted the space of actions and states. The algorithm stopped
seeming intelligent and merely felt thorough.
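
Here's a minimal sketch of that heuristic in Python, assuming a STRIPS-style
representation with precondition/effect sets; the names and representation
are illustrative, not taken from bitplanning:

    from collections import namedtuple
    
    # STRIPS-style action: precondition and (add-)effect sets. Delete
    # effects are ignored to keep the sketch small.
    Action = namedtuple("Action", ["name", "preconditions", "effects"])
    
    def regress(required, action):
        # Conditions still required if `action` is performed last:
        # whatever it doesn't achieve, plus whatever it needs.
        return (required - action.effects) | action.preconditions
    
    def plan_backward(goal, actions, initial_state, max_depth=50):
        required, plan = set(goal), []
        for _ in range(max_depth):
            if required <= initial_state:
                return list(reversed(plan))   # everything holds initially
            relevant = [a for a in actions if a.effects & required]
            if not relevant:
                return None
            # The heuristic: greedily pick the action that leaves the
            # smallest requirement set on the state of the world.
            best = min(relevant, key=lambda a: len(regress(required, a)))
            plan.append(best)
            required = regress(required, best)
        return None
    
    walk = Action("walk_to_store", frozenset(), frozenset({"at_store"}))
    buy = Action("buy_milk", frozenset({"at_store"}), frozenset({"have_milk"}))
    print([a.name for a in plan_backward({"have_milk"}, [walk, buy], set())])
    # -> ['walk_to_store', 'buy_milk']

As described above, the greedy choice finds a plan quickly but with no
optimality guarantee.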

I wonder if this state-requirement-reducing heuristic is closer to the
problem-solving approach we use: not an approach where we understand the world
thoroughly, but one where we make plans that require the least specificity of
the state of the world.

In part this is because our planning is iterative; given a goal: plan towards
the goal, perform an action, observe the state, plan towards the goal, perform
the next action. Unexpected effects are accounted for... eventually. And the
result is often not optimal.

Also logical planning does not introduce conditionals, even though our plans
always contain conditionals. To use the Yale Shooting Problem [2], we might
construct a higher-order action, like "shoot the turkey" – and implicit in
this action is "if the gun is not loaded, load the gun, then shoot the
turkey". Also "if the turkey has escaped, find the turkey, then shoot the
turkey". If the turkey escapes while you are loading the gun, then perhaps a
better plan would have included securing the turkey before loading the gun,
but this is exactly the sort of mistake that humans make in their planning.
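
As a toy rendering of that macro-action (the world model and names here are
invented for the example):

    # A conditional macro-action in the Yale-shooting spirit: each branch
    # re-establishes a precondition only if it is missing, so the macro
    # places weaker requirements on the initial state.
    
    def find_turkey(world):
        world["turkey_escaped"] = False
    
    def load_gun(world):
        world["gun_loaded"] = True
    
    def fire(world):
        if world["gun_loaded"]:
            world["gun_loaded"] = False
            world["turkey_alive"] = False
    
    def shoot_the_turkey(world):
        if world["turkey_escaped"]:
            find_turkey(world)
        if not world["gun_loaded"]:
            load_gun(world)
        fire(world)   # if the turkey escapes during loading, this ordering
                      # fails -- exactly the planning mistake described above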

With conditionals it's even easier to have actions that reduce the specificity
of the requirements to achieve a goal. The plans will probably not be optimal,
but they are resilient to a world with unintended consequences.

[1] [https://github.com/ianb/bitplanning](https://github.com/ianb/bitplanning)
[2]
[https://en.wikipedia.org/wiki/Yale_shooting_problem](https://en.wikipedia.org/wiki/Yale_shooting_problem)

------
phkahler
This seems really stupid to me. An action has defined effects; there is no
reason to think of all possible nonexistent side effects. Then they allow for
that as a default and point out an apparent exception: what if an object is
moved into a can of paint... This is so incredibly stupid I can't imagine
people take this seriously.

~~~
lisper
> there is no reason to think of all possible nonexistent side effects

But there is. Actually there are two reasons.

The first is that the "non-existent side effects" are only non-existent in
certain circumstances, i.e. an action's effects depend heavily on the context
in which it is executed. Moving X from A to B can have lots of other effects
beyond changing the location of X, depending on the circumstances. These
effects can be _arbitrary_ -- consider for example the case where X is a
switch and A is the OFF position and B is the ON position.
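
As a toy illustration (the domain encoding is invented for the example):

    # The "same" move has extra effects depending on what is moved:
    # flipping a switch from OFF to ON also turns the light on, a
    # consequence the bare move axiom says nothing about.
    
    def move(world, x, a, b):
        world["location"][x] = b
        if x == "light_switch" and a == "OFF" and b == "ON":
            world["light_on"] = True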

The second is that there really is no such thing as a truly non-existent side-
effect because even a null action -- waiting, doing nothing -- results in
change, and sometimes that change turns out to be the best way to achieve your
goal. For example, if you're a farmer whose goal is to water your crops, one
way to do it is to do nothing and wait for rain. But to figure out whether or
not that's what you should do, you have to know an awful lot about how the
world works.

In fact, waiting and doing nothing is often an integral part of many goal-
achieving procedures. Think about, for example, traveling as a passenger on an
airplane. The usual procedure goes something like:

    
    
        Buy a ticket
        Wait
        Go to the airport
        Wait
        Board the plane
        Wait
        Disembark the plane
    

If you leave out the null actions, that procedure fails badly.
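
A toy version of the farmer example makes the point (the dynamics here are
invented for illustration): waiting changes nothing directly, but the world's
own dynamics can do the work.

    import random
    
    def world_step(world):
        # Exogenous dynamics: on any given day it may rain.
        if random.random() < 0.3:
            world["crops_watered"] = True
    
    def wait(world):
        # The null action: no direct effect, but time passes.
        world_step(world)
    
    def irrigate(world):
        world["crops_watered"] = True
        world_step(world)
    
    # Whether waiting is a *good* way to achieve "crops_watered" depends
    # on knowing how the world works -- here, the odds of rain.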

------
2sk21
I'm just happy that somebody thought it worthwhile to post this article here.
With all of the herd running towards ML, it's great to see that not everyone
has forgotten GOFAI-inspired issues!

