
Some Philosophical Problems from the Standpoint of AI (1969) [pdf] - jpelecanos
http://www-formal.stanford.edu/jmc/mcchay69.pdf
======
forthelove
I'm too dumb to understand any of this stuff re AI, but one thing occurred to
me recently on the philosophical side of tech: When your life is closely
intertwined w/ tech, you begin to almost solely view the world through the
lens of progress and innovation. Progress and innovation often come at the
total expense of momentary awareness and satisfaction. Innovating becomes an
expectation. It's just what comes next, that's it. That's a problem, generally
speaking, IMO.

~~~
miloshadzic
_"Silicon Valley’s amorality problem arises from the blind faith many place
in progress. The narrative of progress provides moral cover to the tech
industry and lulls people into thinking they no longer need to exercise moral
judgment."_

Morality and the Idea of Progress in Silicon Valley
http://berkeleyjournal.org/2015/01/morality-and-the-idea-of-progress-in-silicon-valley/

~~~
nercht12
Interesting article. Thanks for linking!

I think the statement could also be succinctly summarized from a simpleton's
perspective as: "Have a problem? Science and tech will eventually find the
answer." Science and tech keep answering questions, so it gives the false
impression that _every_ problem can be answered with science and tech, which
in turn seems to justify - at least in part - every answer science and tech
give.

~~~
wu-ikkyu
>"Have a problem? Science and tech will eventually find the answer."

I used to believe this myself when I was first getting into tech in school and
reading Kurzweil's The Singularity is Near.

Then I got into the real world and realized the hardest problems to solve are
not technical problems, but _people_ problems.

------
colorint
>However, work on artificial intelligence, especially general intelligence,
will be improved by a clearer idea of what intelligence is. One way is to give
a purely behavioural or black-box definition. In this case we have to say that
a machine is intelligent if it solves certain classes of problems requiring
intelligence in humans, or survives in an intellectually demanding
environment.

What if my expectation is that it be able to decide whether an algorithm, given
some inputs, will halt? Or that it be able to decide whether a proposition is
true within an axiomatic system? Maybe the better question is why Turing himself
was optimistic about intelligent machines.

~~~
yters
Yes, there are many problems that in general are impossible or intractable for
computers, yet humans solve instances of these problems with ease. In fact, it
seems most problems of interest to us fall in this category of generally
impossible or intractable. So, is the mind just by chance stumbling on the
computationally tractable instances of these generally intractable problems,
or is the mind doing something special that computers inherently cannot do?
I've not seen this question addressed anywhere. Instead people assume
materialism and conclude the mind is a computer, which merely begs the
question. There are many more uncomputable than computable functions, so there
is no a priori reason to assume the mind is a computation.

~~~
tensor
> Yes there are many problems that in general are impossible or intractable
> for computers, yet humans solve instances of these problems with ease.

Can you provide examples? Generally when we say a problem is intractable in CS
we are talking about solving it for every instance, not just a particular
instance.

For example, the halting problem is "given an arbitrary program, can you
determine if it will halt?" I see no reason to believe that a human can
determine whether an arbitrary program will halt. In fact, it's trivial to come
up with examples of programs so large that no human could remotely comprehend
what they do, let alone say whether they will halt.

So what are these intractable problems that humans can solve with ease?
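The point doesn't even require a large program: here is a minimal sketch of a tiny loop (the well-known Collatz iteration) whose halting status for all inputs is an open mathematical problem. No human or computer can currently say whether it terminates for every positive integer, even though it is only a few lines long.

```python
def collatz_steps(n):
    """Count the steps for n to reach 1 under the Collatz map.

    Whether this loop terminates for EVERY positive integer n is
    the open Collatz conjecture: it halts for every input ever
    tested, but no proof is known that it always halts.
    """
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(27))  # 111 steps, peaking at 9232 along the way
```

Humans do fine on particular instances like `n = 27`, which is exactly the distinction between deciding an instance and deciding the general problem.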

~~~
igammarays
Example: recognizing and picking up objects, with grace and agility. Simply
calculating the rotational angles required on limbs with multiple (>8) joints
is intractable for a computer, yet we do it all the time.

Another example: writing new sorting algorithms, or writing a program to do a
well-defined task.

------
Emma_Goldman
I like the paper. But it does not have much to do with the traditional canon
of philosophical problems. Thankfully it does nothing more than set out a
conceptual space in which the general theoretical framework of AI can be
analysed. I guess that might be philosophical pragmatism, i.e. finding
answers/language sufficient to one's needs and conditions. But there are no
especially overt second-order epistemological/ontological claims, e.g. such as
figuring out what knowledge is or can be.

------
du_bing
Thanks for posting the old paper. It’s a marvelous one that can help me think
more clearly about intelligence.

