Some Philosophical Problems from the Standpoint of AI (1969) [pdf] (stanford.edu)
45 points by _culy on Aug 17, 2017 | 28 comments



I'm too dumb to understand any of this stuff re AI, but one thing occurred to me recently on the philosophical side of tech: when your life is closely intertwined w/ tech, you begin to view the world almost solely through the lens of progress and innovation. Progress and innovation often come at the total expense of momentary awareness and satisfaction. Innovating becomes an expectation. It's just what comes next, that's it. That's a problem, generally speaking, IMO.


"Silicon Valley’s amorality problem arises from the blind faith many place in progress. The narrative of progress provides moral cover to the tech industry and lulls people into thinking they no longer need to exercise moral judgment."

Morality and the Idea of Progress in Silicon Valley http://berkeleyjournal.org/2015/01/morality-and-the-idea-of-...


Interesting article. Thanks for linking!

I think the statement could also be succinctly summarized from a simpleton's perspective as: "Have a problem? Science and tech will eventually find the answer." Science and tech keep answering questions, so it gives the false impression that every problem can be answered with science and tech, which in turn seems to justify - at least in part - every answer science and tech gives.


>"Have a problem? Science and tech will eventually find the answer."

I used to believe this myself when I was first getting into tech in school and reading Kurzweil's The Singularity is Near.

Then I got into the real world and realized the hardest problems to solve are not technical problems, but people problems.


Always a synchronicity of late. I've been thinking in the last few days about how often things that are "morally neutral" can be confused with moral goods. Technological progress is a morally neutral thing; whether a given advance is a good one or not still has to be evaluated (e.g. chemical/biological weapons, etc.). It hit me when someone wrote about diversity being a morally neutral thing. They gave a bizarre example of adding a deranged person or some other outlier to a crowd of folks and recognizing it's more diverse, but certainly not for the better. Something to that effect.


> That's a problem, generally speaking, IMO.

I'm not necessarily disagreeing, but I am curious why you think that is a problem.


If you're innovating for the sake of innovation, then you're stuck in a rat race without an attainable end goal.


Is there anything that isn't a rat race without an attainable end goal?

Even happiness/satisfaction/contentment/enlightenment etc... aren't ultimately attainable.

It's biologically determined that we will always search for the next thing.


Actually I do know people who are generally happy, and people who are generally unhappy. So there's a counterexample (since general happiness appears to be attainable). You may know people like that too.

Regarding the people who are generally happy, "rat race" would not apply to their experience of life.


The idea that life gets better over time is very new, as it coincides with the Industrial Revolution.

Before then, people thought in terms of food as fuel and muscular mechanics. The same principle applied to slaves, livestock, and so on.

After industrialization, people began to think in terms of (typically) chemical fuels and technical mechanics. Problems that would previously have been solved by the introduction of slaves or domesticated animals were instead solved by the use of machines.

Nowadays, in the Information Age, problems are not solved by throwing money or machines at them, but by utilizing the right information.

The tools we make, make us.


It is true that information is playing a greater role. However, economic activity remains a good proxy for energy consumption, even whilst mainstream economics denies it.


Aligned with that is a heavy reliance (built mostly out of educational exposure) on objectivity/objectivism as a core organizing factor of knowledge. This comes with expectations of generalizability, or of shared assumptions/beliefs, that don't always hold or hold only in warped ways.


>However, work on artificial intelligence, especially general intelligence, will be improved by a clearer idea of what intelligence is. One way is to give a purely behavioural or black-box definition. In this case we have to say that a machine is intelligent if it solves certain classes of problems requiring intelligence in humans, or survives in an intellectually demanding environment.

What if my expectation is that it be able to decide whether an algorithm, given some inputs, will halt? Or that it be able to decide whether a proposition is true within an axiomatic system? Maybe the better question is why Turing himself was optimistic about intelligent machines.


Yes, there are many problems that in general are impossible or intractable for computers, yet humans solve instances of these problems with ease. In fact, it seems most problems of interest to us fall into this category of generally impossible or intractable. So, is the mind just by chance stumbling on the computationally tractable instances of these generally intractable problems, or is the mind doing something special that computers inherently cannot do? I've not seen this question addressed anywhere. Instead people assume materialism and conclude the mind is a computer, which merely begs the question. There are many more uncomputable than computable functions, so there is no a priori reason to assume the mind is a computation.


You have to consider that computer science is something we do; it is, in a sense, a way that we talk about a way that we conceptualize our own ability to solve problems. This issue only comes up because people confuse our theories about problem solving with our abilities to solve problems. Or put another way, the halting problem is a disappointment about metaproblems: that we can't model the model. Even more deeply, we encounter a world in which we have these things ready-made; we come into the world with the capability to do things, but because of that, our capability comes before our understanding of our capabilities, and the whole thing becomes a jumble.

This opens up onto philosophical traditions that have mostly been obscured behind modern ideas about us being like computers, but you may want to consider something like Merleau-Ponty's The Phenomenology of Perception (or if you're really strong of heart, Husserl's Crisis of European Sciences and Transcendental Phenomenology).


Phenomenology points out important truths, but it does not do so in a technically interesting way, and it sometimes does so in language that is consistent with materialism, which also doesn't help resolve anything, at least within my brief exposure to the field (e.g. Dreyfus).


> Yes there are many problems that in general are impossible or intractable for computers, yet humans solve instances of these problems with ease.

Can you provide examples? Generally, when we say a problem is intractable in CS, we are talking about solving it for every instance, not just a particular instance.

For example, the halting problem is "given an arbitrary program, can you determine if it will halt?" I see no reason to believe that a human can determine whether any arbitrary program will halt. In fact, it's trivial to come up with examples of programs that are so large that there is no way for a human to remotely comprehend what they do, let alone say whether they will halt.
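
And size isn't even required to make the point. A minimal illustration (a toy example of mine, not something from the paper): whether the loop below terminates for every positive integer n is the open Collatz conjecture, so nobody, human or machine, can currently settle even this three-line program's behaviour on all inputs.

    def collatz(n):
        # Whether this loop terminates for every positive integer n is
        # the open Collatz conjecture: it has halted for every n tried
        # so far, but no one can prove it halts for all of them.
        while n != 1:
            n = 3 * n + 1 if n % 2 else n // 2
        return n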

So what are these intractable problems that humans can solve with ease?


Example: recognizing and picking up objects, with grace and agility. Simply calculating the rotational angles required on limbs with multiple (>8) joints is intractable for a computer, yet we do it all the time.

Another example: writing new sorting algorithms. Writing a program to do a well-defined task.


You can come up with examples where it's infeasible for a programmer to decide halting, but that's not the same thing, because feasibility is just about the amount of time or resources it would take to do something. One can imagine giving an arbitrarily difficult program to an arbitrarily bored programmer, and getting an answer, ten years later, as to whether it'll halt. But that's a pure thought experiment, because nobody would ever actually do it, because it wouldn't be worth doing.

That is, the undecidability of the halting problem hinges on the need to specify a mechanical/formal process for making the decision, which cannot be done.
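
To sketch why no such formal process can exist (this is just the standard diagonalization argument; the decider is passed in as a parameter precisely because no correct one can be written):

    # Suppose, for contradiction, we are handed a purely mechanical
    # decider halts(f) that returns True iff calling f() would terminate.
    def paradox(halts):
        def troublemaker():
            if halts(troublemaker):   # the decider predicts "halts"...
                while True:           # ...so loop forever
                    pass
            return "done"             # the decider predicts "loops", so halt
        return troublemaker

    # For any candidate decider h, calling paradox(h)() halts exactly when
    # h claims it won't, so h is wrong about that program. No mechanical,
    # always-correct decider can exist; nothing in the argument rules out
    # human judgment on particular cases.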


Sure, just limit the types of programs to those that could be understood by an average programmer within 8 hours. This way, a human could solve the halting problem for a limited domain of programs, but there is still no general algorithm which would be able to make the same determination.


The main intractable problem is recognizing and solving the tractable instances. If computers could do this in the general case as well as humans, then we'd not need humans.


> So, is the mind just by chance stumbling on the computationally tractable instances of these generally intractable problems, or is the mind doing something special that computers inherently cannot do?

Approximations. The mind uses approximations - for example, you can use approximate solutions that are fast and computable and still reap 99% of the benefits of a full solution.
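
A toy sketch of that trade-off (my own illustration on a made-up instance, not anything from the thread): the travelling-salesman problem is intractable in general, but a cheap nearest-neighbour heuristic usually lands close to the brute-force optimum on small random instances.

    import itertools, math, random

    random.seed(0)
    cities = [(random.random(), random.random()) for _ in range(8)]

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def tour_length(order):
        # Total length of the closed tour visiting cities in this order.
        return sum(dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
                   for i in range(len(order)))

    # Exact answer: try every tour. Factorial time, hopeless beyond ~12 cities.
    best = min(tour_length((0,) + p) for p in itertools.permutations(range(1, 8)))

    # Heuristic: always walk to the nearest unvisited city. Quadratic time.
    order, unvisited = [0], set(range(1, 8))
    while unvisited:
        nxt = min(unvisited, key=lambda c: dist(cities[order[-1]], cities[c]))
        unvisited.remove(nxt)
        order.append(nxt)

    # The greedy tour is usually within a modest factor of the optimum.
    print(best, tour_length(order))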


Finding good approximation heuristics is not easier.


I don't fully accept the premise of the statement you quote, as I think there are things that humans use intelligence for, but which do not necessarily require intelligence - for example, beating grand masters at chess. I suspect your examples fall into that category, at least if you are referring to deciding specific cases for which the issue is decidable.


Odd way to phrase an interesting question. By your supposed expectation, are heuristics equivalent to intelligence? Do you mean a solution for a subclass of algorithms (an vs. any)? Otherwise I don't see how the question makes sense.


>> Or being able to decide if a proposition is true within an axiomatic system?

You mean like Prolog?


I like the paper. But it does not have much to do with the traditional canon of philosophical problems. Thankfully it does nothing more than set out a conceptual space in which the general theoretical framework of AI can be analysed. I guess that might be philosophical pragmatism, i.e. finding answers/language sufficient to one's needs and conditions. But there are no especially overt second-order epistemological/ontological claims, e.g. figuring out what knowledge is or can be.


Thanks for posting the old paper. It’s a marvelous one that can help me think more clearly about intelligence.



