
I think you meant 'hear' :)

If you are self-aware and curious about self-awareness, then you can present your environment with stimuli (spoken words, gestures, and so on) to create variations on the Turing test. If the parties appear to think the way you do, the evidence slowly accumulates over time to the point where you have to assume others are sentient.

Testing rocks, cats and people in this respect will give you different ideas about the sentience of rocks, cats and people (and in the case of the cats, frightening them with your sounds or gestures will give you a healthy respect for small creatures with claws).

I believe the people in my environment when they say they have a mind too, and I trust cats to have 'some form of mind' because they seem to be self-motivated like I am. Their motivations are clearly different from mine, but they do seem to think. At some point the evidence pointing in that direction has to be taken as overwhelming, so we lean towards taking it to be the truth.

No simulation gets in the way of that one.

Now, if someone were to slip a simulated intelligence into my environment (we are assuming this as fact for the sake of the argument) and I were not able to tell the simulation apart from any of the other entities, then we would have reached a situation where that simulated intelligence is equivalent to 'the real thing'.

Maybe I'm stretching things here, but it seems to make sense to me.




You're assuming other beings as your starting point; you need to demonstrate first that there is an external world and that the sense data received by your mind are a) trustworthy and b) externally produced. Taking the standard "brain in a vat" position, there is no way to show, within your system, that the apparent actions of [apparently] external beings aren't simply being fed to you.

Assuming a realist position (on the existence of the universe!), the fact that a cat responds in a similar way to a person with a mind proves nothing. If I am startled (assuming I have a mind), it is not the internally thoughtful part of myself that causes me to jump; it is a basic response below the level of my thought processes. I can't decide not to jump.

Consider that androids will soon appear to think - or that if you play poker online against a bot but believe it to be a person, then that person [bot] appears to you [from your position, seeing their moves] to be thinking. It is clear that the bot is only apparently thinking; it is simply acting according to a complex algorithm.

Taking your point on "slip[ping] a simulated intelligence" in front of you: you seem to be saying that if the simulation can work then it is a real mind (a sort of pure Turingism). What if a simpler person is convinced that the intelligence is real, but you are not? Does that mean the intelligence has a mind? Does it think when interacting with that other person but not with you? I contend that it genuinely thinks in neither situation.

[Yes I meant "hear", I have problems with homophones, usually it's their=there and usually I catch it!]




