Scott Aaronson's Worldview Manager helps uncover inconsistencies in your beliefs (csail.mit.edu)
14 points by bumbledraven on Sept 4, 2009 | 15 comments



I played around with the 'strong AI' section but found it to be internally inconsistent. Maybe it says more about the makers of the Worldview Manager than it does about the visitors? Or maybe I am internally inconsistent :)

According to the test there is an inconsistency between:

A simulation is never equivalent to the real thing. (true)

and

One can know that minds exist purely through empirical observation. (true)

The explanation given for why these are inconsistent is:

Empirical observation will not be able to distinguish between some being and a simulation of that being, if the simulation is good enough. If simulations can never be equivalent to the real thing, then empirical observation is not enough.

The crux here is that the 'good enough' bit modifies the second statement after the fact: no simulation is perfect, but now we have to accept one that is 'good enough' to be taken for perfect.
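
To make that concrete, here is a rough sketch (in Python; mine, not their actual code) of the kind of check a logic-based resolver might be doing. The proposition names and the hidden premise p are my guesses at what the explanation above is appealing to.

    # Not the Worldview Manager's code -- just a toy consistency check.
    from itertools import product

    # s1: "a simulation is never equivalent to the real thing"
    # s2: "one can know that minds exist purely through empirical observation"
    # p : "observation cannot distinguish a mind from a good-enough simulation"

    def consistent(constraints):
        # True if at least one truth assignment satisfies every constraint.
        return any(all(c(*vals) for c in constraints)
                   for vals in product([True, False], repeat=3))

    rule = lambda s1, s2, p: not (s1 and p) or not s2   # (s1 AND p) -> NOT s2

    with_premise = [lambda s1, s2, p: s1,   # I answered 'true'
                    lambda s1, s2, p: s2,   # I answered 'true'
                    lambda s1, s2, p: p,    # the 'good enough' premise, assumed outright
                    rule]
    without_premise = [lambda s1, s2, p: s1,
                       lambda s1, s2, p: s2,
                       rule]

    print(consistent(with_premise))     # False -> a 'tension' is reported
    print(consistent(without_premise))  # True  -> drop the premise and there is none

The contradiction only appears once p is taken as given, which is exactly the after-the-fact modification I'm complaining about.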


I didn't get that far. There was a question about whether souls(!) can affect the world. It could easily be argued that if you don't believe in souls you should answer either 'disagree' or 'neutral', which suggests that we're supposed to treat it as a counterfactual; but in that case we don't know anything about what "soul" means for the purposes of the question.

Basically, this was an interesting idea, but I don't know if it's possible to write questions that reflect a broad enough range of worldviews to actually categorize them. There are probably hundreds of distinct views of "soul", including both people who believe in them and people who don't.


You're right in that your two statements appear to be consistent and their explanation is a bit twisted. But I'd be fascinated to here of how you know, by observation, that there are a plurality (>1) of minds.

The explanation appears consistent with a response that says "A simulation may be observed to be equivalent to the real thing (true)".

Perhaps a simple logic slip.


I think you meant 'hear' :)

If you are self-aware and you are curious about self-awareness, then you can present your environment with stimuli (spoken words, gestures, and so on) to create variations on the Turing test. If the parties appear to think like you do, then the evidence slowly accumulates over time to the point where you will have to assume others are sentient.

Testing rocks, cats and people in this respect will give you different ideas about the sentience of rocks, cats and people (and in the case of the cats, if you should frighten them with your sounds or gestures, it will give you a healthy respect for small creatures with claws).

I believe the people in my environment when they say they have a mind too, and I trust cats to have 'some form of mind' because they seem to be self-motivated like I am. Their motivations are clearly different from mine, but they do seem to think. At some point the evidence pointing in that direction has to be taken to be overwhelming, so we lean towards taking it to be the truth.

No simulation gets in the way of that one.

Now if someone were to slip a simulated intelligence into my environment (we are assuming it as a fact here for the sake of the argument) and I were not able to tell the simulation apart from any of the other entities, then we would have achieved a situation where there is an equivalence between that simulated intelligence and 'the real thing'.

Maybe I'm stretching things here, but it seems to make sense to me.


You're assuming other beings as your starting point; you need to demonstrate first that there is an external world and that the sense data received by your mind are a) trustworthy and b) externally produced. Taking the standard "brains in vats" position, there is no way to show within your system that the apparent actions of [apparently] external beings aren't simply being fed to you.

Assuming a realist position (on the existence of the universe!), the fact that a cat responds in a similar way to a person with a mind proves nothing. If I am startled (assuming I have a mind), then it is not the internally thoughtful part of myself that causes me to jump but a basic response below the level of my thought processes; I can't decide not to jump.

Consider that androids will soon appear to think - or that if you play poker online against a bot but think it a person, then this person [bot] appears to you [from your position, seeing their moves] to be thinking. It is clear that the bot is only apparently thinking; it is simply acting according to a complex algorithm.

Taking your point on "slip[ping] a simulated intelligence" in front of you: you seem to be saying that if the simulation can work, then it is a real mind (a sort of pure Turingism). What if a simpler person is convinced that the intelligence is real, but you are not? Does that mean that the intelligence has a mind? Does it think when interacting with that other person but not with you? I contend that it genuinely thinks in neither situation.

[Yes I meant "hear", I have problems with homophones, usually it's their=there and usually I catch it!]


Here's a group of quizzes with the same idea, but a nicer interface: http://www.philosophersnet.com/games/.


There are a bunch of (what I consider to be) bugs in the topic files at present. (The QM one is particularly horribly broken -- actual bug rather than mere confusion, I think; the strong-AI one is also pretty bad, but here it's that I think the author of the topic file was confused about some things.)

This will become more interesting if it gains the ability to take uncertainty into account more effectively. Unfortunately, I have my doubts about whether their (firmly logic-based) resolution engine can be made to do this without major work.
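
To illustrate what I mean by that: replace agree/disagree answers with credences in [0, 1], and for every implication A -> B the engine knows about, check the coherence condition P(B) >= P(A). A toy sketch (mine, not their engine; the belief names and numbers below are invented):

    def tensions(credence, implications, tol=0.05):
        # If A implies B, coherence requires P(B) >= P(A); flag violations
        # beyond a small tolerance so near-misses aren't reported.
        return [(a, b) for a, b in implications
                if credence[b] < credence[a] - tol]

    credence = {
        "minds_knowable_by_observation_alone": 0.8,
        "observation_tells_real_minds_from_good_simulations": 0.3,
    }
    implications = [("minds_knowable_by_observation_alone",
                     "observation_tells_real_minds_from_good_simulations")]

    print(tensions(credence, implications))
    # -> the implication above is reported as strained,
    #    rather than as a flat-out contradiction

Something like this would report graded strain instead of hard "tensions", but it would presumably mean reworking the resolution logic itself, not just the topic files.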


> The QM one is particularly horribly broken -- actual bug rather than mere confusion, I think

Broken link, right? I reported it to them a few hours ago and they're (or he's) apparently working on it.


Nonono, not just a broken link. Something is confusing the inference engine and it's claiming "tensions" and giving "explanations" for them that don't make the slightest bit of sense. The files that define the "topics" are actually on the web -- they expose their git repository -- and there's one thing in the QM file that looks like a mistake, which might be responsible for the trouble.


It bothers me that there's exactly one political position on the list of available topics, and that Mr. Aaronson seems to spend a fair amount of time dissing it on his blog.

Intentional or not, this is not the kind of message you want to send if you're serious about promoting consistent rational thinking.


As others have pointed out, the questions are somewhat confusingly vague, but what bothered me the most was that it's rather dull if it finds no inconsistencies in your answers --- it's like a baldly philosophical opinion poll.


Very, very slow for me.


Same here, painfully so. Also, I tried the libertarian one and the statements were so vague as to seem useless. Ran out of patience.


I'd like to hide the comments on questions, at least until after I've answered them. I found them to be a distraction.




