
The Brain in a Vat Argument - infinity
http://www.iep.utm.edu/brainvat/
======
throwaway_yy2Di
Wow, what a tangle of verbosity. Yes, you could be a brain in a vat. Or you
could be a brain on a biped. Obviously you cannot distinguish between these
two possibilities merely by _thinking_ about them, because your thought
processes are equally compatible with either scenario (by construction, by the
premise of the question). A simulation good enough to be indistinguishable
from reality cannot be distinguished from reality -- that's the definition.

The fun thing about this quack philosophy is that it's falsifiable: in a
couple of decades we'll actually have brains in vats, and kids will play games
where they shuffle a kid's brain around and have him guess whether he's in a
vat or not. And with the higher-end iVats, they'll never be able to tell. And
they'll totally _ridicule_ the tenured philosophy faculty, they'll be so mean
-- what the hell have you been doing with your grant money?

~~~
southpawgirl
The brain in a vat that you predict in a couple of decades could be a brain in
a vat that merely thinks it has put a brain in a vat. This type of assumption
is _not_ falsifiable (like the existence of $deity and so on) and is therefore
open to speculation, each of us relying on our own heuristics and beliefs, but
it is not rock-solid science.

I personally believe that the brain in a vat doesn't pass Occam's razor... but
I have no proof.

~~~
Nursie
Occam's Razor is a good general rule, but you're right, it's not proof.

The related "This is all a simulation" argument is interesting, especially as
computer power increases. If we ever get to the stage where we can simulate
even a simpler sentience in a far smaller universe at a fraction of real
speed, then how do we know that's not exactly what's happening with us...?

~~~
MarkPNeyer
If you believe that we can reach this stage, and that we will create
simulations when we do, then it seems wildly implausible that we are _not_ in
a simulation - because that would imply we'd be the first people anywhere, in
any universe, to do such a thing.

~~~
Nursie
Well yes, as a thought exercise I started idly wondering the other day how,
if/when we reached the stage of development ourselves, we might go about
making contact with the next layer up...

Probably make a decent sci-fi short story. Now if only I had the talent or
patience for writing...

~~~
msherry
[http://qntm.org/responsibility](http://qntm.org/responsibility) is an
interesting go at it, and where I first learned of the concept. His other
fiction is pretty fun, as well.

~~~
southpawgirl
Very nice story indeed :D In the story, the characters talk about nested
levels of simulation and how slightly disheartening it is to know that they
are not in the top layer and therefore not "real".

Afai(!)k, there might as well not be a top layer but infinite nesting -- or,
even more charmingly, endless recursion, each simulation containing not
another layer, but another iteration of itself. This at least would eliminate
the embarrassment of being "less real" than the above layer.

------
rowyourboat
This is the kind of philosophical argument I cannot understand. The question
is essentially: "Given that it is impossible to know X, how could you know
whether X?"

There is nothing to think about. The scenario answers its own question.

~~~
frobozz
Your boiled-down version of the question leads on to the following questions:

* Why is it impossible to know X?

* How can we prove that X is impossible to know?

* How close to knowing X can we get?

* What else can't we know? Is there a model for these kinds of unknowables?

* Given that we can't individually perceive X, can we infer X by other means? Is it valuable to do so?

~~~
nhaehnle
I think the even more interesting question would be:

* Might any of these questions be answered in such a way that the answer ought to influence our future behaviour?

If the answer to that particular question is No, then why bother with the
others? If the answer _seems_ to be No, it makes sense to delegate bothering
about it to a few philosophers in their ivory towers ;-)

------
chiph
Apologies to David Byrne.

    
    
      You may find yourself living as a brain in a vat
      You may find yourself in another biped body
      You may find yourself behind the wheel of a large automobile
      You may find yourself in a beautiful house with a beautiful wife
      You may ask yourself, well, how did I get here?

------
leephillips
The question of whether you are living in a simulation is not merely up for
grabs. There is a fascinating argument here[0] that approaches the problem
probabilistically; the conclusion is that you probably are.

[0][http://people.uncw.edu/guinnc/courses/Spring11/517/Simulatio...](http://people.uncw.edu/guinnc/courses/Spring11/517/Simulation.pdf)

~~~
mcguire
According to Paul Nahin[1], Richard Feynman had a probabilistic argument that
Fermat's Last Theorem was true, which can be dated to before 1963 because
Feynman later wrote that Morgan Ward (who died in 1963) told him that "the
same argument would show that an equation like x^7 + y^13 = z^11 would be
unlikely to have integer solutions..."[2]

[1] Number Crunching, 2011.

[2] The remainder of the quote is, "...but they do, an infinite number of
them!"
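
That claim is easy to verify directly. Here's a quick sanity check in Python
(the particular construction and exponents are my own choice, not from Nahin):
pick x, y, z as powers of two so both sides collapse to a single power of two.

```python
# x^7 + y^13 = z^11 does have integer solutions. With powers of two,
# 2^(7a) + 2^(13b) = 2^(7a + 1) whenever 7a = 13b, so we need
# 7a + 1 to be divisible by 11. a = 91, b = 49 works:
# 7*91 = 13*49 = 637, and 638 = 11 * 58.
x, y, z = 2**91, 2**49, 2**58
print(x**7 + y**13 == z**11)  # True
```

The same trick generates infinitely many solutions, since any t with
91t + 1 divisible by 11 gives another triple of powers of two.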

------
actualdc1
In my experience (as a student of philosophy at a strong university program),
Stanford Encyclopedia of Philosophy is the go-to reference resource for such
articles. Here's their version:

[http://plato.stanford.edu/entries/skepticism-content-externalism/](http://plato.stanford.edu/entries/skepticism-content-externalism/)

~~~
hyperpape
I think it's a slightly better article in this case. But neither one is really
suitable for HN. They both presuppose too much background, I think.

------
mgunes
In "Pandora's Hope", Bruno Latour goes to great lengths to attack the
artificial subject-object and nature-culture separations that are the legacy
of the Cartesian paradigm. It's a highly intriguing book. Here's the foreword,
which deals directly with the brain-in-a-vat:

[http://www.bruno-latour.fr/sites/default/files/70-DO-YOU-BELIEVE-IN-REALITY-GB.pdf](http://www.bruno-latour.fr/sites/default/files/70-DO-YOU-BELIEVE-IN-REALITY-GB.pdf)

------
paternalist
Let me tell you why you're here. You're here because you know something. What
you know you can't explain, but you feel it. You've felt it your entire life,
that there's something wrong with the world. You don't know what it is, but
it's there, like a splinter in your mind, driving you mad.

~~~
anon4
Stop trolling random kids on the internet, Morpheus. Come to bed and

 _ＹＯＵ ＮＥＥＤ ＴＯ ＷＡＫＥ ＵＰ_

------
araybold
My initial impression of Putnam's argument is that it is carefully constructed
to avoid confronting the crux of the matter. It is reminiscent of how
Zeno's 'Achilles and the tortoise' paradox fails to actually consider the time
when Achilles catches up with the tortoise.

------
sambeau
This is a modern updating of Descartes's demon concept from 1641.

[http://en.wikipedia.org/wiki/Evil_demon](http://en.wikipedia.org/wiki/Evil_demon)

~~~
hyperpape
The trope of a Brain In A Vat is an updating of it, yes. But the linked
article is really about Putnam's argument that the thought experiment is
somehow defective.

------
dspeyer
> For there is a good argument to the effect that if metaphysical realism is
> true, then global skepticism is also true, that is, it is possible that all
> of our referential beliefs about the world are false.

But they're not going to tell us what the argument is.

This extremely counter-intuitive claim seems to be the heart of their
argument. I suspect they're using sloppy language to blur the distinction
between knowledge and certainty, but since they never spell it out I don't
know.

------
JulianMorrison
If you define "know" to mean "know the probability of something is exactly
1", then yeah, you can't do that. No amount of Bayesian evidence can raise a
probability from uncertain to infinitely certain.

So what you need to do, is tear up that useless definition of "know" and
replace it with something sensible like "I know X if the evidence for it is
very strong."
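
The point in the first paragraph is easy to make concrete with a toy Bayes
update (a sketch with made-up numbers, just to illustrate):

```python
def update(prior, likelihood_ratio):
    """One Bayes update in odds form; likelihood_ratio = P(E|H) / P(E|~H)."""
    odds = prior / (1.0 - prior)
    odds *= likelihood_ratio          # evidence multiplies the odds
    return odds / (1.0 + odds)        # back to a probability

p = 0.5
for _ in range(10):                   # ten pieces of strong (10:1) evidence
    p = update(p, 10.0)

print(p)                              # very close to 1, yet still < 1
```

Each update multiplies the odds by a finite factor, so the posterior climbs
toward 1 but never reaches it unless the prior was already 1.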

~~~
jumbled
Then how do you know that the evidence is very strong? You'd have to have
strong evidence that the evidence is strong, ad infinitum. That was
essentially Plato's definition of knowledge, "justified true belief." Then a
guy named Gettier came around and showed that you can have strong evidence
supporting a belief, and still be wrong. Then a guy named Nozick[1] came along
and provided his own definition of knowledge, and that brings us up to the
early 1980's.

There's an entire branch of philosophy dedicated to figuring out exactly what
"knowledge" is: epistemology. The "brain in the vat" problem has been
considered in that context[2].

[1][http://www.iep.utm.edu/nozick/#H3](http://www.iep.utm.edu/nozick/#H3)
[2][http://plato.stanford.edu/entries/epistemology/](http://plato.stanford.edu/entries/epistemology/)

~~~
JulianMorrison
[http://en.wikipedia.org/wiki/Solomonoff's_theory_of_inductiv...](http://en.wikipedia.org/wiki/Solomonoff's_theory_of_inductive_inference)

Crudely, you evolve a simplicity prior over every possible universe-predicting
theory according to Bayesian updates on input evidence.
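
A minimal finite sketch of that update rule (real Solomonoff induction ranges
over all computable predictors and is uncomputable; the hypotheses and
complexity scores below are invented for illustration):

```python
# Hypotheses are deterministic bit-sequence generators, weighted by a
# simplicity prior 2^(-complexity in bits).
hypotheses = {
    "all zeros":    (2, lambda i: 0),
    "all ones":     (2, lambda i: 1),
    "alternating":  (3, lambda i: i % 2),
    "ones after 4": (6, lambda i: 1 if i >= 4 else 0),
}

weights = {name: 2.0 ** -c for name, (c, _) in hypotheses.items()}

observed = [0, 1, 0, 1, 0, 1]              # input evidence stream
for i, bit in enumerate(observed):
    for name, (_, predict) in hypotheses.items():
        if predict(i) != bit:              # a wrong prediction falsifies
            weights[name] = 0.0            # a deterministic hypothesis

total = sum(weights.values())
posterior = {name: w / total for name, w in weights.items()}
print(posterior)                           # all mass on "alternating"
```

With probabilistic (rather than deterministic) hypotheses the update would
multiply each weight by the probability it assigned to the observed bit,
instead of zeroing it outright.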

------
LogicalBorg
If I were a brain in a vat, why would the truth or falsehood of my beliefs
matter? I can't interact with the world from the vat.

~~~
dack
What does it mean for something to "matter" anyway? Does interacting with the
world mean that your beliefs now matter? What if your brain-in-a-vat was
hooked up to a computer that would encode your synapses on paper on the other
side of the world? Does that "matter"?

Not trolling, just trying to make sense of what you mean by your sentence.

------
yaddayadda
In related news, the universe is a hologram -
[http://news.ycombinator.com/item?id=6883611](http://news.ycombinator.com/item?id=6883611)

~~~
fennecfoxen
That's not what that means.

"The universe is a hologram" means that the universe's physics is implemented
with a data store that has fewer dimensions than what the universe offers its
contents. Which is interesting, but orthogonal to it being a "simulation".

Holographic-looking phenomena already recognized by science include black
holes: The maximum entropy -- the same as the maximum information content --
of an area of space is achieved when that space is a black hole. And the
entropy of a black hole is proportional to its surface area. Fascinating. Not
really "related" news though. Unless you want to go for poetry, but poetry
mixed with cosmology and quantum physics is a great recipe for
misunderstanding physics.

------
snarfy
On a somewhat similar note, there is the simulation argument:

[http://www.simulation-argument.com/](http://www.simulation-argument.com/)

------
analog31
In my case, the vat contains coffee. ;-)

~~~
infinity
Yes, this could be the case. Or maybe some vats contain the outside-of-the-
vat-world analogue of LSD or other psychotropic substances. Which leads me to
the following questions:

If I am indeed a brain in a vat, how far can I trust my own thought processes
(such as a logical argument)?

And taking this further, could it be that I am only a _part of a brain in a
vat_ where the other part has already been replaced by computer controlled
cyborg implants changing the way information is processed?

The impression that an argument is logically correct and plausible may be a
phantom signal resulting from input into a specialized area in my remaining
brain.

Even the thought that I had a thought may then be wrong. How would it feel
to have the convincing impression that I am having very deep thoughts right
now without ever being able to perceive one single aspect of these thoughts?

------
JoeAltmaier
It doesn't matter how the idea comes to us; we can still argue and come up
with truth.

------
thenerdfiles
[http://www.hist-analytic.com/MooreExternalWorld.pdf](http://www.hist-analytic.com/MooreExternalWorld.pdf)

