The regress argument against Cartesian skepticism (2012) [pdf] (utoronto.ca)
17 points by lainon 6 months ago | 13 comments

The paper argues that, because it is possible to doubt whether you are doubting, and to doubt whether you are doubting that you are doubting, and so on, we face an infinite regress; and because infinite regresses are ugly and inconvenient, we should throw out the whole chain and declare ourselves sure of the world.

Frankly, this is a bad argument. The author mentions that you can solve the regress by simply denying that you can even know your own mental states, but dismisses this on the grounds that it doesn't help reach the desired conclusion. (!!!) The author also goes to the trouble of claiming that some infinite regresses are 'psychologically stable' while others are not; no real argument is provided as to why it matters whether a regress is such or not, except that the author wants to get rid of inconvenient ones but not convenient ones.

The way to attack Cartesian skepticism is to challenge its grounds and assumptions, not to accept its premises. Descartes was right, if you assume his priors. There are many people who do this, but the most systematic and determined one is probably Heidegger. I suppose that people who want to believe in 'pure reason' are not especially likely to go down that route, though.

I'm with you. Replace "the external world exists" with some logical tautology such as De Morgan's Law (not (A and B) <=> (not A) or (not B)). Formalism has all sorts of gotchas, so I'm going to assert that it's possible to be skeptical of De Morgan's Law being a tautology...

...which - as the author describes - implies I can go down that infinite spiral of regressive speculation. But this is not the reason that De Morgan's Law is a tautology.
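Indeed, whether De Morgan's Law is a tautology doesn't depend on anyone's psychological state: it can be checked mechanically by enumerating every truth assignment. A minimal sketch in Python (the variable names and the printed message are illustrative, not from the paper):

```python
from itertools import product

# Exhaustively check De Morgan's Law: not (A and B) <=> (not A) or (not B).
# Two Boolean variables give only four assignments, so brute force suffices.
for a, b in product([False, True], repeat=2):
    assert (not (a and b)) == ((not a) or (not b)), (a, b)

print("De Morgan's Law holds for all four truth assignments")
```

The assertion never fires, which is exactly what "tautology" means for a propositional formula: true under every assignment, regardless of any regress of doubts about the checker.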

Even worse, the author's argument really lets us assert anything, doesn't it? I've read the paper twice and I can't see any protection against using this argument to assert a contradiction.

In short, could your mind be so utterly irrational that you're mistaken about the most basic questions of what you yourself know or think?

It's far from a purely hypothetical scenario. After all, while dreaming we often follow nonsensical or self-contradictory chains of reasoning while thinking they make perfect sense – hence the term "dream logic". There are ways to try to identify that you're dreaming, and you can train yourself to use them more consistently, but they're far from perfect. Sometimes it just won't occur to you to do a dream test, and sometimes you'll fool yourself into thinking you passed the test.

In my opinion, the only real solution is a pragmatic one. You can never be certain that you currently have any degree of rationality, and so you can never be truly certain about anything, period. However, you may as well assume you're rational for the purpose of making decisions. After all, if you're currently 100% irrational, then you're going to make 100% nonsensical decisions, and that's all there is to it. It doesn't really matter what you believe or disbelieve, so it doesn't hurt to falsely believe that you're rational. And there's no point hedging any action on the chance that you're being irrational, because the idea that "if I'm irrational, it's best to take/not take action X" is itself validated by rationality.

That said, there are degrees between complete rationality and complete irrationality. In dreams you're often still partly rational, and you can also get a sort of replica of rationality by training yourself to follow specific thought patterns during times when you are mostly rational (i.e. wakefulness). So it is theoretically possible to hedge based on the possibility that you're dreaming – albeit not very usefully, since making bad decisions in a dream doesn't usually have any serious long-term effects. However, as you narrow things down to more specific possibilities than "I'm dreaming" – scenarios in which you're hypothetically less and less rational – it becomes less and less useful to hedge on those scenarios.

"Does the external world exist, as it so concretely seems?"

I know this is not the direction of the paper (or question) but physics already gives us an astounding 'No!'.

Only if you subscribe to some pretty tortured interpretations of quantum mechanics. QM still describes an objective state of the external world, and exactly how it changes over time.

I don't think that's what the poster meant. I think it was probably more along the lines of how you experience e.g. 'light' vs. what it actually is:

(The vast majority of light is effectively invisible to you.)

Cartesian skepticism always assumes some external entity manipulating your mind. So, at least you can be sure you have a mind and there is an external world.

I find it pretty easy to convince myself the world exists. If you assume you yourself exist and think, and then assume the contrary, that the world out there doesn't exist, you should ask yourself what alcohol and drugs are.

Because if the world out there doesn't exist, these things shouldn't affect the way you think. If something is simulating the world for you, you have to assume that every time you drink, that thing alters the way you think somehow, which means it can access your actual real brain, whatever it is.

Then that brain of yours, even if it's outside the world, must be very similar to all the other brains you see, which are affected similarly by those same substances.

> you should ask yourself what alcohol and drugs are.

A way of inducing an altered state of consciousness, just like sleeping and meditating. This isn't problematic for the "brain in vat" scenario.

If the brain is in a vat, why is it affected by alcohol outside the vat?

Sleeping and meditating are both internal to the brain, but alcohol only becomes internal to the brain if it has some physical access to the brain.

For the brain in a vat theory to work, the computer controlling the simulation would have to monitor the amount of alcohol that is in the bloodstream of the simulated being, and adjust the alcohol in the vat accordingly?

I'm confused.

IF a consciousness was housed in a black box (a brain, but we're ignoring its internals) AND that black box had inputs connected to a computer that can send signals to the black box that it WOULD interpret as signals of sight, sound, touch, hunger, intoxication, etc.

THEN how is one supposed to conclude that they are not just a consciousness in a box, given that they 'feel' drunk after they 'feel' they've ingested alcohol?

There's no need to monitor alcohol levels because the alcohol hasn't been proven to exist!

I think your confusion arises from the fact that this discussion is about two different thought experiments: the brain in a vat [1] (a la The Matrix) and proper solipsism (which implies no real world at all, not even a vat). [2]

[1] https://en.wikipedia.org/wiki/Brain_in_a_vat

[2] https://en.wikipedia.org/wiki/Solipsism

Alcohol is part of the simulation. Don't get hung up on the biologically human brain in a physical vat scenario. That's only a tiny slice of what's being discussed.
