Frankly, this is a bad argument. The author mentions you can solve the regress by simply denying that you can even know your own mental states, but dismisses this on the grounds that it doesn't help reach the desired conclusion. (!!!) The author also goes to the trouble of claiming that some infinite regresses are 'psychologically stable' while others are not; no real argument is provided as to why it matters whether a regress is psychologically stable, except that the author wants to get rid of inconvenient regresses while keeping convenient ones.
The way to attack Cartesian skepticism is on its grounds and assumptions, not to accept the premises. Descartes was right, if you grant his priors. Many people have taken this approach, but the most systematic and determined one is probably Heidegger. I suppose that people who want to believe in 'pure reason' are not especially likely to go down that route, though.
...which - as the author describes - implies I can go down that infinite spiral of regressive speculation. But this is not the reason that De Morgan's Law is a tautology.
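To make the contrast concrete: De Morgan's law is a tautology because it holds under every assignment of truth values, which can be checked mechanically with no appeal to introspective certainty. A minimal sketch in Python (purely illustrative, not from the paper):

```python
from itertools import product

# De Morgan's law: not(A and B) is equivalent to (not A) or (not B).
# Enumerating all four truth assignments shows it holds in every case,
# which is what makes it a tautology -- no regress of "knowing that I
# know" is involved in the check.
for a, b in product([True, False], repeat=2):
    assert (not (a and b)) == ((not a) or (not b))

print("De Morgan's law holds under all truth assignments")
```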
Even worse, the author's argument really lets us assert anything, doesn't it? I've read the paper twice and I can't see any protection against using this argument to assert a contradiction.
It's far from a purely hypothetical scenario. After all, while dreaming we often follow nonsensical or self-contradictory chains of reasoning while thinking they make perfect sense – hence the term "dream logic". There are ways to try to identify that you're dreaming, and you can train yourself to use them more consistently, but they're far from perfect. Sometimes it just won't occur to you to do a dream test, and sometimes you'll fool yourself into thinking you passed the test.
In my opinion, the only real solution is a pragmatic one. You can never be certain that you currently have any degree of rationality, and so you can never be truly certain about anything, period. However, you may as well assume you're rational for the purpose of making decisions. After all, if you're currently 100% irrational, then you're going to make 100% nonsensical decisions, and that's all there is to it. It doesn't really matter what you believe or disbelieve, so it doesn't hurt to falsely believe that you're rational. And there's no point hedging any action on the chance that you're being irrational, because the idea that "if I'm irrational, it's best to take/not take action X" is itself validated by rationality.
That said, there are degrees between complete rationality and complete irrationality. In dreams you're often still partly rational, and you can also get a sort of replica of rationality by training yourself to follow specific thought patterns during times when you are mostly rational (i.e. wakefulness). So it is theoretically possible to hedge based on the possibility that you're dreaming – albeit not very useful, since making bad decisions in a dream doesn't usually have any serious long-term effects. And as you narrow things down to more specific possibilities than "I'm dreaming" – scenarios in which you're hypothetically less and less rational – it becomes less and less useful to hedge on those scenarios.
I know this is not the direction of the paper (or question) but physics already gives us an astounding 'No!'.
Because if the world out there doesn't exist, these things shouldn't affect the way you think. If something is simulating the world for you, you have to assume that every time you drink, that thing alters the way you think somehow, which means it can access your actual real brain, whatever it is.
Then that brain of yours, even if it's outside the world, must be so similar to all the other brains you see, which are affected similarly by those same substances.
A way of inducing an altered state of consciousness, just like sleeping and meditating. This isn't problematic for the "brain in vat" scenario.
Sleeping and meditating are both internal to the brain, but alcohol only becomes internal to the brain if it has some physical access to the brain.
For the brain-in-a-vat theory to work, wouldn't the computer controlling the simulation have to monitor the amount of alcohol in the bloodstream of the simulated being, and adjust the alcohol in the vat accordingly?
IF a consciousness was housed in a black box (a brain, but we're ignoring its internals) AND that black box had inputs connected to a computer that can send signals to the black box that it WOULD interpret as signals of sight, sound, touch, hunger, intoxication, etc.
THEN how is one supposed to conclude that they are not just a consciousness in a box because they 'feel' drunk after they 'feel' they've ingested alcohol?
There's no need to monitor alcohol levels because the alcohol hasn't been proven to exist!