>it seems entirely possible (maybe even likely) that what we experience as consciousness is some kind of illusory side-effect of biological processes as opposed to something autonomous and “real”.
I've heard this idea before but I have never been able to make head or tail of it. Consciousness can't be an illusion, because to have an illusion you must already be conscious. Can a rock have illusions?
Well, it entirely depends on how you even define free will.
Btw, Turing machines provide some inspiration for an interesting definition:
Turing (and Gödel) essentially showed that you can't predict what a computer program does: you have to run it even to figure out whether it will halt. (I think that in general, even if you pick some large step count n, you can't predict whether an arbitrary program will halt within n steps without essentially running it anyway.)
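To make that concrete, here's a minimal Python sketch of the usual diagonalization argument (halts and spoiler are made-up names; the whole point is that no general halts predictor can actually be written):

    def halts(program, argument):
        """Hypothetical oracle: True iff program(argument) eventually halts."""
        raise NotImplementedError  # the construction below shows it can't exist in general

    def spoiler(program):
        # Ask the oracle about running `program` on its own source,
        # then do the opposite of whatever it predicts.
        if halts(program, program):
            while True:   # predicted to halt -> loop forever
                pass
        else:
            return        # predicted to loop -> halt immediately

    # spoiler(spoiler) halts if and only if it doesn't halt: a contradiction,
    # so no total halts() can exist. The best you can do is run the program.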
Humans could have free will in the same sense: you can't predict what they will do without actually simulating them. And by an argument implied by Turing in his paper on the Turing test, that simulation would have the same experience the human would have had.
(To go even further: if quantum fluctuations have an impact on human behaviour, you can't even do that simulation 100% accurately, because the no-cloning theorem stops you from copying the relevant quantum state to initialise it.
To be more precise: I'm not saying, like Penrose, that human brains use quantum computing. My much weaker claim is that human brains are likely a chaotic system, so even a very small deviation in starting conditions can quickly lead to large differences in outcome; the toy example below illustrates this.
If you are only interested in approximate predictions, identical twins show that having the same DNA and roughly the same environment already gets you pretty far. Cell-level scans could do even better. But: not perfect.)
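To illustrate the chaos point: even a completely deterministic rule amplifies a tiny deviation in the starting condition very quickly. A toy Python example using the logistic map (just a stand-in for "some chaotic system", not a model of a brain):

    # Two trajectories of the logistic map (r = 3.9, a chaotic regime),
    # started a mere 1e-12 apart.
    x, y = 0.4, 0.4 + 1e-12
    for _ in range(60):
        x = 3.9 * x * (1 - x)
        y = 3.9 * y * (1 - y)
    print(abs(x - y))  # typically on the order of 0.1 to 1 after 60 steps

So a scan that gets the starting state "almost" right still loses predictive power fast.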
> Humans could have free will in the same sense, that you can't predict what they are doing, without actually simulating them.
I think it's a good point, but I would argue it's even more direct than that. Humans themselves can't reliably predict what they are going to do before they do it. That's because any knowledge we have is part of our deliberative decision-making process, so whenever we think we will do X, there is always a possibility that we will use that knowledge to change our mind. In general, you can't feed a machine's predicted output back into its input and have the prediction stay correct, except for a very limited class of fixed points, and we aren't in that class.
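To spell out the fixed-point remark, here's a toy Python sketch (the agents and the coffee/tea choice are made up purely for illustration): a consistent prediction of an agent that gets to see the prediction only exists if the agent's response has a fixed point.

    def contrarian(predicted):
        # Whatever you predict I'll do, I do the other thing.
        return "tea" if predicted == "coffee" else "coffee"

    def stubborn(predicted):
        # I drink coffee no matter what you predict.
        return "coffee"

    for agent in (contrarian, stubborn):
        fixed = [a for a in ("coffee", "tea") if agent(a) == a]
        print(agent.__name__, fixed)

    # contrarian has no fixed point: no self-consistent prediction exists.
    # stubborn has the fixed point "coffee": that prediction survives being fed back.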
So the bottom line is that seen from the inside, our self-model is a necessarily nondeterministic machine. We are epistemically uncertain about our own actions, for good reason, and yet we know that we cause them. This forms the basis of our intuition of free will, but we can't tell this epistemic uncertainty apart from metaphysical uncertainty, hence all the debate about whether free will is "real" or an "illusion". I'd say it's a bit of both: a real thing that we misinterpret.
You are right about the internal model, but I wouldn't dismiss the view from the outside.
I.e. I wouldn't expect humans without free will to be able to predict themselves very well, either. Exactly as you suggest: having a fixed point (or not) doesn't mean you have free will.
The issue I have with the view from the outside is that it risks leading to a rather anthropomorphic notion of free will, if the criterion boils down to saying that an entity can only have free will if we can't predict its behavior.
I'm tempted to say an entity has free will if it a) has a self-model, b) uses this self-model as a kind of internal homunculus to evaluate decision options and c) its decisions are for the most part determined by physically internal factors (as opposed to external constraints or publicly available information). It's tempting to add a threshold of complexity, but I don't think there's any objectively correct way to define one.
I don't understand why a self-model would be necessary for free will?
> [...] c) its decisions are for the most part determined by physically internal factors (as opposed to external constraints or publicly available information).
I don't think humans reach that threshold. Though it depends a lot on how you define things.
But as far as I can tell, most of my second-to-second decisions are very much coloured by the fact that we have gravity and an atmosphere at comfortable temperatures (external factors), and if you changed that all of a sudden, I would decide and behave very differently.
> It's tempting to add a threshold of complexity, but I don't think there's any objectively correct way to define one.
Your homunculus is one hell of a complexity threshold.