The issue I have with the view from the outside is that it risks leading to a rather anthropomorphic notion of free will, if the criterion boils down to saying that an entity can only have free will if we can't predict its behavior.
I'm tempted to say an entity has free will if it a) has a self-model, b) uses this self-model as a kind of internal homunculus to evaluate decision options, and c) its decisions are for the most part determined by physically internal factors (as opposed to external constraints or publicly available information). It's tempting to add a threshold of complexity, but I don't think there's any objectively correct way to define one.
I don't understand why a self-model would be necessary for free will?
> [...] c) its decisions are for the most part determined by physically internal factors (as opposed to external constraints or publicly available information).
I don't think humans reach that threshold. Though it depends a lot on how you define things.
But as far as I can tell, most of my moment-to-moment decisions are very much coloured by the fact that we have gravity and an atmosphere at comfortable temperatures (external factors), and if you changed that all of a sudden, I would decide and behave very differently.
> It's tempting to add a threshold of complexity, but I don't think there's any objectively correct way to define one.
Your homunculus is one hell of a complexity threshold.