If my decisions are not a product of my prior state, then they are not my decisions. The definition of 'me' is my prior state. If my decisions are unpredictable given complete knowledge of my prior state, and the ability to extrapolate it forward, then the decisions do not come from me. If they're not mine, then I have no responsibility for them. Any discussion of my responsibility for my actions must take into account my personal contribution to the decision as a being.
Dualism does not solve this problem. It simply shifts a chunk of a person's state into some non-material constituent, but if that constituent does not have a persistent (though presumably malleable) state, or does not deterministically contribute to the process, again whence comes responsibility?
(I say all of this as a devil's advocate, and nothing more)
Imagine that heaven exists, and is a kind of meta-universe that our reality is a subset of. It has access to a higher order of systems than those within its subset, and they can say things about truth that the reality subset cannot (because of incompleteness; a system sufficiently powerful to say all true things is unable to prove all of them). Things that we cannot write proofs about, that we must accept as axioms (for example, in the space of ethics and morality) may be trivially expressible in the higher-order logic of meta-heaven.
Say now too that the dualistic, non-material constituent, the Soul, exists in this meta-heaven. It has access to this higher-order reasoning, and can make causal decisions according to it. However, to its physical coupling in the reality subset, these decisions will appear to be random and non-causal. They will be the outcome of nothing from within the universe. Manifestation of Free Will and Randomness can be considered to be indistinguishable in this sense, from the perspective of something inside the system that they are operating on.
It still pushes the problem 'up' one level, but who knows what the physics of a meta-heaven looks like. We can't even imagine it, in the way that a computer can't imagine its own halting. Hence all the talk of ineffability in the older dogmas of the world.
We're trapped in our brain, behind a wall of sensory apparatus. There's no reason to assume that our senses provide the complete totality of reality. And if there are aspects of reality we cannot sense, then there is more to reality than we can ever know. Kant called these attributes the "noumenal" world, while what we can see and observe is the "phenomenal" world. However, these are NOT separate areas - just two aspects of the same thing. Not dualism.
Schopenhauer took this to the next step and said that there are aspects to ourselves we can't observe, so part of our selves exists in the noumenal world. The phenomenal world is deterministic, but if we have free will, it must exist as part of the noumenal world.
We, for instance, can't know or measure the precise state of things as they would've been had something different occurred than what actually occurred at an earlier time.
This is especially obvious in the realm of quantum mechanics, where you can't know what you would have measured if you had measured earlier or later than you did (I hope my interpretation as a layperson was correct here). That information just doesn't exist in our reality, but it may exist in some higher order place where all those realities can be observed.
These attributes can't have any effect on the phenomenal world, or they'd be detectable, and by definition, they're not detectable by us. Unless, of course, our free will is part of the noumenal world, and our actions in the phenomenal world are simply one way of viewing our will.
Even randomness or the appearance of it doesn't make any difference. A person's state might very well include a source of randomness, in fact it almost certainly does. See my comments on this elsewhere in the thread.
It's why this kind of system usually is accompanied by a whole meta-set of punishments and rewards as well. Since there's no way to perfectly reason and judge in our system, we have to hope that someone who has access to the higher system will provide the correct judgement, being able to see the total picture of responsibility.
It's also completely untestable, for the very reasons it 'works' as an explanation at all!
If you allow the mutation of the subset reality by a soul in the meta-heaven, then you can have it all operate in a single universe; the soul is capable of adjusting the nondeterminism of particles in the brain, thereby making decisions happen that appear random to us, but are informed by a higher system of reasoning. In the way that a programmer might adjust constants, or a video game player might choose stats.
And just want to reiterate: I'm not trying to advance this explanation! Just fun to think about.
Yes, this _will_ happen versus this will _most likely_ happen.
Free will is the ability to influence the probability that a certain outcome is realized. Your decisions could be predictable as a function of probability.
Your definition of 'me' (your prior state) includes the quantum states of your constituent parts. Just because there is a degree of uncertainty in those states does not mean they are not of 'you'.
The degree of pre-determination is just a matter of scale (akin to classical vs quantum). The deeper you go the less pre-determined something can be.
There are two problems with this definition of free will.
First, it implies that your present self (and future selves) are all entirely subservient to your prior state, and surrounding environment. Ie, the present you has no way of overcoming your past self or your environment. This means that you're essentially sitting in a roller-coaster ride, with no levers to pull or opportunity for change. Sure, your past self made the decision to get on the ride. But your present and future self are all helpless from that point on.
Second, did even your past have any free will? The chain of causation extends all the way back. Your past self at time T-X is itself entirely subservient to your past self at time T-X-1. Extend this all the way back, and you realize that you're subservient to your "self" at the moment of conception. At no point did your consciousness have any ability to change the course of the life that was already plotted for it. If I predict at the moment of your birth, every single thing that will happen in your life and every decision you will make, can you really claim to possess free will?
Hence why people try to cling to randomness and uncertainty. "If the future can be perfectly predicted, people definitely do not have free will => ergo, if the future cannot be perfectly predicted, then maybe we have free will?" I agree with you though that this is shoddy reasoning. Randomness at the quantum level just means that we're passengers in a driverless car that's rolling dice. Randomness that we have no control over does not grant us free will in any way.
If an entity's decision process is not dependent and repeatable based entirely on its internal state and its perceptions of its environment, then it must be dependent on some external random element, and that element not being part of the entity means its choice is not entirely of its own free will.
This argument rejects duality of course, as one's "will" itself could be the external element.
“There is no contradiction between free will and knowing in advance precisely what one will do. If one knows oneself completely then this is the situation. One does not deliberately do the opposite of what one wants.”
"OK, pick something you absolutely believe, that you have believed as long as you can remember and you are absolutely sure about. It doesn't matter what it is and it can be as trivial as you like and change your belief about it."
Just so I'm understanding you correctly: Are you claiming that even if the future can be perfectly predicted, people can still have free will?
At the moment of your conception, if I were to predict with 100% accuracy and confidence, every decision you will ever make in your life, would you still claim to possess free will?
Not OP, but essentially yes.
Things get weird if you use your predictions to influence what I do. But I'd say that takes away my agency, whilst keeping my free-will intact.
In general, those who claim determinism and free-will do not clash are called 'compatibilists'. Often, the argument is kind-of semantic. It starts with the idea that what most people consider 'free-will' is so ill defined we ought to work on the definition. It then follows that, under most definitions, it is very clear whether we have free-will.
Then, since most people feel like we have free-will, the definition that fits with that is chosen.
At least, that is the case for me. Personally, I think the real difficult discussion lies with the concept of 'x is possible' in a deterministic world. Here, I grasp for Bayesian statistics.
Personally, I define free will as having the ability to "change". To overcome both your past self and your environment. Ie, a billiards ball has no free will, because its motion is compelled by its past state, and by the environment around it. Similarly for cars, computers and every other inanimate object. Whereas a human can be said to have free will, because regardless of his past nature, and regardless of the environment around him, he still has the free will to transcend all that and do something wholly "unexpected". To me, that's what separates free will from an automaton. Hence why my definition of free will isn't compatible with determinism.
I appreciate your point though that others might define free will in a wholly different manner.
My argument then is that sure some of my choices are random, mainly because I choose to not use my memories and skills I have learned and hand over the choice to the RNG. That's also often a choice though. See my comment elsewhere on this page on No Country for Old Men.
You talk about unexpectedness. How do you define unexpected in such a way that it is distinguished from random? That's crucial. Truly random behaviour is not intentional and not 'ours'.
One way in which people change their minds is though the assimilation of new information. That's fully compatible with my take on free will. Our prior state includes the decision making apparatus that evaluates and incorporates the new information. Our consistency as a persistent self has not been compromised, it's just been transformed by new information or a new way of thinking about things. We have changed, but in a way that we are 'built' to do. Given omniscience (which is impossible, but for the sake of argument), that transformation would have been predictable in principle and again for me that's not a problem. Someone who knew me well might have predicted that I would change my mind.
That is not a problem. There's a quote in a comment nearby from Gödel that addresses this very well.
I suppose I am a compatibilist, but I object to the use of the term 'compatible'. I think some degree of determinism is _required_ in order to have free will. It's essential. Without it, 'I' am taken out of the chain of responsibility for my actions. If they do not flow from me, from my state, my memories, my decision making processes and if those things are not to some extent consistent, then those actions are not mine.
I'm claiming that non-deterministic action would necessitate that we are at least partially subject to some external, truly random, processes that we don't control, and that in that case that process is what would ultimately cause our actions to be non-deterministic, not the individual themselves.
If my actions are not a deterministic function of the sum of my prior experiences/knowledge and current perception, which at any given instant could be exactly known, then there must be an additional random source of input that is not 'me' that at least in part drives my actions, and thus such actions are not entirely driven by me and my "Free Will".
Imagine a deterministic entity, in an otherwise deterministic universe, rolling non-deterministic dice to decide its actions. Does the non-determinism introduced by those dice make those choices more or less an act of that entity's will?
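As a toy sketch of that thought experiment (all names here are illustrative, not from any real model): one entity's choice is a pure function of its internal state, while the other hands the choice over to external dice that its state plays no part in.

```python
import random

def deterministic_choice(state):
    # The choice follows entirely from the entity's internal state.
    return "act" if state["desire"] > state["restraint"] else "refrain"

def dice_choice(state, rng):
    # The entity delegates the decision to an external source of randomness;
    # its internal state plays no part in the outcome.
    return "act" if rng.random() < 0.5 else "refrain"

state = {"desire": 0.7, "restraint": 0.4}
print(deterministic_choice(state))          # → act (always, for this state)
print(dice_choice(state, random.Random()))  # depends on the dice, not the state
```

The question in the comment above is then: which of these two functions better deserves to be called an exercise of the entity's "will"?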
You are answering someone who clearly expressed that you are your past self. Why would you want to get rid of yourself?
This means that you're essentially sitting in a roller-coaster ride, with no levers to pull or opportunity for change.
Not at all. Human beings can learn, practice and make all kinds of efforts to change ourselves. Sports, arts and science are all fields where humans show how to improve.
You are getting to the no-levers problem by drawing a line between "you" and "what you do", as if you have nothing to do with what you do. Why?
To me, "helpless" in the context of metaphysical free will means that an agent's actions are deterministic from the view of an outside observer. That is all it can ever mean AFAICT. Any sadness/frustration we may feel about this is equivalent to the sadness/frustration one feels for Schrödinger's cat. That is to say, it's there, but it's philosophically incidental.
"Helpless" in the context of physical free will means the normal meaning of helpless in all its glorious ambiguity. It can mean something close to "deterministic" in certain catastrophic situations. But more often it means something like "under the rule of a tyrant," "enslaved," or even "trapped" for whatever reason. In all those cases the sadness/frustration/etc. one feels is part and parcel of the outcome-- it affects whether one decides to fight, acquiesce, cope, etc. Since nobody within physical reality has sufficient knowledge to predict the future, the problem of metaphysical free will doesn't pop up.
I don't think this is true. We know that thought and memory are constrained (accessed) at a physical level - at least to some extent. That is, damage to your physical brain can result in diminished or eliminated access to those things.
Indeed, thought as we know it seems to be a physically mediated process (perhaps not completely defined by our physical bodies, but at least influenced to some extent).
As well, emotion is a physically mediated phenomenon. Even our experience of this existence is physically mediated. Without a functioning body we lack the ability to experience.
So what does that leave you to be? You are, at your basest existence, the ability to choose. A choice engine with a physical layer; a physical memory, physical emotion, the ability to think. But at your foundation, you choose.
You remember, but unless you choose, you are nothing but memory.
Sure, the damage changes you. It becomes a new part of your state prior to making a decision.
>You remember, but unless you choose, you are nothing but memory.
Completely agree, but we don't really know much about the relationship between how we store memories and how we incorporate those into a decision making process. I may be wrong, I don't know much about neuroscience. My take though is that it doesn't matter. All of that is part of our self. Memories, decision making processes, neurochemistry, it's all part of our state, and our decisions are only truly 'ours' to the extent that this state determines or influences them.
I suppose the argument I was trying to make was about what is fundamentally 'you' or 'me' and what could be emergent properties.
I agree that our decisions rest on prior states. Even if we remember nothing of our past, that has an effect on what and how we choose.
I don't see how this is at odds with the prior state view. You are a state machine. As time moves, you have no choice but to choose, and this is entirely dependent on your previous state and your current environment. The physical layer is the choice engine, and it physically must continue to make choices until it is incapacitated or dies.
You are a state machine that must step through its state transitions to the beat of the universe, navigating your small corner of it, enjoying the ride (or not), until you cease to function, and the state machine of the universe runs on without you.
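A minimal sketch of that state-machine picture (the transition rule here is made up purely for illustration): at every tick the next state is a pure function of the prior state and the current input from the environment.

```python
def step(state, env_input):
    # The next state is a pure function of the prior state and the current
    # input from the environment: the machine "must" transition at every tick.
    return (state + env_input) % 4

state = 0
for env_input in [1, 2, 1, 3]:  # the "beat of the universe"
    state = step(state, env_input)
print(state)  # → 3
```

Nothing in this loop stops or skips a transition; the machine steps until the loop (its functioning) ends, which is the sense of "no choice but to choose" above.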
Simply put, current state as an input does not necessitate pre-determination.
I think the point user all2 was making is that if we remove the prior state modifier from the generate state function then the core piece that is left over is "your basest existence".
> [...] if we remove the prior state modifier from the generate state function then the core piece that is left over is "your basest existence".
I think removing the prior state modifier leaves you with nothing. A brain with no prior state is a brain with absolutely no memories, knowledge, or stored procedures. Even if such a brain were possible to construct, plasticity would ensure that unless it was dead, it would immediately begin accumulating prior state with which to make subsequent choices.
This makes sense. There's no choice unless there's a dimension (some state variable) that we can move in. We have stuff that can be state (matter and energy) and a dimension (time) to move in.
In terms of state, my argument is that choice is a core piece of 'me' and 'you'. I don't know how to make an effective argument for the separateness of choice and state, but I think it makes sense to consider them as separate things. We possess state like we possess the space we exist in, in time. You could say that your choices are like driving a car. Navigating streets (or offroad!) is like choosing from possible states.
The prior state idea seems more accurate, but even that (at least in its extreme state) doesn't seem to me to encapsulate the idea of free will because it disallows the idea of a decision being undetermined and free to decide.
I personally think the notion of free will is fallacious because it's poorly defined, or even undefinable. I admit I could be totally wrong about that, but I think lack of predictability leads to an illusion of free will. It's the same illusion as the god illusion, injecting agency as an explanatory mechanism when there is none, either through chaotic processes, true randomness, or epistemological weaknesses.
That doesn't compute. A decision doesn't do deciding. A decider decides decisions. Any phenomenon of "will" necessarily implies contingency on a prior. To ask for non-contingency is to ask for freedom _from_ will.
I certainly think randomness comes into play for sure, but conceptually my personal prior state might include a source of randomness. It could be just one factor, and I might make choices based on my memories and preferences to allow greater or lesser input from that randomness, or I might lack the mental tools to prevent that randomness from manifesting in my actions. At an extreme that might be a form of pathology and then we can talk about the limits of personal responsibility in various circumstances. The idea that my state leads to my decisions still holds though.
I think at some point notions of intervention or change become critical. So, for example, I think implicit in the idea of free will is the idea that an individual can freely choose to change their decision in some repeatability sense, and this change cannot necessarily happen in any other way, such as from outside influence.
Just to make it clear, let's say there's two choices or choice types, A and B. A is "bad" and B is "good," in whatever sense (utilitarian, moral, whatever).
Key to the notion of free will it seems is that one could change, of their own volition, their choice, from A->B or B->A. In practice we can't rewind time or transport to possible worlds in a counterfactual sense, but we do talk about decisions as interchangeable and repeatable. E.g., someone can choose whether or not to take a drug at one point, and then, at a later point, choose again whether or not to take the drug. In one metaphysical sense, those decisions are not the same because circumstances have changed, but we treat them as the same.
I think the idea of free will is that someone could change their decision, to an extent that another person could not. That is, I cannot make you change your choice, only you can.
This is fine at some level, but what if someone wants to change their mind but cannot? E.g., a drug addict or someone trying to lose weight? What about something simpler, like decision under a lapse in attention? How do we define volition or lack thereof?
Or, maybe more importantly, let's say the external individual in question is omniscient. If an omniscient external individual cannot alter your behavior, why would you hold that individual responsible in a free will sense? That is, let's say someone truly wants to change, but an omniscient being could not even help and change them. Why should the non-omniscient individual seeking change be held responsible?
This is all stream-of-consciousness, but I think at some point the notion of free will starts to lead to counterintuitive problems and/or becomes very poorly definable. At some level I suspect it implies not only autonomy but complete agency, which is suspect.
In No Country for Old Men a guy flips a coin to decide if he will kill someone. He chose to flip the coin though - delegating a choice to randomness does not absolve us of responsibility. By doing so he created the framework in which the outcome arises, whatever that outcome is. He made it possible, even likely, and bears responsibility for that by selecting the set of possible outcomes and the degree of randomness. Also he does this many times, so it wasn't a "random choice to randomly choose", it's a consistent repeated pattern of behaviour that directly emerges from his personal ongoing mental state.
I don't go around randomly killing people and that's a consistent pattern that emerges from my persistent ongoing (if evolving) mental state and it _is_ a free choice. The fact that it's pre-determined is fine, because it means it comes from me.
To some extent yes, our choices and scope of action are constrained by our essential nature, some of which is determined before we are even born. However this doesn't absolve us of all accountability. It simply sets up a framework within which we can discuss what accountability is and how we want to manage it.
This is a difficult issue and I don't have all the answers about accountability, I see my take on it as my starting point for the discussion about it not the end. I believe in rehabilitative rather than retributive justice because of this. Also see my reply to fjuerfilis.
I think that morality, love and justice emerge from human nature. I think that I, my self and my consciousness are emergent properties and I'm fine with that. I think that the purpose of human life is chosen for us by the rules of biology, natural selection, game theory and a host of other factors that, seemingly miraculously, lead me to the conclusion that the philosophical Good Life is desirable and largely attainable.
> If you have a prior state that is in part systematic, and part random, is the randomness per se free will?
Let's please replace random with uncertain. If _part_ of an entangled system is uncertain, the _whole_ system is uncertain.
> Does that encapsulate the notion of free will?
free will: the power of acting without the constraint of necessity or fate; the ability to act at one's own discretion.
> That is, I cannot make you change your choice, only you can.
Your free will attempts at influencing me to make a certain choice would seem to entangle my state with yours and perhaps change the probability I will make one choice over another.
> If an omniscient external individual cannot alter your behavior,
'omniscient external individual' is a quite an assumption here.
> Why should the non-omniscient individual seeking change be held responsible?
Free will is the _only_ way to hold the individual responsible.
> but I think at some point the notion of free will starts to lead to counterintuitive problems and/or becomes very poorly definable. At some level I suspect it implies not only autonomy but complete agency, which is suspect.
(I may be misunderstanding you here) Free will over a decision must ultimately collapse to a choice made. At that point it is determined reality and no amount of free will can undo it.
You either took the Red pill or the Blue pill. If the decision is never observed the choice was never made ergo it didn't happen, it is not real.
In the spirit of Spinoza, I actually find bestowing free will upon particles very intuitive. We then have free will because it is an attribute of our fundamental components. If particles have no free will and are pre-determined, we have no free will and are pre-determined. If we have free will, then particles must have free will.
Full disclosure, I am an armchair philosopher with an interest in the cross-section with physics.
I know I'm in the minority, but I object to the idea of responsibility actually, and prefer to think of change in a kind of "neurobehavioral engineering" or transformative justice sense. That might seem pathological or even psychopathic (which is ironic because I'm anything but); I just mean that I think responsibility (like randomness maybe, at least with reference to free will) is sort of a red herring. It's one of the reasons I'm interested in free will issues, because I think the notion of responsibility (outside of a very strict causal sense, as in "this geological formation is responsible for this waterfall") is misguided, and also shifts focus away from change efforts, toward retribution. To me, punishing an individual for a crime out of retribution makes as much sense as punishing a car for breaking down; I'd prefer to see the individual "fixed" in the same way I'd prefer to see my car fixed. The primary obstacle to the former from my perspective is lack of knowledge, which will diminish rapidly with time whether we as a society want it or not.

I think one of the biggest challenges we will face as a society in the next 200 years (assuming we don't disappear or devolve into a dark age) is how to integrate advances in neuroscience and psychology into our sense of justice and responsibility. E.g., if you could change a person completely for the better, is withholding that change unethical? What's the point of retributive justice then? Aren't you just shooting yourself in the foot, societally speaking?
This is all tangential to the paper, but I think issues of free will become critical when you are faced with the possibility of total change in an individual.
My belief in determinism as a driver of free will is predicated entirely on the principle that without that there is no responsibility. Taking out determinism breaks the connection between my self, my persistent state, and my decisions. My choices have to come from me in a deterministic way, or they are not mine. I want to be responsible for my choices.
This is of course a mechanistic interpretation in the sense you describe, so it also leads me away from a retributive justice stance towards a rehabilitative stance even though I strongly believe in responsibility.
If a decision is undetermined, i.e. not determined, how can it be called a decision?
It's fairly clear that the authors are using the light-cone sort of definition, but they discuss this ambiguity towards the end on p.230:
> Some readers may object to our use of the term “free will” to describe the indeterminism of particle responses. Our provocative ascription of free will to elementary particles is deliberate, since our theorem asserts that if experimenters have a certain freedom, then particles have exactly the same kind of freedom. Indeed, it is natural to suppose that this latter freedom is the ultimate explanation of our own.
I don’t see how that conclusion follows from that premise. In fact, the whole free will debate is uninteresting to me exactly because it seems like a dissolvable semantics question regarding what does “me” mean. Is it a totality of some prior physical state? Is it supernatural? Depending on what somebody arbitrarily chooses for that, it seems to more or less force a conclusion about free will. E.g. a supernaturalist might believe my decisions are fully determined by “me” but that “me” is a concept that partially exists outside of the applicability of physical laws.
What distinguishes how we deal with one deterministic system over another? That's what free will is. The debate has progressed quite well over the past century, and most philosophers are Compatibilists like the OP for very good reasons.
"Moral responsibility" is a heuristic for our monkey brains, tapping into evolved ideas of justice, accounting for imperfect prediction of future events, attempting to control human behavior one way or the other. The reason we don't lecture engines about morality is because it is perfectly clear that they obey only one kingdom of rules, the physical.
Humans also must obey physical rules, but there are so many layers between base physics and human behavior that we speak in very fuzzy heuristic terms about morality. This does not mean free will is anywhere to be found in one of those intermediate layers, just that it's too complicated a machine for us to analyze like an engine, so we resort to using the levers and knobs that evolution gave us. IOW, free will is a "god of the gaps", only existing while we still lack the knowledge to better understand the layers between our gross beliefs/actions and base physical processes.
The problem here is that you're carrying some preconceptions of what free will means into this debate. The whole point of the free will debate is to define free will and figure out what it means.
Think of it this way, you can define a term in two ways: via denotative or connotative definitions. Incompatibilists push a connotative definition of free will, where they say, "free will must have such and such properties, oh and look! humans don't have those properties, therefore people don't have free will". They have been wrong every single time they've claimed a property was necessary.
By contrast, most people typically reason with denotative definitions saying, "that thing I do when I make a choice free from coercion, that's free will". And then we explore precisely what this means and what properties this requires.
As for it being just a heuristic, I'm not sure that's accurate. Take a sorting algorithm and point out precisely which step actually does the sorting. Well that's nonsense isn't it? Every step is essential to a sorting algorithm, and removing any single step breaks the sorting property.
Your god of the gaps argument is essentially trying to do the same thing with free will: you can't point to any specific brain state and say that's free will in action, but the ensemble is what produces what we recognize as free will. All brain states we see when a decision is being made are essential to making a free choice.
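The sorting analogy above can be made concrete with an ordinary bubble sort (any sorting algorithm would do): every step is essential, yet no individual step can be pointed to as "the" sorting.

```python
def bubble_sort(xs):
    xs = list(xs)  # work on a copy
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            # No single comparison-and-swap "does the sorting";
            # the sorted order emerges only from the ensemble of steps.
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

print(bubble_sort([3, 1, 2]))  # → [1, 2, 3]
```

Removing any one comparison breaks the sorted-output property, even though that comparison alone sorts nothing; the claim above is that free choice relates to individual brain states in the same way.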
As cars become more than just human-controlled engines (self-driving) they are likely to be called on to make arguably moral judgements. They are being given rules to follow. For now these systems are still largely deterministic in a comprehensible way, but with deep learning neural networks playing an increasing role we might begin to lose the ability to fully and deterministically anticipate how they will behave. Not because it isn't possible in principle, but because it will become too complex to do in practice. Then what do we do?
if (!is_fully_deterministic(universe) && !is_practically_predictable(universe)) { act_as_if(free_will); }
However, even in a fully deterministic universe, or a mostly deterministic one, we will appear to have free will insofar as we cannot read minds and cannot predict the universe's next states at any scale, with any accuracy, in any amount of time that would be practical for determining, e.g., whether someone is at fault for their actions. And it's pretty clear that this will always be so.
The question of whether we have free will, then, is sort of moot. From a social perspective we have little choice but to act as if we do have free will.
Exactly! We don't hold babies or the insane responsible for their choices precisely because they don't make choices for well-founded reasons, they simply act somewhat randomly and so clearly don't have a well-founded, deterministic decision procedure. Holding people responsible is how we refine the deterministic decision procedure of a functional brain.
I think it's important people understand that the term "free will" as used in physics simply isn't the "free will" used in law, philosophy, or by lay people. Physicists are talking about a type of freedom that lets experimenters set the parameters of an experiment independently of the system they're measuring.
>> again whence comes responsibility?
The only reason I can think of to assign responsibility is to respond "correctly", particularly in the form of reward or punishment. But if a person is not responsible for their actions, then neither is anyone responding to them. "Responsibility" comes from some sense of morality, which doesn't really exist without free will, does it?
Conversely, is the judgement that another entity is free-willed no more than an admission that one is unable to predict that other's actions, though that other can predict its own?
For example you wouldn't say a plant has free will. It just sits where it is planted, collecting sunlight, water, CO2, and minerals, and grows as specified in its genome. It doesn't deviate from the evolutionary strategy of its species.
You probably wouldn't say an ant has free will. It can move around, sure, but it seemingly does so according to a complex set of rules that are consistently embodied by all members of its species.
Does a dog have free will? I think we're getting closer to the line. A dog can clearly make choices against the simplest evolutionary strategy. A dog needs food to eat, and it's driven to prefer certain foods... but some dogs can choose to let a tasty treat sit on the table (or even on their nose!) without consuming it. Some dogs seem to act in a way we would call altruistic, endangering themselves to save a human child, for example.
And of course, humans are capable of making all sorts of decisions that seem counter to a simple conception of an evolutionary strategy. On the positive side, choosing to forego reproducing in favor of pursuing some social goal, like a career. On the negative side, choosing to kill someone despite knowing they could be caught and punished (and therefore not reproduce).
And I think the opposite holds too--when we imagine a being without free will, the defining characteristic seems to be that it always follows a rigid set of rules, with no possibility of deviation.
I keep referring to the "concept" of a simple evolutionary strategy; does that mean this is not a rigorous definition of free will? Well I would say that free will is itself a human concept, so I think it's valid to explore its meaning in the context of other human concepts.
I think it's entirely possible that humans are actually deterministic and predictable, but the rules of our behavior are so complex that we could never know or apply them in a practical way, the way we know and use the rules that govern plants, for instance.
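The "deterministic but practically unknowable" idea has a well-known mathematical counterpart: chaotic systems. A minimal sketch using the logistic map (not a claim about brains, just an analogy): the rule is fully deterministic, yet trajectories from almost-identical initial states become useless for prediction almost immediately.

```python
# The logistic map at r = 4: deterministic, yet chaotic. The tiniest
# imprecision in knowledge of the initial state is amplified each step,
# so long-range prediction is impossible in practice.
def logistic(x, r=4.0):
    return r * x * (1 - x)

x, y = 0.3, 0.3 + 1e-12   # two "prior states" differing by one part in 10^12
for _ in range(60):
    x, y = logistic(x), logistic(y)
print(abs(x - y))  # the trajectories have long since decorrelated
```

Replaying from exactly the same state reproduces exactly the same trajectory, so this is determinism without practical predictability.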
Whereas myself... I know people in general can have a variety of responses to any given stimulus. But I know exactly what response I will have to a stimulus that has just occurred. From my point of view, I am very free-willed! After all, I'm "choosing" one out of a multitude of possible responses to this stimulus. But really, my "choice" is determined by physics, and the illusion of free will is exactly that I -- as the sole entity with the capability to observe my own internal state -- can predict my response, and no-one else can.
I think also, an additional factor is that of volition: the actions I take should somehow derive from, and act to fulfill, what I consider to be my (chemically deterministically determined) desires. Otherwise I might perceive "someone else" to be in control of my person.
But, if we were to speed your brain up a million times, when you look at my brain, it might seem a lot more predictable - much more like an inert machine than a dynamic animal. So there's a kind of relativism to whether one subject perceives another subject as "alive" or free-willed, given their own context.
Causal chains are required for free will to make sense, but causal chains do not require robust pre-determination. We regularly simulate causal chains that make narrative sense and are predictive, but only within complexity-limited bounds, without the infinite precision needed for robust pre-determination.
How would you say pre-determined actions and free will are compatible? I would say they are not (and indeed randomness doesn't really help). In contrast, I would claim that the only sensible definition of free will is based on dualism: many people seem to believe there's some inherent aspect of them that is separate from the physical world and is the source of their self and will. If, like me, you don't believe in dualism, then the conclusion is that we don't have free will.
It is an incredibly fascinating debate.
In classical physics a closed system is purely determined by its initial state. So considering the universe as such a system, everything that is happening, including me typing this, is a deterministic and predictable function of the state of the universe at the big bang, and thus free will is an illusion.
In that sense, actual free will requires unpredictability in the physical world because free will is unpredictable behaviour. That's where quantum physics becomes crucial.
What does it mean for "will" to be "free"? If you could rewind time around one of your decisions, and replay it, you'd make the same decision every time, unless there was a random element involved, because the state would be the same every time.
And if there's a random element involved, it's not really your will, is it?
Most, if not all, real-world decisions take a non-zero uncertainty about the outcome into account.
You look at the roulette board. You bet on red.
We rewind time.
You still bet on red. The state is identical.
Did _you_ come up with that decision? Of course you did, who else could have?
My claim is not that there's no "you".
It's that "free will" is meaningless unless we use a weird interpretation of terminology.
If you'd do the same thing every time, there's no 'freedom'.
If you do different things, but it's based on some sort of tiebreaker RNG, there's no 'will'.
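The two horns can be sketched directly (all names here are illustrative, nothing more): a decision that is a pure function of prior state replays identically, and any variation on replay has to be injected by a randomness source that isn't the agent.

```python
import random

# Horn 1: "will" as a pure function of prior state -> replay changes nothing.
def decide(state):
    return "red" if sum(map(ord, state)) % 2 == 0 else "black"

state = "complete prior state of the bettor"
assert decide(state) == decide(state)  # rewind the tape: same state, same bet

# Horn 2: add a tiebreaker RNG -> outcomes can differ, but the variation is
# entirely attributable to the seed, not to anything we'd call the agent.
def decide_with_tiebreak(state, seed):
    return random.Random(seed).choice(["red", "black"])

assert decide_with_tiebreak(state, seed=1) == decide_with_tiebreak(state, seed=1)
# Only a *different* seed -- an input from outside the replayed state --
# can produce a different bet.
```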
Note that there could be random elements in my mental makeup. i.e. my mental state might include input from an RNG, but it's still part of me and as long as it's not the only input into any of my choices, we can still talk about responsibility.
See my comment about No Country for Old Men on choosing to use randomness.
The point you seem to be making:
A) If I make the same choice every time, there is no free will.
B) If I make different choices every time, there is no free will? Or perhaps even there is no "me"?
Free will could entail the will to choose the same thing every time AND the will to choose something different.
If we can't accept that then perhaps an argument could be made that there is no such thing as "you".
From the paper:

> Some readers may object to our use of the term "free will" to describe the indeterminism of particle responses. Our provocative ascription of free will to elementary particles is deliberate, since our theorem asserts that if experimenters have a certain freedom, then particles have exactly the same kind of freedom. Indeed, it is natural to suppose that this latter freedom is the ultimate explanation of our own.

> Although, as we show in , determinism may formally be shown to be consistent, there is no longer any evidence that supports it, in view of the fact that classical physics has been superseded by quantum mechanics, a non-deterministic theory.
Quantum physics doesn't allow for free will in an otherwise deterministic universe, any more than hooking up a RNG to a computer allows for the computer to have free will.
This is just scientism if you ask me.
I will almost certainly be wrong about whatever it is. However I may have more updates on how wrong and what it is I am wrong about, in maybe a decade or so ;)
Garbage in, garbage out. It isn't clarified whether an experimenter in a nondegenerate superposition of states (will choose x, will choose y, will choose z) is considered to have "free will". If you call that free will, then the result that the particle can end up in a superposition of states is not novel. If you don't call that free will, then the assumption is false and the theorem vacuous.
It's a conspiratorial line of thought on a par with Descartes' evil demon, but in a rigorous proof you have to rule these sorts of things out.
The axiom comes into play at the top-right of p.229:
> ...since by MIN the response of b cannot vary with x, y, z, θ^G_0 is a function just of w
Assumption: free will is defined by unpredictability and indeterminacy. (Actually not a given. This is hotly debated, to put it mildly.)
Therefore if a physical system behaves in an intrinsically unpredictable way (quantum systems do...) it is showing evidence of free will.
If you start from the premise that the universe and everything in it is sentient (etc, etc) this makes perfect sense. Of a sort.
But it's not in any way a proof of that premise.
In fact it doesn't even come close to being a proof. All it does is take the long way around proving that constrained unpredictable systems are in fact unpredictable but constrained - something that's proven by Kochen-Specker anyway.
Then it pulls the free will rabbit out of the hat and says "Of course! This is just like human free will!"
I'm going to go with "no" on this one.
The irony is that KS is usually taken as a proof that quantum systems aren't just indeterminate, they're essentially partially unknowable. There is no possible one-to-one mapping between states and observables.
If particle free will existed there would either be a tighter constraint on the possible mappings, which would disprove KS, or the free will would be indistinguishable from quantum randomness, which makes it an untestable metaphysical epiphenomenon.
Either way it doesn't add anything to our understanding.
Of course if someone designs an experiment where particles use observables to signal their consciousness to human experimenters, I'll change my mind about that.
There is a great conceptual distance between the self-assertion popularized by Descartes and the theories and observations of physics. Science, even cosmology, has a very long way to go before its methods begin to answer the questions of philosophy.
And does QM indeterminism really play a role on a neuronal level? This paper by Tegmark argues it doesn't - https://arxiv.org/abs/quant-ph/9907009
The theory seems to be developed within the Copenhagen Interpretation framework of metaphysics, and I wonder if it is compatible with Many Worlds.
Why would we evolve this hallucination that we have free will, if in fact we fundamentally don't? It seems like a lot of effort and metabolic energy on a moment-by-moment basis to maintain the illusion. I guess you could make an argument we're in some sort of simulation the maker of which required this illusion to be present for reasons of their own.
But the conscious awareness of our moment-by-moment availability of choice is one of the most difficult things to deny, even if you're incredibly skeptical about everything else you can observe through the senses (à la Descartes).
As a close analogy, we've all said something like, "My car wants to pull to the right." Nobody claims the car has free will, but "wanting to pull to the right" is a useful way to both communicate and to think about it. What is it in the car's makeup that causes it to want to pull to the right?
There are plenty other examples where people have false beliefs yet still cling to them. If you ask drivers if they are better or worse than average, about 10% will say worse. My point is, having false beliefs doesn't always exert a strong evolutionary pressure.
Free will is something that happens OVER TIME.
In other words, you have free will to the extent that your previous decisions allowed you to survive, which was in turn based on other previous decisions; you are constantly accumulating and changing "code in your program," so to speak, because you can reflect on the results of your pre-defined behavior.
You both open the boxes and describe what you see. "Oh, my half has the stalk." "Oh, mine doesn't." "I see 5 pips." "So do I." There are a bunch of correlated (and anti-correlated) observations about the two halves.
Nothing surprising about this.
The quantum equivalent seems to work the same way at first glance. However on closer inspection there are some weird correlations that suggest that:
* The apple wasn't cut in advance, but the two halves were created and put in the box just before they were opened.
* Or maybe the apple was cut in advance, but somehow the person cutting knew what observations I would make ('I have the stalk').
* Or maybe the boxes were never separated but remained paired in 5-d space and only on opening did the apple split.
* Or maybe I messaged the apple-cutter back-in-time to say what I would do.
* Or something else.
My box is so remote from the other that relativity (and practicality) surely means I'm causally independent of the other box. And I got to choose my observations freely; nothing influenced me. And yet we know that one, or both, of these can't be true.
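The "weird correlations" can be made concrete with the standard CHSH quantity. A sketch under textbook assumptions: the quantum singlet predicts E(a, b) = -cos(a - b), while the "pre-cut apple" is modeled by a simple local hidden-variable scheme of my own construction (the hidden variable is fixed when the apple is cut).

```python
import math, random

def chsh(E, a, a2, b, b2):
    """CHSH combination; any local hidden-variable model keeps this <= 2."""
    return abs(E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2))

# Quantum prediction for the entangled "apple halves" (spin singlet).
def quantum_E(a, b):
    return -math.cos(a - b)

# "Pre-cut apple": hidden variable lam is fixed at the source, and each
# box's answer depends only on its own setting and lam.
def classical_E(a, b, trials=200_000, rng=random.Random(0)):
    total = 0
    for _ in range(trials):
        lam = rng.uniform(0, 2 * math.pi)
        A = 1 if math.cos(a - lam) >= 0 else -1
        B = -1 if math.cos(b - lam) >= 0 else 1  # anti-correlated halves
        total += A * B
    return total / trials

settings = (0.0, math.pi / 2, math.pi / 4, -math.pi / 4)
print(chsh(quantum_E, *settings))    # 2*sqrt(2) ~ 2.83: over the bound
print(chsh(classical_E, *settings))  # ~2.0: at the classical limit
```

The pre-cut model tops out at 2 no matter how the cutting rule is chosen; the quantum halves exceed it, which is why at least one of the innocuous-sounding assumptions above has to give.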
The mention of free-will in the paper is a bit cheeky, really.
IIRC, in the literature 'parameter-independence' and 'outcome-independence' have been used to name the sorts of factors that feel bad to violate (did I have a free choice about what to observe?) versus the supposedly more innocuous quantum magic.
That about sums it up. I think (and I could be very wrong) that the last sentence refers to a particle's lack of dependence on previous state; that for a measurement, only the present state will inform the measure.
That seems to imply something about the idea of 'free will'; that is, 'free will' in this context seems to refer to an independence from prior states. Loosely, we can choose to step out of the path of our past's falling dominoes and change the course of our lives. Or something like that.
I am open to the possibility that the answer is never.