The Strong Free Will Theorem (2009) [pdf] (ams.org)
110 points by lainon 35 days ago | 114 comments



I really think this, along with much philosophical musing about free will, is entirely missing the point. Pre-determination is essential to free will. You literally cannot have free will without it. Random choice, or even non-predetermined correlated outcomes of entangled objects, doesn't lead to free will; it leads to uncertain decisions, and those are not the same thing.

If my decisions are not a product of my prior state, then they are not my decisions. The definition of 'me' is my prior state. If my decisions are unpredictable given complete knowledge of my prior state, and the ability to extrapolate it forward, then the decisions do not come from me. If they're not mine, then I have no responsibility for them. Any discussion of my responsibility for my actions must take into account my personal contribution to the decision as a being.

Dualism does not solve this problem. It simply foists a chunk of a person's state into some non-material constituent, but if that constituent does not have a persistent (though presumably malleable) state or does not deterministically contribute to the process, again whence comes responsibility?


I'd say dualism absolutely solves this problem, or at least can provide the tools to do so.

(I say all of this as a devil's advocate, and nothing more)

Imagine that heaven exists, and is a kind of meta-universe that our reality is a subset of. It has access to a higher order of systems than those within its subset, and they can say things about truth that the reality subset cannot (because of incompleteness; a system sufficiently powerful to say all true things is unable to prove all of them). Things that we cannot write proofs about, that we must accept as axioms (for example, in the space of ethics and morality) may be trivially expressible in the higher-order logic of meta-heaven.

Say now too that the dualistic, non-material constituent, the Soul, exists in this meta-heaven. It has access to this higher-order reasoning, and can make causal decisions according to it. However, to its physical coupling in the reality subset, these decisions will appear to be random and non-causal. They will be the outcome of nothing from within the universe. Manifestation of Free Will and Randomness can be considered to be indistinguishable in this sense, from the perspective of something inside the system that they are operating on.

It still pushes the problem 'up' one level, but who knows what the physics of a meta-heaven looks like. We can't even imagine it, in the way that a computer can't imagine its own halting. Hence all the talk of ineffability in the older dogmas of the world.


Similar to your idea is Kant's "thing-in-itself", which is just a way of saying that for any given object, there are attributes to it we're not able to observe - indeed, attributes we can't even imagine.

We're trapped in our brain, behind a wall of sensory apparatus. There's no reason to assume that our senses provide the complete totality of reality. And if there are aspects of reality we cannot sense, then there is more to reality than we can ever know. Kant called these attributes the "noumenal" world, while what we can see and observe is the "phenomenal" world. However, these are NOT separate areas - just two aspects of the same thing. Not dualism.

Schopenhauer took this to the next step and said that there are aspects to ourselves we can't observe, so part of our selves exists in the noumenal world. The phenomenal world is deterministic, but if we have free will, it must exist as part of the noumenal world.


Isn't that ignoring the fact that we can build tools to translate unsensable things into sensable ones? E.g. infrared cameras allow us to sense infrared which we normally aren't able to. Theoretically we should be able to build tools to sense anything which has an effect on the world. If it has no effect on the world then does it even exist?


There are still things that are impossible to accurately measure from within our reality.

We, for instance, can't know or measure the precise state of things as they would've been had something different occurred than what actually occurred at an earlier time.

This is especially obvious in the realm of quantum mechanics, where you can't know what you would have measured if you had measured earlier or later than you did (I hope my interpretation as a layperson was correct here). That information just doesn't exist in our reality, but it may exist in some higher order place where all those realities can be observed.


Viewing infrared extends one of our existing senses. Kant's point was that there could be attributes of the thing that do not, in any way, relate to our senses.

These attributes can't have any effect on the phenomenal world, or they'd be detectable, and by definition, they're not detectable by us. Unless, of course, our free will is part of the noumenal world, and our actions in the phenomenal world are simply one way of viewing our will.


I think that's just hiding the internals of the non-physical part of us from view, but it doesn't change anything. I already addressed the status of non-material components of a person in my post - if they are part of the person then they are part of their state. Whether you can 'see into' that state or not doesn't make any difference to the fact that it's still part of you and still a precondition of and an input into your decisions. They may be the outcome of nothing in 'this universe' (ridiculous concept, what does universe even mean if the definition isn't, well, universal?), but they are still the outcome of something in 'a' universe and that something has state.

Even randomness or the appearance of it doesn't make any difference. A person's state might very well include a source of randomness, in fact it almost certainly does. See my comments on this elsewhere in the thread.


Separating the two bits like this lets you have your cake and eat it too; it allows for a causal explanation between a person's total state (comprising both their physical and non-physical parts) and their actions, but that total state (and therefore, responsibility) can't be completely reasoned about in the reality that we exist in. The actions appear indistinguishable from the 'uncertain decisions' you mention in your first post, but are (in the whole picture) simultaneously causal/responsibility-inducing, and completely inaccessible to some Laplace's Demon.

It's why this kind of system is usually accompanied by a whole meta-set of punishments and rewards as well. Since there's no way to perfectly reason and judge in our system, we have to hope that someone who has access to the higher system will provide the correct judgement, being able to see the total picture of responsibility.

It's also completely untestable, for the very reasons it 'works' as an explanation at all!


Untestable theories don’t explain anything by definition. If it influences the physical world in an ongoing way, in principle it must be testable. If it’s not testable, that can only be because it is not making a difference.


What if an "out-of-this-world" thing is a part of the state? You can't observe its internal state, so you can't emulate its functioning, even if it's completely deterministic "in a different dimension". This can be literally compared to having a complex state variable and only being capable of observing its real part, or functions of it that are entirely real. (See quantum mechanics for examples of this.)
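The complex-variable analogy can be made concrete with a toy sketch (all names here are invented for illustration): a state that evolves deterministically in the complex plane, while an observer sees only the real part. Two states with identical observable parts can have diverging observable futures, so the visible behaviour looks non-causal even though the full dynamics are fully deterministic.

```python
import cmath

def evolve(z, steps, omega=0.7):
    """Deterministic evolution in the 'hidden' complex plane:
    rotate z by a fixed angle each step."""
    for _ in range(steps):
        z *= cmath.exp(1j * omega)
    return z

# Two states that look identical to an observer who can only
# see the real part...
a = complex(1.0, 0.5)
b = complex(1.0, -0.5)
assert a.real == b.real  # indistinguishable "in our universe"

# ...but the hidden imaginary component makes their observable
# futures diverge, even though the dynamics are deterministic.
fa = evolve(a, 5).real
fb = evolve(b, 5).real
print(abs(fa - fb) > 0.1)  # True: hidden state drives visible differences
```

To the real-part observer, the divergence is indistinguishable from randomness; in the full state space, nothing random ever happened.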


Do you mean something like: There exists a multiverse in which the soul gets to choose which element of the multiverse it resides in moment to moment. So while each element of the multiverse (a particular reality) is completely deterministic, the one experienced by the soul is chosen.


Sure, if you want to keep the 'influence' between the two systems to a minimum.

If you allow the mutation of the subset reality by a soul in the meta-heaven, then you can have it all operate in a single universe; the soul is capable of adjusting the nondeterminism of particles in the brain, thereby making decisions happen that appear random to us, but are informed by a higher system of reasoning. In the way that a programmer might adjust constants, or a video game player might choose stats.

And just want to reiterate: I'm not trying to advance this explanation! Just fun to think about.


We are probably arguing semantics, but the alternative to pre-determination is not so much free will as it is probabilistic outcomes.

Yes, this _will_ happen versus this will _most likely_ happen.

Free will is the ability to influence the probability that a certain outcome is realized. Your decisions could be predictable as a function of probability.

Your definition of 'me' (your prior state) includes the quantum states of your constituent parts. Just because there is a degree of uncertainty in those states does not mean they are not of 'you'.

The degree of pre-determination is just a matter of scale (akin to classical vs quantum). The deeper you go the less pre-determined something can be.


> If my decisions are not a product of my prior state, then they are not my decisions. The definition of 'me' is my prior state.

There are two problems with this definition of free will.

First, it implies that your present self (and future selves) are all entirely subservient to your prior state, and surrounding environment. Ie, the present you has no way of overcoming your past self or your environment. This means that you're essentially sitting in a roller-coaster ride, with no levers to pull or opportunity for change. Sure, your past self made the decision to get on the ride. But your present and future self are all helpless from that point on.

Second, did even your past have any free will? The chain of causation extends all the way back. Your past self at time T-X is itself entirely subservient to your past self at time T-X-1. Extend this all the way back, and you realize that you're subservient to your "self" at the moment of conception. At no point did your consciousness have any ability to change the course of the life that was already plotted for it. If I predict at the moment of your birth, every single thing that will happen in your life and every decision you will make, can you really claim to possess free will?

Hence why people try to cling on to randomness and uncertainty. "If the future can be perfectly predicted, people definitely do not have free will => ergo, if the future cannot be perfectly predicted, then maybe we have free will?" I agree with you though that this is shoddy reasoning. Just because there exists randomness at the quantum level, just means that we're passengers in a driverless car that's rolling dice. Randomness that we have no control over, does not grant us free will in any way.


I think simonh was not arguing against the chain of causation, but rather agreeing with it, and arguing against "If the future can be perfectly predicted => people definitely do not have free will" and in favor of "If the future can not be perfectly predicted => people definitely do not have free will"

If an entity's decision process is not dependent on, and repeatable from, its internal state and its perceptions of its environment alone, then it must be dependent on some external random element; and that element not being part of the entity means its choice is not entirely of its own free will.

This argument rejects duality of course, as one's "will" itself could be the external element.


Gödel had an interesting argument that I encountered via Rudy Rucker's blog;

“There is no contradiction between free will and knowing in advance precisely what one will do. If one knows oneself completely then this is the situation. One does not deliberately do the opposite of what one wants.”

http://www.rudyrucker.com/blog/2012/08/01/memories-of-kurt-g...


This quote has stayed with me ever since I first read it. I catch myself at points where I think I've just made a free decision, thinking "I should try to overcome my will now and do the opposite", which I can never do. But wouldn't overcoming one's will to do something be even more free will?


My brother has a great anecdote. He was at work talking to some colleagues and one of them said free will was all about freely choosing what you believe. Everyone else agreed. My brother argued you can't necessarily choose what you believe, to much derision from his colleagues. So he challenged them.

"OK, pick something you absolutely believe, that you have believed as long as you can remember and that you are absolutely sure about. It doesn't matter what it is, and it can be as trivial as you like. Now change your belief about it."


> I think simonh was ... arguing against "If the future can be perfectly predicted => people definitely do not have free will"

Just so I'm understanding you correctly: Are you claiming that even if the future can be perfectly predicted, people can still have free will?

At the moment of your conception, if I were to predict with 100% accuracy and confidence, every decision you will ever make in your life, would you still claim to possess free will?


> even if the future can be perfectly predicted, people can still have free will?

Not OP, but essentially yes.

Things get weird if you use your predictions to influence what I do. But I'd say that takes away my agency, whilst keeping my free-will intact.

In general, those who claim determinism and free-will do not clash are called 'compatibilists'. Often, the argument is kind-of semantic. It starts with the idea that what most people consider 'free-will' is so ill defined we ought to work on the definition. It then follows that most definitions are very clear on whether we have free-will. Then, since most people feel like we have free-will, the definition that fits with that is chosen.

At least, that is the case for me. Personally, I think the real difficult discussion lies with the concept of 'x is possible' in a deterministic world. Here, I grasp for Bayesian statistics.


That's an interesting perspective. I don't agree with it myself, but I agree that it's impossible to resolve this discussion without a clear definition.

Personally, I define free will as having the ability to "change": to overcome both your past self and your environment. Ie, a billiards ball has no free will, because its motion is compelled by its past state, and by the environment around it. Similarly for cars, computers and every other inanimate object. Whereas a human can be said to have free will, because regardless of his past nature, and regardless of the environment around him, he still has the free will to transcend all that and do something wholly "unexpected". To me, that's what separates free will from an automaton. Hence why my definition of free will isn't compatible with determinism.

I appreciate your point though that others might define free will in a wholly different manner.


When we take this argument out of imagined philosophical universes with perfectly deterministic Newtonian-like rules, we find that in fact randomness is everywhere. Quantum mechanics tells us that every particle and photon in our bodies behaves under some level of random influence. Perfect predictability is a fantasy. However, conceptually we can simplify this and just imagine that my state, the definition of how I am as a being, includes a source of randomness.

My argument then is that sure, some of my choices are random, mainly because I choose not to use the memories and skills I have learned and instead hand the choice over to the RNG. That's also often a choice though. See my comment elsewhere on this page on No Country for Old Men.

You talk about unexpectedness. How do you define unexpected in such a way that it is distinguished from random? That's crucial. Truly random behaviour is not intentional and not 'ours'.

One way in which people change their minds is through the assimilation of new information. That's fully compatible with my take on free will. Our prior state includes the decision making apparatus that evaluates and incorporates the new information. Our consistency as a persistent self has not been compromised, it's just been transformed by new information or a new way of thinking about things. We have changed, but in a way that we are 'built' to do. Given omniscience (which is impossible, but for the sake of argument), that transformation would have been predictable in principle, and again for me that's not a problem. Someone who knew me well might have predicted that I would change my mind.

That is not a problem. There's a quote in a comment nearby from Gödel that addresses this very well.

I suppose I am a compatibilist, but I object to the use of the term 'compatible'. I think some degree of determinism is _required_ in order to have free will. It's essential. Without it, 'I' am taken out of the chain of responsibility for my actions. If they do not flow from me, from my state, my memories, my decision making processes and if those things are not to some extent consistent, then those actions are not mine.


Yes, I would. I'm actually arguing something a bit more though, that if the future can not be perfectly predicted, people can not have an entirely free will.

I'm claiming that non-deterministic action would necessitate that we are at least partially subject to some external, truly random, processes that we don't control, and that in that case that process is what would ultimately cause our actions to be non-deterministic, not the individual themselves.

If my actions are not deterministic in the sum collection of my prior experiences/knowledge, and current perception, which at any given instant could be exactly known, then there must be an additional random source of input that is not 'me' that at least in part drives my actions, and thus such actions are not entirely driven by me and my "Free Will".

Imagine a deterministic entity, in an otherwise deterministic universe, rolling non-deterministic dice to decide its actions. Does the non-determinism introduced by those dice make those choices more or less an act of that entity's will?


> Ie, the present you has no way of overcoming your past self or your environment.

You are answering someone who clearly expressed that you are your past self. Why would you want to get rid of yourself?

> This means that you're essentially sitting in a roller-coaster ride, with no levers to pull or opportunity for change.

Not at all. Human beings can learn, practice and make all kinds of efforts to change ourselves. Sports, arts and science are all fields where humans show how to improve.

You are getting to the no-levers problem by drawing a line between "you" and "what you do", as if you have nothing to do with what you do. Why?


> But your present and future self are all helpless from that point on.

To me, "helpless" in the context of metaphysical free will means that an agent's actions are deterministic from the view of an outside observer. That is all it can ever mean AFAICT. Any sadness/frustration we may feel about this is equivalent to the sadness/frustration one feels for Schrödinger's cat. That is to say, it's there, but it's philosophically incidental.

"Helpless" in the context of physical free will means the normal meaning of helpless in all its glorious ambiguity. It can mean something close to "deterministic" in certain catastrophic situations. But more often it means something like "under the rule of a tyrant," "enslaved," or even "trapped" for whatever reason. In all those cases the sadness/frustration/etc. one feels is part and parcel of the outcome-- it affects whether one decides to fight, acquiesce, cope, etc. Since nobody within physical reality has sufficient knowledge to predict the future, the problem of metaphysical free will doesn't pop up.


What if current behavior is not completely determined by previous state, and yet is also not random?


Saying what a thing is not can be helpful, but what is it? Like dualism, this is just kicking the philosophical can down the road.


> The definition of 'me' is my prior state.

I don't think this is true. We know that thought and memory are constrained (accessed) at a physical level - at least to some extent. That is, damage to your physical brain can result in diminished or eliminated access to those things.

Indeed, thought as we know it seems to be a physically mediated process (perhaps not completely defined by our physical bodies, but at least influenced to some extent).

As well, emotion is a physically mediated phenomenon. Even our experience of this existence is physically mediated. Without a functioning body we lack the ability to experience.

So what does that leave you to be? You are, at your basest existence, the ability to choose. A choice engine with a physical layer; a physical memory, physical emotion, the ability to think. But at your foundation, you choose.

You remember, but unless you choose, you are nothing but memory.


>I don't think this is true. We know that thought and memory are constrained (accessed) at a physical level - at least to some extent. That is, damage to your physical brain can result in diminished or eliminated access to those things.

Sure, the damage changes you. It becomes a new part of your state prior to making a decision.

>You remember, but unless you choose, you are nothing but memory.

Completely agree, but I don't think we really know much about the relationship between how we store memories and how we incorporate those into a decision making process. I may be wrong, I don't know much about neuroscience. My take though is that it doesn't matter. All of that is part of our self. Memories, decision making processes, neurochemistry, it's all part of our state, and our decisions are only truly 'ours' to the extent that this state determines or influences them.


>it's all part of our state and our decisions are only truly 'ours' to the extent that this state determines or influences them.

I suppose the argument I was trying to make was about what is fundamentally 'you' or 'me' and what could be emergent properties.

I agree that our decisions rest on prior states. Even if we remember nothing of our past, that has an effect on what and how we choose.


> You are, at your basest existence, the ability to choose. A choice engine with a physical layer; a physical memory, physical emotion, the ability to think. But at your foundation, you choose.

I don't see how this is at odds with the prior state view. You are a state machine. As time moves, you have no choice but to choose, and this is entirely dependent on your previous state and your current environment. The physical layer is the choice engine, and it physically must continue to make choices until it is incapacitated or dies.

You are a state machine that must step through its state transitions to the beat of the universe, navigating your small corner of it, enjoying the ride (or not), until you cease to function, and the state machine of the universe runs on without you.
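That picture can be put in code as a minimal sketch (the transition rule here is entirely hypothetical, chosen only for illustration, not a claim about how brains work): the "choice" at each step is a pure function of prior state plus current input, so identical prior state and input always yield the identical choice.

```python
# A minimal sketch of the "you are a state machine" picture:
# the agent's "choice" is a pure function of prior state + stimulus.
def choose(state, stimulus):
    # hypothetical transition rule for illustration only
    mood = state["mood"] + (1 if stimulus == "coffee" else -1)
    action = "smile" if mood > 0 else "frown"
    return {"mood": mood}, action

s1 = {"mood": 0}
s1, act1 = choose(s1, "coffee")

s2 = {"mood": 0}
s2, act2 = choose(s2, "coffee")

# Identical prior state + identical input => identical "choice".
assert act1 == act2
print(act1)  # prints "smile"
```

The point of the sketch is only that determinism and "choosing" are not in tension here: the function really does select an action, and that selection is fully fixed by state.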


Even if it is correct to say that, "You are a state machine." and "prior state" is an input to the state generation function of the machine, this is _not_ incompatible with free will, and does _not_ require a pre-determined universe.

Simply put, current state as an input does not necessitate pre-determination.

I think the point user all2 was making is that if we remove the prior state modifier from the generate state function then the core piece that is left over is "your basest existence".


I agree pre-determination is orthogonal to the question. If the state machine has a truly random component, pre-determination goes out the window.

> [...] if we remove the prior state modifier from the generate state function then the core piece that is left over is "your basest existence".

I think removing the prior state modifier leaves you with nothing. A brain with no prior state is a brain with absolutely no memories, knowledge, or stored procedures. Even if such a brain were possible to construct, plasticity would ensure that unless it was dead, it would immediately begin accumulating prior state with which to make subsequent choices.


> You are a state machine. As time moves, you have no choice but to choose, and this is entirely dependent on your previous state and your current environment.

This makes sense. There's no choice unless there's a dimension (some state variable) that we can move in. We have stuff that can be state (matter and energy) and a dimension (time) to move in.

In terms of state, my argument is that choice is a core piece of 'me' and 'you'. I don't know how to make an effective argument for the separateness of choice and state, but I think it makes sense to consider them as separate things. We possess state like we possess the space we exist in, in time. You could say that your choices are like driving a car. Navigating streets (or offroad!) is like choosing from possible states.


I think we're in agreement, except that I believe that whatever capacity you have for choice is also stored as state. What you call state and choice, I might call state and procedures. Both are stored in the substrate of the brain as state, just like data structures and the code that operates on them are both stored as data.


I had a similar reaction. Nondeterministic conceptions of free will similar to that of this paper are really more like an intoxicated state or something, where behavioral outcomes are totally unpredictable. It's the absence of free will, but due to complete randomness rather than complete control.

The prior state idea seems more accurate, but even that (at least in its extreme state) doesn't seem to me to encapsulate the idea of free will because it disallows the idea of a decision being undetermined and free to decide.

I personally think the notion of free will is fallacious because it's poorly defined, or even undefinable. I admit I could be totally wrong about that, but I think lack of predictability leads to an illusion of free will. It's the same illusion as the god illusion, injecting agency as an explanatory mechanism when there is none, either through chaotic processes, true randomness, or epistemological weaknesses.


> it disallows the idea of a decision being undetermined and free to decide

That doesn't compute. A decision doesn't do deciding. A decider decides decisions. Any phenomenon of "will" necessarily implies contingency on a prior. To ask for non-contingency is to ask for freedom _from_ will.


>The prior state idea seems more accurate, but even that (at least in its extreme state) doesn't seem to me to encapsulate the idea of free will because it disallows the idea of a decision being undetermined and free to decide.

I certainly think randomness comes into play for sure, but conceptually my personal prior state might include a source of randomness. It could be just one factor, and I might make choices based on my memories and preferences to allow greater or lesser input from that randomness, or I might lack the mental tools to prevent that randomness from manifesting in my actions. At an extreme that might be a form of pathology and then we can talk about the limits of personal responsibility in various circumstances. The idea that my state leads to my decisions still holds though.
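The "my state includes a source of randomness" idea can be sketched directly (everything here, including the `impulsiveness` knob and the toy preference, is invented for illustration): the RNG lives inside the agent's state, and the state itself decides how much weight the RNG gets.

```python
import random

# Sketch: an agent whose state includes both preferences and a
# randomness source; the state decides how much the RNG is used.
def decide(state, options):
    rng = state["rng"]
    if rng.random() < state["impulsiveness"]:     # state-chosen randomness
        return rng.choice(options)                # delegate to the RNG
    return max(options, key=state["preference"])  # deliberate choice

agent = {
    "rng": random.Random(42),   # the RNG is part of the agent's state
    "impulsiveness": 0.0,       # this agent never delegates to chance
    "preference": len,          # toy preference: longest option wins
}
print(decide(agent, ["tea", "coffee"]))  # prints "coffee"
```

Dial `impulsiveness` up and the outcomes become random, but the decision to be that random was still determined by the agent's own state, which is exactly the point about responsibility above.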


Yes, I agree that is one model, but what I wrestle with ultimately I suppose is whether indeterminism per se is what defines free will, in part or whole. If you have a prior state that is in part systematic, and part random, is the randomness per se free will? Does that encapsulate the notion of free will? Maybe? But both of those are still causally deterministic in a sense, in that even with randomness, there's some point at which the random components of the system causally impinge on the systematic part.

I think at some point notions of intervention or change become critical. So, for example, I think implicit in the idea of free will is the idea that an individual can freely choose to change their decision in some repeatability sense, and this change cannot necessarily happen in any other way, such as from outside influence.

Just to make it clear, let's say there's two choices or choice types, A and B. A is "bad" and B is "good," in whatever sense (utilitarian, moral, whatever).

Key to the notion of free will it seems is that one could change, of their own volition, their choice, from A->B or B->A. In practice we can't rewind time or transport to possible worlds in a counterfactual sense, but we do talk about decisions as interchangeable and repeatable. E.g., someone can choose whether or not to take a drug at one point, and then, at a later point, choose again whether or not to take the drug. In one metaphysical sense, those decisions are not the same because circumstances have changed, but we treat them as the same.

I think the idea of free will is that someone could change their decision, to an extent that another person could not. That is, I cannot make you change your choice, only you can.

This is fine at some level, but what if someone wants to change their mind but cannot? E.g., a drug addict or someone trying to lose weight? What about something simpler, like decision under a lapse in attention? How do we define volition or lack thereof?

Or, maybe more importantly, let's say the external individual in question is omniscient. If an omniscient external individual cannot alter your behavior, why would you hold that individual responsible in a free will sense? That is, let's say someone truly wants to change, but an omniscient being could not even help and change them. Why should the non-omniscient individual seeking change be held responsible?

This is all stream-of-consciousness, but I think at some point the notion of free will starts to lead to counterintuitive problems and/or becomes very poorly definable. At some level I suspect it implies not only autonomy but complete agency, which is suspect.


I think randomness really is a red herring. To the extent that a decision of ours is random, it is not ours. That doesn't necessarily absolve us of responsibility.

In No Country for Old Men a guy flips a coin to decide if he will kill someone. He chose to flip the coin though - delegating a choice to randomness does not absolve us of responsibility. By doing so he created the framework in which the outcome arises, whatever that outcome is. He made it possible, even likely, and bears responsibility for that by selecting the set of possible outcomes and the degree of randomness. Also, he does this many times, so it wasn't a "random choice to randomly choose"; it's a consistent, repeated pattern of behaviour that directly emerges from his personal ongoing mental state.

I don't go around randomly killing people and that's a consistent pattern that emerges from my persistent ongoing (if evolving) mental state and it _is_ a free choice. The fact that it's pre-determined is fine, because it means it comes from me.


He did not make the choice to murder if all the previous states, linked together, led to the outcome and thus it couldn't have been different. If I'm a product of every moment, where the previous moments factor into my current moment, I would only consider myself free if I had full control before birth over how my life plays out. Nobody is making choices, is what I observe every day; only forces are occurring, under the illusion, or simply a disguise, of choice.


>He did not make the choice to murder if all the previous states, linked together, led to the outcome and thus it couldn't have been different.

To some extent yes, our choices and scope of action are constrained by our essential nature, some of which is determined before we are even born. However this doesn't absolve us of all accountability. It simply sets up a framework within which we can discuss what accountability is and how we want to manage it.

This is a difficult issue and I don't have all the answers about accountability, I see my take on it as my starting point for the discussion about it not the end. I believe in rehabilitative rather than retributive justice because of this. Also see my reply to fjuerfilis.


To all extents there is no choice. Randomness is a fallacy in this world; it doesn’t exist, and what we call random should be interpreted as deterministic. People attach the word “random” to events they cannot process, but those events still have set variables that factored into the final outcome, with no other possible result if the exact variables were simulated again. I think accountability isn’t a real thing in the sense of being morally right; it’s just human conditioning to blame, from us thinking it should exist when morally it doesn’t. Rehabilitation should always be the action humans take if we love ourselves and others. Otherwise we’re playing with fire, because the only thing separating us from a person who becomes a murderer is our birth into this world, and a society too imperfect to prevent inequality, genetics, environment, and tragedy from altering someone’s state to poor health where murder, or even suicide, occurs. It is similar to a chemical equation, just longer to write out the formula and process, again and again.


I take it you're not a fan of Quantum Mechanics?

I think that morality, love and justice emerge from human nature. I think that I, my self and my consciousness are emergent properties and I'm fine with that. I think that the purpose of human life is chosen for us by the rules of biology, natural selection, game theory and a host of other factors that, seemingly miraculously, lead me to the conclusion that the philosophical Good Life is desirable and largely attainable.


I don’t think it’s attainable; only what is destined happens. Quantum mechanics, to me, is something we humans do not yet have the ability to analyze to the depth of other things today. I assume it’s determinism as well. It’s possible to have local hidden variables that play into the determined outcome and which are not measurable in our universe or with our capabilities, but it would still be determinism at heart. I think free will only exists if you have the perfect life, one you always enjoyed, and to me that ignores what I would associate with “free” in the definition. It’s more like saying, well, you can’t necessarily say the person didn’t want the existence.


A bit of a tangent but, you’ll notice he also becomes agitated when the coin doesn’t “allow” him to kill the shopkeeper (I think; it’s been a while). This implies he was looking for reasons to kill people. The coin flipping was his way of shifting the blame for his true desires - to kill people.


There is a lot to respond to here (and I think you're looking for responses) so I am just gonna stick with what I think are the key points and be brief.

> If you have a prior state that is in part systematic, and part random, is the randomness per se free will?

Let's please replace random with uncertain. If _part_ of an entangled system is uncertain the _whole_ system is uncertain [1].

> Does that encapsulate the notion of free will?

    free will [2] 
    the power of acting without the constraint of necessity or fate; 
    the ability to act at one's own discretion.
To me this subjectively means: Are my choices the result of something greater than the material sum of my parts? Yes.

> That is, I cannot make you change your choice, only you can.

Your free will attempts at influencing me to make a certain choice would seem to entangle my state with yours and perhaps change the probability I will make one choice over another.

> If an omniscient external individual cannot alter your behavior,

'omniscient external individual' is quite an assumption here.

> Why should the non-omniscient individual seeking change be held responsible?

Free will is the _only_ way to hold the individual responsible.

> but I think at some point the notion of free will starts to lead to counterintuitive problems and/or becomes very poorly definable. At some level I suspect it implies not only autonomy but complete agency, which is suspect.

(I may be misunderstanding you here) Free will over a decision must ultimately collapse to a choice made. At that point it is determined reality and no amount of free will can undo it.

You either took the Red pill or the Blue pill. If the decision is never observed the choice was never made ergo it didn't happen, it is not real.

In the spirit of Spinoza [3] I actually find bestowing free will upon particles very intuitive. We then have free will because it is an attribute of our fundamental components. If particles have no free will and are pre-determined we have no free will and are pre-determined. If we have free will then particles must have free will.

Full disclosure, I am an armchair philosopher with an interest in the cross-section with physics.

[1] https://en.wikipedia.org/wiki/Schr%C3%B6dinger%27s_cat

[2] https://www.google.com/search?q=define%3A+free+will

[3] https://en.wikipedia.org/wiki/Ethics_(Spinoza)


The idea of free will sort of deriving from some properties of particles is interesting. I admit there's a lot to our understanding of things from a fundamental physical perspective that is lacking, so although I doubt it's the case I don't think it can be ruled out on a logical basis.

I know I'm in the minority, but I object to the idea of responsibility actually, and prefer to think of change in a kind of "neurobehavioral engineering" or transformative justice sense. That might seem pathological or even psychopathic (which is ironic because I'm anything but); I just mean that I think responsibility (like randomness maybe, at least with reference to free will) is sort of a red herring. It's one of the reasons I'm interested in free will issues, because I think the notion of responsibility (outside of a very strict causal sense, as in "this geological formation is responsible for this waterfall") is misguided, and also shifts focus away from change efforts, toward retribution. To me, punishing an individual for a crime out of retribution makes as much sense as punishing a car for breaking down; I'd prefer to see the individual "fixed" in the same way I'd prefer to see my car fixed. The primary obstacle to the former from my perspective is lack of knowledge, which will diminish rapidly with time whether we as a society want it or not.

I think one of the biggest challenges we will face as a society in the next 200 years (assuming we don't disappear or devolve into a dark age) is how to integrate advances in neuroscience and psychology into our sense of justice and responsibility. E.g., if you could change a person completely for the better, is withholding that change unethical? What's the point of retributive justice then? Aren't you just shooting yourself in the foot, societally speaking?

This is all tangential to the paper, but I think issues of free will become critical when you are faced with the possibility of total change in an individual.


>...I think the notion of responsibility (outside of a very strict causal sense, as in "this geological formation is responsible for this waterfall") is misguided, and also shifts focus away from change efforts, toward retribution.

My belief in determinism as a driver of free will is predicated entirely on the principle that without that there is no responsibility. Taking out determinism breaks the connection between my self, my persistent state, and my decisions. My choices have to come from me in a deterministic way, or they are not mine. I want to be responsible for my choices.

This is of course a mechanistic interpretation in the sense you describe, so it also leads me away from a retributive justice stance towards a rehabilitative stance even though I strongly believe in responsibility.


> a decision being undetermined

If a decision is undetermined, i.e. not determined, how can it be called a decision?


This is sort of a semantic quibble over what the "free" in "free will" really means. Does it mean freedom from the external environment outside my head? Freedom from my past light cone? Freedom from legal, cultural or other sorts of coercion? Or does it really mean "self-determination"?

It's fairly clear that the authors are using the light-cone sort of definition, but they discuss this ambiguity towards the end on p.230:

> Some readers may object to our use of the term “free will” to describe the indeterminism of particle responses. Our provocative ascription of free will to elementary particles is deliberate, since our theorem asserts that if experimenters have a certain freedom, then particles have exactly the same kind of freedom. Indeed, it is natural to suppose that this latter freedom is the ultimate explanation of our own.


> “If my decisions are unpredictable given complete knowledge of my prior state, and the ability to extrapolate it forward, then the decisions do not come from me.”

I don’t see how that conclusion follows from that premise. In fact, the whole free will debate is uninteresting to me exactly because it seems like a dissolvable semantics question regarding what does “me” mean. Is it a totality of some prior physical state? Is it supernatural? Depending on what somebody arbitrarily chooses for that, it seems to more or less force a conclusion about free will. E.g. a supernaturalist might believe my decisions are fully determined by “me” but that “me” is a concept that partially exists outside of the applicability of physical laws.


You're better off thinking about the logical properties that entail moral responsibility. Consider that it makes no sense to hold a car engine morally responsible for malfunctioning and killing its passengers, but it does make sense to hold a driver responsible (and it might make sense to do the same for a general AI in the future). Why?

What distinguishes how we deal with one deterministic system over another? That's what free will is. The debate has progressed quite well over the past century, and most philosophers are Compatibilists like the OP for very good reasons.


I love your framing of the question, but am frustrated with the Compatibilist conclusion.

"Moral responsibility" is a heuristic for our monkey brains, tapping into evolved ideas of justice, accounting for imperfect prediction of future events, attempting to control human behavior one way or the other. The reason we don't lecture engines about morality is because it is perfectly clear that they obey only one kingdom of rules, the physical.

Humans also must obey physical rules, but there are so many layers between base physics and human behavior that we speak in very fuzzy heuristic terms about morality. This does not mean free will is anywhere to be found in one of those intermediate layers, just that it's too complicated a machine for us to analyze like an engine, so we resort to using the levers and knobs that evolution gave us. IOW, free will is a "god of the gaps", only existing while we still lack the knowledge to better understand the layers between our gross beliefs/actions and base physical processes.


> This does not mean free will is anywhere to be found in one of those intermediate layers

The problem here is that you're carrying some preconceptions of what free will means into this debate. The whole point of the free will debate is to define free will and figure out what it means.

Think of it this way, you can define a term in two ways: via denotative or connotative definitions. Incompatibilists push a connotative definition of free will, where they say, "free will must have such and such properties, oh and look! humans don't have those properties, therefore people don't have free will". They have been wrong every single time they've claimed a property was necessary.

By contrast, most people typically reason with denotative definitions saying, "that thing I do when I make a choice free from coercion, that's free will". And then we explore precisely what this means and what properties this requires.

As for it being just a heuristic, I'm not sure that's accurate. Take a sorting algorithm and point out precisely which step actually does the sorting. Well that's nonsense isn't it? Every step is essential to a sorting algorithm, and removing any single step breaks the sorting property.

Your god of the gaps argument is essentially trying to do the same thing with free will: you can't point to any specific brain state and say that's free will in action, but the ensemble is what produces what we recognize as free will. All brain states we see when a decision is being made are essential to making a free choice.
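The sorting analogy can be made concrete with a small sketch (mine, not the commenter's): log every compare-and-swap a bubble sort performs, then ask which step "did" the sorting. None did individually; sortedness is a property of the whole run, just as the argument holds a free choice to be a property of the whole ensemble of brain states.

```python
def bubble_sort_trace(xs):
    """Bubble sort that records the position of each swap as it happens."""
    xs = list(xs)
    swaps = []
    n = len(xs)
    for i in range(n):
        for j in range(n - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
                swaps.append(j)
    return xs, swaps

data = [3, 1, 4, 1, 5]
result, swaps = bubble_sort_trace(data)
print(result)  # [1, 1, 3, 4, 5]
print(swaps)   # the full sequence of swap positions

# Replaying only the final swap on the original input does not sort it:
# no single step is "the sorting step".
last = list(data)
j = swaps[-1]
last[j], last[j + 1] = last[j + 1], last[j]
print(last == result)  # False
```

The same holds for any step you single out: remove or isolate it and the sorted-output property is no longer guaranteed, even though no step is special.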


I like your "God of the Gaps" analogy. That's exactly the problem I have with dualism. We don't understand the physical causes, therefore mysticism.

As cars become more than just human-controlled engines (self-driving) they are likely to be called on to make arguably moral judgements. They are being given rules to follow. For now these systems are still largely deterministic in a comprehensible way, but with deep learning neural networks playing an increasing role we might begin to lose the ability to fully and deterministically anticipate how they will behave. Not because it isn't possible in principle, but because it will become too complex to do in practice. Then what do we do?


I think there's no such thing as free will in any case. At least for the religious notion of free will floating around in my mind:

  if (!is_fully_deterministic(universe) &&
      have(universe, RANDOM_CHOICE))
      assert(!have(universe, FREE_WILL));
  if (is_fully_deterministic(universe))
      assert(!have(universe, FREE_WILL));
The only remaining way to have that sort of free will is to have non-random laws that somehow are not fully deterministic. I'm not sure what that would mean. Non-random -> deterministic, no?

However, even in a fully-deterministic universe, or a mostly-deterministic universe, we will appear to have free will in so far as we cannot read minds and we cannot predict next states for the universe on any scale, with any accuracy, and in any amount of time that would be practical for determining e.g. whether someone is at fault for their actions. And it's pretty clear that this will always be so.

The question of whether we have free will, then, is sort of moot. From a social perspective we have little choice but to act as if we do have free will.


> I really think this, along with much philosophical musing about free will, is entirely missing the point.

Exactly! We don't hold babies or the insane responsible for their choices precisely because they don't make choices for well-founded reasons, they simply act somewhat randomly and so clearly don't have a well-founded, deterministic decision procedure. Holding people responsible is how we refine the deterministic decision procedure of a functional brain.

I think it's important people understand that the term "free will" as used in physics simply isn't the "free will" used in law, philosophy or by lay people. They're talking about a type of freedom which lets them set parameters of an experiment independently of the system they're measuring.


>> Any discussion of my responsibility for my actions must take into account my personal contribution to the decision as a being.

>> again whence comes responsibility?

The only reason I can think of to assign responsibility is to "correctly" respond. Particularly in the form of reward or punishment. But if a person is not responsible for their actions, then neither is anyone responding to them. "Responsibility" comes from some sense of morality, which doesn't really exist without free will does it?


Also known as Free Will Requires Determinism [0]. There's an Existential Comic about it [1].

[0] http://www.oxfordscholarship.com/view/10.1093/acprof:oso/978...

[1] http://existentialcomics.com/comic/70


Following that train of thought, could free will simply be the capacity to predict one's own (deterministic) actions, based on one's current state, and ascribing that to some internal volition? Similarly, the "inner monologue" simply the capacity to observe one's own (deterministic) thought process?

Conversely, is the judgement that another entity is free-willed no more than an admission that one is unable to predict that other's actions, though that other can predict its own?


Maybe when people talk about "free will", what they are really talking about is the ability to deviate from the simplest conception of an evolutionary strategy.

For example you wouldn't say a plant has free will. It just sits where it is planted, collecting sunlight, water, CO2, and minerals, and grows as specified in its genome. It doesn't deviate from the evolutionary strategy of its species.

You probably wouldn't say an ant has free will. It can move around, sure, but it seemingly does so according to a complex set of rules that are consistently embodied by all members of its species.

Does a dog have free will? I think we're getting closer to the line. A dog can clearly make choices against the simplest evolutionary strategy. A dog needs food to eat, it's driven to prefer certain foods... but some dogs can choose to let a tasty treat sit on the table (or even on its nose!) without consuming it. Some dogs seem to act in a way that we would call altruistic, endangering itself to save a human child, for example.

And of course, humans are capable of making all sorts of decisions that seem counter to a simple conception of an evolutionary strategy. On the positive side, choosing to forego reproducing in favor of pursuing some social goal, like a career. On the negative side, choosing to kill someone despite knowing they could be caught and punished (and therefore not reproduce).

And I think the opposite holds too--when we imagine a being without free will, the defining characteristic seems to be that it always follows a rigid set of rules, with no possibility of deviation.

I keep referring to the "concept" of a simple evolutionary strategy; does that mean this is not a rigorous definition of free will? Well I would say that free will is itself a human concept, so I think it's valid to explore its meaning in the context of other human concepts.

I think it's entirely possible that humans are actually deterministic and predictable, but the rules of our behavior are so complex that we could never know or apply them in a practical way, the way we know and use the rules that govern plants, for instance.


How free-willed or "alive" an object is, from one subject's perspective, will have a lot to do with how _unpredictable_ the possible futures of the object seem, relative to that subject.


Yep. This is what I mean by my second paragraph -- when I believe something else to be free-willed is exactly when I cannot predict its actions, but believe it to be capable of doing so (i.e., it believes itself to have free will). So, a pachinko ball, while its haphazard movement is unpredictable by me, does not have free will, because it does not have the capability to predict its own movement. Nor does an individual ant, since I can predict its actions in response to chemical stimuli, regardless of whether it can.

Whereas myself... I know people in general can have a variety of responses to any given stimulus. But I know exactly what response I will have to a stimulus that has just occurred. From my point of view, I am very free-willed! After all, I'm "choosing" one out of a multitude of possible responses to this stimulus. But really, my "choice" is determined by physics, and the illusion of free will is exactly that I -- as the sole entity with the capability to observe my own internal state -- can predict my response, and no-one else can.

I think also, an additional factor is that of volition: the actions I take should somehow derive from, and act to fulfill, what I consider to be my (chemically deterministically determined) desires. Otherwise I might perceive "someone else" to be in control of my person.


I like to think of things like hurricanes as kinda proto-alive things, in the sense that they are somewhat unpredictable, though they have dynamic structure and somewhat maintain that structure in the face of a chaotic environment.

But, if we were to speed your brain up a million times, when you look at my brain, it might seem a lot more predictable - much more like an inert machine than a dynamic animal. So there's a kind of relativism to whether one subject perceives another subject as "alive" or free-willed, given their own context.


>I really think this, along with much philosophical musing about free will, is entirely missing the point. Pre-determination is essential to free will.

Causal chains are required for free will to make sense, but causal chains do not require robust pre-determination. We regularly simulate causal chains that make a narrative sense and are predictive, but only within complexity limited bounds, without the infinite precision needed for robust pre-determination.


I think you're taking the wrong message away from the following simple idea: "Good decisions depend on taking the available information into account." If we take this for granted, then it's not clear why free will is a good thing.

How would you say pre-determined actions and free will are compatible? I would say they are not (and indeed randomness doesn't really help). In contrast, I would claim that the only sensible definition of free will is based on dualism: Many people seem to believe there's some inherent aspect of them that is separate from the physical world and is the source of their self and will. If, like me, you don't believe in dualism, then the conclusion is that we don't have free will.


The only conclusion here is that you believe free will can only be explained by dualism. Quantum nonlocality [1] provides at least one other explanation as a place free will may hide.

[1] https://en.wikipedia.org/wiki/Quantum_nonlocality


Could you elaborate on what the definition of free will you're going by is? And how it's possibly supported by quantum non-locality?


I had this exact discussion with my grandfather (a catholic) when I was 11 years old (I'm 41 now), and rejecting most religious stuff I was fed with at the time.

It is an incredibly fascinating debate.


You are assuming your conclusion. If free will exists, it might exist outside the current understanding of physics. Most likely, it does (if it exists!)


I did address dualism. I'm not making any assumptions about whether dualism is true or not. If dualism is true, the immaterial part is just another form of state that is part of the self from which the decision emerges.


Physics assumes the conclusion. If free will exists (which it appears to), it must exist within the contingencies of physics. To assume otherwise is to say that you don't like physics.


Non-determined is not the same as random. Randomness is undetermined, but not coextensive with the concept. A common error I see in these debates.


Which is why this paper proves the Copenhagen interpretation of QM is wrong.


How can you disprove an interpretation?


Well, disprove in the sense of showing that inconsistencies will lead us to believe that the interpretation is not likely to be as accurate as some other interpretations.


You guys can philosophize until you are blue in the face but the type of free will that laypeople think of when you say free will, libertarian free will, is virtually certain not to exist. Everything else is just semantics.


It depends whether you discuss the issue purely in philosophical terms or based on physics.

In classical physics a closed system is purely determined by its initial state. So considering the universe as such a system, everything that is happening, including me typing this, is a deterministic and predictable function of the state of the universe at the big bang, and thus free will is an illusion.

In that sense, actual free will requires unpredictability in the physical world because free will is unpredictable behaviour. That's where quantum physics becomes crucial.


Free will is not well defined.

What does it mean for "will" to be "free"? If you could rewind time around one of your decisions, and replay it, you'd make the same decision every time, unless there was a random element involved, because the state would be the same every time.

And if there's a random element involved, it's not really your will, is it?
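The rewind argument can be put in toy-model form (a sketch of my own, not the commenter's): model a "decision" as a function of the agent's state plus any injected randomness. Replaying from an identical state reproduces the identical choice; the only way to get variation is to vary the randomness, which by construction isn't part of the will.

```python
import random

def decide(state, rng):
    """Toy agent: the choice is a deterministic function of state + rng draws."""
    threshold = (sum(state.values()) % 10) / 10  # stands in for the whole prior state
    noise = rng.random()                         # any indeterminism lives here
    return "red" if noise < threshold else "blue"

state = {"memories": 3, "mood": 1}

# "Rewind time": identical state, identically seeded rng -> identical choice.
replays = [decide(state, random.Random(42)) for _ in range(5)]
print(len(set(replays)) == 1)  # True: same state, same decision every replay

# Vary only the rng: the choices now vary, but that variation belongs to
# the rng, not to anything we would call the agent's will.
varied = {decide(state, random.Random(seed)) for seed in range(20)}
print(len(varied) > 1)         # True
```

The model is deliberately crude, but it captures the dilemma: either the state fixes the outcome (no "freedom") or the rng does (no "will").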


Your decision is between black and red while looking at the spinning roulette wheel. The random element is the outcome of your bet, which can not possibly be inferred from the current state at the time of making the decision. So if I could rewind and replay, and actually make a different decision each time, it would really be my will, I think.

Most, if not all, real-world decisions take a non-zero uncertainty about the outcome into account.


In that case, you didn't "will" the different decision, it was a result of your internal RNG.


It's not random though - it is probabilistic. And even if it was an internal 'RNG' it is still _theirs_, not someone else's.


If you take an action based on determining the probabilities of an event, you would take that action every single time because the state would be identical every time.

You look at the roulette board. You bet on red.

We rewind time.

You still bet on red. The state is identical.

Did _you_ come up with that decision? Of course you did, who else could have?

My claim is not that there's no "you".

It's that "free will" is meaningless unless we use a weird interpretation of terminology.

If you'd do the same thing every time, there's no 'freedom'.

If you do different things, but it's based on some sort of tiebreaker RNG, there's no 'will'.


That is true, but only for some definitions of free and will. I don't accept definitions of 'free' that exclude inputs from my personal mental state. If my mental state doesn't determine the choice, then the choice isn't mine.

Note that there could be random elements in my mental makeup. i.e. my mental state might include input from an RNG, but it's still part of me and as long as it's not the only input into any of my choices, we can still talk about responsibility.

See my comment about No Country for Old Men on choosing to use randomness.


Let's excuse that we have to invoke time travel, and that once the choice has been made it is now part of the past of our collapsed reality...

The point you seem to be making:

A) If I make the same choice every time there is no free will.

B) If I make different choices every time there is no free will? Or perhaps even there is no "me"?

Free will could entail the will to choose the same thing every time AND the will to choose something different.

If we can't accept that then perhaps an argument could be made that there is no such thing as "you".


If probabilistic outcomes are the same as having a free will, then elementary particles also have free will. That doesn't fit the intuitive fuzzy definition of free will.


That is exactly the point!

From the paper:

Some readers may object to our use of the term “free will” to describe the indeterminism of particle responses. Our provocative ascription of free will to elementary particles is deliberate, since our theorem asserts that if experimenters have a certain freedom, then particles have exactly the same kind of freedom. Indeed, it is natural to suppose that this latter freedom is the ultimate explanation of our own.


   Although, as we show in [1], determinism may
   formally be shown to be consistent, there is no
   longer any evidence that supports it, in view of the
   fact that classical physics has been superseded by
   quantum mechanics, a non-deterministic theory.
I am surprised at the number of folks who still try to argue that science (particularly physics) has "proved" that there is no free will. This may in fact be true, but you can't argue that it follows from Physics in a post-Quantum world.


The followup argument is usually something along the lines of quantum mechanics is just probabilistic as far as we know.

Quantum physics doesn't allow for free will in an otherwise deterministic universe, any more than hooking up a RNG to a computer allows for the computer to have free will.


Not in my understanding. The probabilities are only involved when a quantum superposition collapses to a classic state. Note that the paper makes it clear that probabilities are not used anywhere in their argument.


Yeah exactly, what does quantum mechanics even have to do with it?

This is just scientism if you ask me.


I've been juggling the following three papers for a bit while trying to brush my maths up. I don't know why I feel there's a hunch connecting them, but I do know that the hunch has to do with this problem.

https://arxiv.org/pdf/1803.06824.pdf

https://arxiv.org/pdf/1306.0533.pdf

https://arxiv.org/pdf/1204.2779.pdf

I will almost certainly be wrong about whatever it is. However I may have more updates on how wrong and what it is I am wrong about, in maybe a decade or so ;)


> Another customarily tacit assumption is that experimenters are free to choose between possible experiments. To be precise, we mean that the choice an experimenter makes is not a function of the past

Garbage in, garbage out. It isn't clarified whether an experimenter in a nondegenerate superposition of states (will choose x, will choose y, will choose z) is considered to have "free will". If you call that free will, then the result that the particle can end up in a superposition of states is not novel. If you don't call that free will, then the assumption is false and the theorem vacuous.


You need this sort of axiom to eliminate superdeterminism. If an experimenter's choice of direction is fully determined by their past light-cone then a shared physical state in the overlapping past light-cones of A and B could conceivably drum up any correlations it wants.

It's a conspiratorial line of thought on a par with Descartes' evil demon, but in a rigorous proof you have to rule these sorts of things out.

The axiom comes into play at the top-right of p.229:

> ...since by MIN the response of b cannot vary with x, y, z, θ^G_0 is a function just of w


Right, I see how it's being used in the proof, my point is that what the theorem proves, regardless of how the notion of "is determined by" is clarified, is not at all interesting or surprising.


It's essentially a recasting of Bell's Theorem; in that sense it's both interesting and surprising if you thought that local hidden variables might still explain quantum theory. All you need - for both this and Bell's proof - is that directions are chosen with sufficient "randomness" or "freedom" so as to be considered independent of their past light cones. At least, independent enough so that the previously quoted step is possible.


I've honestly had trouble understanding how one would go about defining free will in a way that it could be observed. I can explain how I made my decisions and non-decisions, I know the difference between freedom and duress, but I have yet to see a definition of what free will is meant to mean, and why I am supposed to have it, while dice, my phone, and a plant growing towards the sun are not. It probably also requires a strong definition of consciousness, right?


This is a subtle version of the Post Hoc Ergo Propter Hoc fallacy.

Assumption: free will is defined by unpredictability and indeterminacy. (Actually not a given. This is hotly debated, to put it mildly.)

Therefore if a physical system behaves in an intrinsically unpredictable way (quantum systems do...) it is showing evidence of free will.

If you start from the premise that the universe and everything in it is sentient (etc, etc) this makes perfect sense. Of a sort.

But it's not in any way a proof of that premise.

In fact it doesn't even come close to being a proof. All it does is take the long way around proving that constrained unpredictable systems are in fact unpredictable but constrained - something that's proven by Kochen-Specker anyway.

Then it pulls the free will rabbit out of the hat and says "Of course! This is just like human free will!"

I'm going to go with "no" on this one.

The irony is that KS is usually taken as a proof that quantum systems aren't just indeterminate, they're essentially partially unknowable. There is no possible one-to-one mapping between states and observables.

If particle free will existed there would either be a tighter constraint on the possible mappings, which would disprove KS, or the free will would be indistinguishable from quantum randomness, which makes it an untestable metaphysical epiphenomenon.

Either way it doesn't add anything to our understanding.

Of course if someone designs an experiment where particles use observables to signal their consciousness to human experimenters, I'll change my mind about that.


If life has meaning, will is not free. - This is the strongest statement that I think can be said about the subject.

There is a great conceptual distance between the self-assertion popularized by Descartes and the theories and observations of physics. Science, even cosmology, has a very long way to go before its methods begin to answer the questions of philosophy.


I am myself skeptical of this. First of all - how does one control indeterminism?

And does QM indeterminism really play a role at the neuronal level? This paper by Tegmark argues it doesn't - https://arxiv.org/abs/quant-ph/9907009


This seems to be a weak free will theorem: "Our provocative ascription of free will to elementary particles is deliberate, since our theorem asserts that if experimenters have a certain freedom, then particles have exactly the same kind of freedom." To me, they seem to be saying that the freedom of free will is a limited freedom from determinism, not the freedom of being able to choose a future.

The theory seems to be developed within the Copenhagen Interpretation framework of metaphysics, and I wonder if it is compatible with Many Worlds.


Quite apart from the math and science of deterministic physics and probabilistic quantum physics, there's a broader evolutionary question.

Why would we evolve this hallucination that we have free will, if in fact we fundamentally don't? It seems like a lot of effort and metabolic energy on a moment-by-moment basis to maintain the illusion. I guess you could make an argument we're in some sort of simulation the maker of which required this illusion to be present for reasons of their own.

But the conscious awareness of our moment-by-moment availability of choice is one of the most difficult things to deny even if you're incredibly skeptical about everything else you can observe through the senses (à la Descartes).


Why do you think maintaining the sense of free will is evolutionarily costly? If anything, to me, it seems like a heuristic shortcut that is imperfect but is useful most of the time, and saves resources overall.

As a close analogy, we've all said something like, "My car wants to pull to the right." Nobody claims the car has free will, but "wanting to pull to the right" is a useful way to both communicate and to think about it. What is it in the car's makeup that causes it to want to pull to the right?

There are plenty of other examples where people have false beliefs yet still cling to them. If you ask drivers whether they are better or worse than average, only about 10% will say worse. My point is, having false beliefs doesn't always exert a strong evolutionary pressure.


Derk Pereboom posits that there is no free will but that fact is not inconsistent with moral responsibility.

http://derk-pereboom.net/views/


Most people miss the point when they discuss free will because they focus on the idea of whether we are free to make decisions in real time.

Free will is something that happens OVER TIME.

In other words, you have free will to the extent that your previous decisions allowed you to survive, which was in turn based on still earlier decisions. You are constantly accumulating and changing "code in your program", so to speak, because you can reflect on the results of your pre-defined behavior.


Is this another counter-Sokalian paper for fun?


Can anyone dumb this down for me?


Suppose you're told that somebody has halved an apple and put the two pieces in separate boxes. You are given a box, and the other is sent to a friend on the other side of the planet, who phones you when it arrives.

You both open the boxes and describe what you see. "Oh my half has the stalk", "oh, mine doesn't". "I see 5 pips", "so do I". There are a bunch of correlated (and anti-correlated) observations about the two halves.

Nothing surprising about this.

The quantum equivalent seems to work the same way at first glance. However on closer inspection there are some weird correlations that suggest that:

* The apple wasn't cut in advance, but the two halves were created and put in the box just before they were opened.

* Or maybe the apple was cut in advance, but somehow the person cutting knew what observations I would make ('I have the stalk').

* Or maybe the boxes were never separated but remained paired in 5-d space and only on opening did the apple split.

* Or maybe I messaged the apple-cutter back-in-time to say what I would do.

* Or something else.

My box is so remote from the other that relativity (and practicality) surely means I'm causally independent of the other box. And I got to choose my observations freely, nothing influenced me. And yet we know one - or both - of these can't be true.

The mention of free-will in the paper is a bit cheeky, really.

IIRC in the literature 'parameter-independence' and 'outcome-independence' have been used to name the sorts of factors that feel bad to violate (did I have a free choice about what to observe?) versus supposedly more innocuous quantum magic.
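To see why the "pre-cut apple" story can't explain the weird correlations: a brute-force sketch (my own illustration, not from the paper) enumerating every way the two halves could carry predetermined answers to two possible questions each. No assignment beats the Bell/CHSH bound of 2, while quantum mechanics predicts 2√2:

```python
from itertools import product

# "Pre-cut apple" model: each half carries fixed ±1 answers for both
# questions it might be asked. a0/a1 are my half's answers to my two
# possible observations; b0/b1 are my friend's.
best = 0.0
for a0, a1, b0, b1 in product([1, -1], repeat=4):
    # CHSH combination of the correlations these fixed answers produce.
    S = a0 * b0 - a0 * b1 + a1 * b0 + a1 * b1
    best = max(best, abs(S))
print(best)  # 2: no predetermined assignment can exceed the local bound
```

Mixing such strategies probabilistically (the apple-cutter flipping coins) doesn't help, since an average can't exceed the maximum of its parts.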


> It asserts, roughly, that if indeed we humans have free will, then elementary particles already have their own small share of this valuable commodity. More precisely, if the experimenter can freely choose the directions in which to orient his apparatus in a certain measurement, then the particle’s response (to be pedantic—the universe’s response near the particle) is not determined by the entire previous history of the universe.

That about sums it up. I think (and I could be very wrong) that the last sentence refers to a particle's lack of dependence on previous state; that for a measurement, only the present state will inform the measure.

That seems to imply something about the idea of 'free will', that is, 'free will' in this context seems to refer to an independence from prior states. Loosely, we can choose to step out of the path of our past's falling dominoes and change the course of our lives. Or something like that.


Is that basically saying that at some level, we should be able to see effects that have no cause? Is that what free will boils down to?


I don't know. "effects with no cause" makes me think of the 'quantum soup' at zero-point energy levels: things just appearing and disappearing in some random way. I don't know if that could be classified as 'causeless' though.


More like effects whose cause we can't ascribe... would have a free will component to their cause. It seems a bit silly because wherever you can ascribe a cause, there's no free will, so when is there free will?


> so when is there free will?

I am open to the possibility that the answer is never.



