The summary is basically that, to the best of our knowledge, there is no free will, because decisions are caused by neural activity, which in turn is caused by sensory input and noise (though unlikely by quantum noise). However, humans have evolved a strong sense of agency because that is simply an efficient way to reason about machines that produce actions in response to the entirety of their sensory input (especially regarding parent-child relationships and mutual behavior correction). This neuroarchitectural bias is essentially an illusion of free will, wired so firmly into our brains that we cannot escape it. It is also the reason the idea of a God comes so intuitively to many of us: an invisible actor that can explain otherwise inexplicable chains of causation and can serve as a very effective metaphor for behavioral error correction (as a proxy for actual social repercussions, and hence relieved of all the complicated, and therefore fallible, power relations to actual social error-correction instances).
There are obviously also mechanisms for creating new information and action based on past experience (and thus for producing new, unforeseen behavior). These mechanisms can clearly be implemented on the neural machinery of the brain: something like them can already be approximated by Google Deep Dream, which creates new, unforeseen images based on previous inputs.
Whether the human can always verbally describe the decision tree (or whatever other decision mechanism is used) is another question. But even if they cannot, so what? The decision is made somewhere deep in the net, and the verbal processor simply does not have access to it. It's still the network making a decision...
So what makes you say that free will is an "illusion"? Our brains obviously soak up the information and then make future decisions based on that information (subject to effectiveness of learning, etc...).
To summarize, given all this new information: no, deterministic machines do not contradict free will, because those machines are intelligent, have feedback loops, and can (deterministically) make intelligent decisions based on the information.
I am the pre-existing condition that (largely) determines my choices. That's what makes my decisions mine and not just 'free' decisions devoid of context, responsibility or attribution.
So if my decisions are made by me and are not coerced or biased by limited access to information, then they are mine and they are free and I will happily accept responsibility for them. But magically occurring decisions free of conditions and not influenced in any way by my actual mental state or faculties are simply not my decisions.
A construct that can react in multiple ways to a single given set of inputs, but does so by combining them with internal inputs that are non-deterministic and essentially random in nature, also intuitively doesn't have 'free will'.
What you need to really satisfy the intuitive concept of 'free will' is some analytical agency, external to our physical reality, which affects the outcome in some purposeful way. So, basically, a 'soul'.
Of course, to move past the 'intuitive' sense we're gonna need to actually rigorously define 'free will', which is something that is curiously lacking in virtually all discussions of this kind of stuff.
Then intuition is wrong, as is often the case. Either the decision is random, or it is mine, or it is somebody or something else's. There can be combinations of those factors, and of course that is actually the usual case. In fact arguably in practice it's always the case.
> What you need to really satisfy the intuitive concept of 'free will' is some analytical agency, external to our physical reality, which affects the outcome in some purposeful way. So, basically, a 'soul'.
But any analytical agency is going to encapsulate state. Moving that outside our physical bodies is just kicking the can down the metaphysical road but doesn't actually solve anything.
If we want to invoke pure randomness, there's always Quantum Mechanics. In fact there's a post on the intersection of quantum mechanics and biology on the front page right now. But of course random input doesn't seem very 'free' either.
My point is that 'free will' in the abstract isn't an agency. To have agency there must be an agent and agents have state. When we are talking about free will, we should be talking about the free will of the agent. To the extent that the agent made the decision without undue external influence then they have free will. Can we ever be free of ourselves?
Edit: btw +1, good post and I definitely concur with your last point.
Think about it. Isn't there something curious about this (probably innate) way of representing actions that were initiated inside our brains?
There are more examples like these, e.g. split-brain patients, where the speaking hemisphere simply rationalizes whatever the non-speaking hemisphere does. All of this points toward the same idea: our feeling of being an entity that can choose in each moment what to do next misrepresents what actually happens inside our brains. Under the hood, every thought and every movement can be traced back either to noise or to a recurrent control program that was in turn shaped by a reward-maximizing mechanism and by learned statistics of real-world dynamics. On top of that there is an episodic self-representation which basically tells a story about itself, and which is biased to represent autonomously moving objects, such as the structure it is part of, as having free, self-initiated intentionality. The story it tells itself about itself can of course affect future actions, but again the way it does so is fully contingent on the laws of nature, on genetics, on individual life experience, etc.
If someone stopped caring about their life after reading this, because they are not in control of their thoughts and movements anyway, then even that is fully contingent on the experience of reading what I am writing right now, and society might decide to contain said subject for everyone's safety (i.e. to ensure their continued flow of reward signals). This exposes the role of the innate feeling of free agency as a mechanism of behavioral control: we need this representation to efficiently attribute certain outcomes to certain individuals so that we can correct their behavior.
But we don't need to give up on anything knowing this. We can still do blame attribution. In fact we can probably gain something by improving the representation of ourselves and of agents in general: everybody is deeply shaped by their individual experiences, and it is often quite insightful to go on 'auto-pilot', ignore our representational urge to be our own initiator for a while, and just see what the recurrent circuitry in our brains can come up with.
For many people, a brain fully described by physics does actually have free will.
The point is to change people's intuition about free will, based on all this new information we've discovered in the past century.
"external to our physical reality"
There is no need for that.
Is the world inside the game "No Man's Sky" "external" or "internal" to our physical reality?
A soul is something that runs on the hardware of the brain, and thus can have different properties than the physical properties of neurons. It's virtual. No less real for that, though.
Well no, that's kind of the point. If it's run on the brain's hardware (wetware?) then it's just a physical machine following physical laws, and it's no more or less conscious and has no more or less free will than a computer.
As we understand the 'natural' world, there's no room for free will. Your three sources of data for choosing your state n+1 are your state n, your perceptions of the world around you, and maybe some truly random factor.
Exactly. The only difference in modern times is that a brain is orders of magnitude more complex than a computer. None of the scientists working on replicating consciousness think that today's computers have consciousness comparable to a human's. They all understand that the complexity needs to go up MANY times before we can talk about it. But it's still very clear that consciousness (with free will and all its other aspects) is definitely possible in a (future, much more advanced) computer.
> Your three sources of data for choosing your state n+1 are your state n, your perceptions of the world around you, and maybe some truly random factor.
That IS free will!
Perhaps you define it in some other way? If you say that "free will" is not possible when the decisions of such an entity are (partly) based on past experience? Well, in that case there is no living entity that we know of that has your definition of "free will", so what's the point of trying to recreate it? How would it even look?
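The "state n, perceptions, random factor" picture from the parent comment can be sketched as a toy transition function. The function body, the state encoding, and the inputs below are my own invented illustration of that idea, not anything from the thread:

```python
import random

def next_state(state, perception, rng):
    """Toy agent step: new state is a function of the current state,
    the sensory input, and a random factor -- the three data sources
    the parent comment names."""
    # Deterministic part: depends only on the current state and the input.
    base = (state * 31 + sum(ord(c) for c in perception)) % 1000
    # The "truly random factor" (pseudo-random here, so with a fixed
    # seed the whole run is actually reproducible).
    noise = rng.randint(0, 9)
    return base + noise

rng = random.Random(0)
state = 0
trace = []
for perception in ["light", "sound", "light"]:
    state = next_state(state, perception, rng)
    trace.append(state)
print(trace)
```

Whether this counts as "free will" is exactly the question at issue: each decision is a function of the machine's own history plus noise, yet it is still unmistakably that machine's decision.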
And the thing about it is, the idea that a person reaches a decision unconsciously several moments before they consciously "feel" they "make the decision" is threatening to the idea of "free will".
But what exactly is being threatened? Does a person expect their decision to be reached without any physical precursors? Do they expect one magic addition of pros and cons to be registered at the moment they subjectively experience "a decision"? I'm using hyperbole not to discount the importance of this phenomenon but to highlight how you have a "highly valued experience" that is simultaneously extremely vague. Psychologists would do well to study why people value such experiences.
 The User Illusion: Cutting Consciousness Down to Size, Tor Nørretranders, http://www.goodreads.com/book/show/106732.The_User_Illusion
If this were the case, freedom would not exist, since our whole life experience predisposes us to unconsciously exercise our freedom of will in certain ways.
Here is an amusing example:
I'm 12 and I want to try using a big person's hammer for the first time. My annoying little brother is beside me (as always... sigh).
In mid-air, as I swing the hammer towards the nail, he yells (right in my ear): "You're going to hammer in that nail, and because I knew this before it happened, you didn't decide to hammer it on your own".
In this example the lack of causality is evident, and the LBAF (Little Brother Annoyance Factor) is enormous.
The parallel can be made with the mind. It's not because we become cognitively aware of our choices fractions of a second after brain activity that seems to be decisional that we didn't "will" it; for all we know, this activity is the gist of willing.
Furthermore, there is no indication that our cognitive experiences do not mold our subconscious behaviours, so much so that this subconscious activity naturally corresponds to our actual will.
Clearly, it is not sufficient to break a misconceived definition of will in order to claim that freedom of will does not exist; with the argument used, one would also need to prove that this subconscious brain activity is incoherent with our conscious activity.
The problem I have with statements like that is that such statements tend not to provide an alternative meaning for freedom that most people would accept.
It seems better to say that most people comprehend freedom in a fashion that's some combination of incoherent, self-contradictory and trivial.
I.e., it's better to say people comprehend freedom as you say, but such a comprehension doesn't make sense if you look at it logically.
One thing you might say is that the concept of free and unfree choice makes sense in the informal human framework of control and blame: those who freely choose things we don't like get blamed for it, and saying people should be free is saying their behavior should be regulated by informal, unconscious interactions rather than formal, rules-based systems.
Now of course, in general, it's not a very good solution, because it only masks the problem. The problem itself is that you don't fully accept and understand all your feelings, and instead label some of them as bad. On top of that, if you don't accept the feelings but instead try to mask them with thoughts, that just makes them grow, because the feelings are not being let out of the system and start living lives of their own.
What happens during meditation is that you force out all of the thoughts, so that the feelings that exist in you become much clearer. You can finally switch your attention to the feelings that have been craving it. Then you become more aligned with them, understand them better, and they stop being problematic. Each time you stop meditating, it's just so much easier to notice that certain thoughts come from feelings, as opposed to "just randomly initiating".
So it's not really about being aware of the thought process during meditation; it's more about using the experience gained from meditation as a reference point, in order to notice deviations from it and how they are experienced.
Small recent example: I experienced my illness without labeling it at all. It's just a feeling, from an experiential, non-language standpoint / 'a feeling standpoint', a mindful standpoint perhaps -- it's hard to talk in language about stuff that I don't experience in language. Equanimity is awesome.
- Downplays how inaccurate the classifiers are (55-60% accuracy according to the figures).
- No table of the actual left/right button-push frequencies, so we can't compare this to the empirical rate of chance (they assume 50%; I would bet £100 that it's not 50%).
- The exclusion policy for people who don't push buttons the right way was inadequately justified, particularly given the above.
- Vague explanation of their statistical methods, even in the supplementary material. In particular, I can imagine several ways to interpret what they said about the ANOVA; it shouldn't be up to me to guess what they did, the paper should tell me.
- No use of hold-out data sets.
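The base-rate complaint in the second bullet can be made concrete with a toy check. The 58/42 split below is invented for illustration, not data from the paper; the point is that a "decoder" which always predicts the subject's more frequent button already lands in the reported accuracy range whenever presses aren't balanced:

```python
def baseline_accuracy(presses):
    """Accuracy of always guessing the more frequent of the two buttons."""
    left = presses.count("L")
    right = presses.count("R")
    return max(left, right) / len(presses)

# Hypothetical session where a subject favours the right button 58% of the time.
presses = ["R"] * 58 + ["L"] * 42
print(baseline_accuracy(presses))  # 0.58 -- already inside the reported 55-60% range
```

Without a table of the actual press frequencies, 55-60% decoding accuracy cannot be distinguished from simply exploiting an unbalanced base rate.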
After a few repetitions, it became apparent why this may be so. It's the instructions. I'm to move the finger "immediately". My brain prepares the movement because it wants to do it "immediately" after "deciding".
Here's a more relaxed setup. Alternate between looking at two fingers at a relaxed pace. At any moment, decide to move the finger you're currently looking at after silently counting to five.
Introspectively, I find no imagined movement before the decision with this task.
Is there an expert here who can comment on the accuracy of this 2008 study, given the more recent news about bugs in fMRI software?