Unconscious determinants of free decisions in the human brain (2008) [pdf] (rifters.com)
62 points by jimothyhalpert7 on Aug 18, 2016 | 25 comments



I liked this interview with Judea Pearl on free will: https://youtube.com/watch?v=sg7Oq4suH_E

The summary is basically that, to the best of our knowledge, there is no free will, because decisions are caused by neural activity, which in turn is caused by sensory input and noise (but probably not by quantum noise). However, humans have evolved a strong sense of agency, because that is simply an efficient way to reason about machines that produce actions in response to the entirety of their sensory input (especially in parent-child relationships and mutual behavior correction). This neuroarchitectural bias is essentially an illusion of free will wired so firmly into our brains that we cannot escape it. It is also the reason the idea of a God comes so intuitively to many of us: an invisible actor that can be invoked to explain otherwise inexplicable chains of causation, and that serves as a very effective metaphor for behavioral error correction (a proxy for actual social repercussions, relieved of all the complicated and therefore fallible power relations involved in actual social correction).


How does seeing a human as "neural activities which in turn are caused by sensory input and noise" contradict the idea of free will? The neural activity is obviously so complicated that it can make "sense" of the sensory input and the noise, and make intelligent decisions based on past experience, on its own generalizations of that experience, on external ideas, etc.

There are obviously mechanisms for creating new information and action based on past experience (and thus also new, unforeseen behavior). Such mechanisms can clearly be implemented on the neural machinery of the brain; something like them can already be approximated by Google's Deep Dream, which creates new, unforeseen images from previous inputs.
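Not the actual Deep Dream code, but a minimal sketch of the mechanism it relies on: gradient ascent on an input so that it increasingly excites features a network has already learned. The toy network below is randomly initialized (no trained weights are assumed); the point is only that a fully deterministic procedure can still produce an output nobody foresaw.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Stand-in for a trained vision model (here just a random conv stack).
    net = nn.Sequential(
        nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
        nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
    )

    img = torch.rand(1, 3, 64, 64, requires_grad=True)  # starting "input image"

    for _ in range(20):
        loss = net(img).norm()    # "excite what the net already responds to"
        loss.backward()
        with torch.no_grad():
            img += 0.05 * img.grad / (img.grad.norm() + 1e-8)
            img.grad.zero_()

    # Every pixel change followed deterministically from the starting input,
    # the fixed weights and the update rule; the result is still "new".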

Whether the human can always verbally describe the decision tree (or whatever other decision mechanism is used) is another question. But even if it cannot, so what? The decision is made somewhere deep in the net, and the verbal processor does not have access to it. It's still the network making a decision...

So what makes you say that free will is an "illusion"? Our brains obviously soak up the information and then make future decisions based on that information (subject to effectiveness of learning, etc...).


That quote makes brains sound like deterministic machines. Which would contradict free will.


The notion that determinism implies a simple, predictable clockwork universe is obsolete. Look up "chaos theory": it describes how even a handful of deterministic rules, applied to a vast number of elements, can very quickly produce behavior that is chaotic and practically unpredictable, even though it remains deterministic. Especially if those rules include feedback loops. The BBC has a very good documentary on this called "The Secret Life of Chaos".

To summarize, given all this new information: no, deterministic machines do not contradict free will, because those machines can be intelligent, have feedback loops, and (deterministically) make intelligent decisions based on the information.
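As a minimal illustration of that point (my own toy example, not from the documentary), here is the logistic map: a single deterministic update rule whose trajectories from two almost identical starting points diverge completely within a few dozen steps.

    def logistic(x, r=4.0):
        return r * x * (1 - x)          # one fully deterministic rule

    a, b = 0.2, 0.2 + 1e-9              # nearly identical initial conditions
    for step in range(1, 61):
        a, b = logistic(a), logistic(b)
        if step % 10 == 0:
            print(f"step {step:2d}: a={a:.6f}  b={b:.6f}  diff={abs(a-b):.2e}")

Nothing here is random, yet after enough iterations the two runs are, for all practical purposes, impossible to predict from one another.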


For some frankly ridiculous definition of free will, yes. I think the definition of free will this assumes - impossible to predict from pre-existing conditions - is synonymous with random choice. But random choices don't seem to be very free. Where is the agency?

I am the pre-existing condition that (largely) determines my choices. That's what makes my decisions mine and not just 'free' decisions devoid of context, responsibility or attribution.

So if my decisions are made by me and are not coerced or biased by limited access to information, then they are mine and they are free and I will happily accept responsibility for them. But magically occurring decisions free of conditions and not influenced in any way by my actual mental state or faculties are simply not my decisions.


A construct that can react in only one way to any given set of inputs (including its internal state, etc.) intuitively doesn't have 'free will'.

A construct that can react in multiple ways to a single given set of inputs, but does so by combining them with internal inputs that are non-deterministic and essentially random in nature, also intuitively doesn't have 'free will'.

What you need to really satisfy the intuitive concept of 'free will' is some analytical agency, external to our physical reality, which affects the outcome in some purposeful way. So, basically, a 'soul'.

Of course, to move past the 'intuitive' sense we're gonna need to actually rigorously define 'free will', which is something that is curiously lacking in virtually all discussions of this kind of stuff.


> A construct that can react in only one way to any given set of inputs (including its internal state, etc.) intuitively doesn't have 'free will'.

Then intuition is wrong, as is often the case. Either the decision is random, or it is mine, or it is somebody or something else's. There can be combinations of those factors, and of course that is actually the usual case. In fact arguably in practice it's always the case.

> What you need to really satisfy the intuitive concept of 'free will' is some analytical agency, external to our physical reality, which affects the outcome in some purposeful way. So, basically, a 'soul'.

But any analytical agency is going to encapsulate state. Moving that outside our physical bodies is just kicking the can down the metaphysical road but doesn't actually solve anything.

If we want to invoke pure randomness, there's always Quantum Mechanics. In fact there's a post on the intersection of quantum mechanics and biology on the front page right now [0]. But of course random input doesn't seem very 'free' either.

My point is that 'free will' in the abstract isn't an agency. To have agency there must be an agent and agents have state. When we are talking about free will, we should be talking about the free will of the agent. To the extent that the agent made the decision without undue external influence then they have free will. Can we ever be free of ourselves?

Edit: but +1, good post, and I definitely concur with your last point.

[0] https://news.ycombinator.com/item?id=12314600


The illusion is actually measurable. For example, when you cause activity in cortical motor neurons by direct stimulation, the subject will involuntarily move, say, their arm, but will later convincingly explain that they moved it on purpose, because of an itch or because they just felt like moving it.

Think about it. Isn't there something curious about this (probably innate) way of representing actions that were initiated inside our brains?

There are more examples like these, e.g. in split-brain patients, where the speaking hemisphere simply rationalizes whatever the non-speaking hemisphere does. All of this points toward the same idea: our feeling of being an entity that can choose in each moment what to do next misrepresents what actually happens inside our brains. Under the hood, every thought and every movement can be traced back either to noise or to a recurrent control program that was in turn shaped by a reward-maximizing mechanism and by learned statistics of real-world dynamics. On top of that there is an episodic self-representation which basically tells a story about itself, and which is biased to represent autonomously moving objects, including the very structure it is part of, as having free, self-initiated intentionality. That story it tells itself about itself can of course affect future actions, but again, the way it does so is fully contingent on the laws of nature, on genetics, on individual life experience, etc.

If someone stopped caring about their life after reading this, because they are not in control of their thoughts and movements anyway, then even that would be fully contingent on the experience of reading whatever I am writing right now, and society might decide to contain said subject for everyone's safety (i.e. to ensure their continued flow of reward signals). This exposes the role of this innate feeling of free agency as a mechanism of behavioral control: we need this representation to be efficient at attributing certain outcomes to certain individuals so that we can correct their behavior.

But we don't need to give up on anything knowing this. We can still do blame attribution. Actually, we can probably gain something by improving our representation of ourselves and of agents in general: everybody is deeply shaped by their individual experiences, and it is often quite insightful to go on 'auto-pilot', ignore our representational urge to be our own initiator for a while, and just see what the recurrent circuitry in our brains can come up with.


Intuition is subjective. A long time ago it was very un-intuitive that the earth was spherical.

For many people, a brain that is fully described physically does actually have free will.

The point is to change people's intuition about free will, based on all this new information we've discovered in the past century.

"external to our physical reality" There is no need for that.

Is the world inside the game "No Man's Sky" "external" or "internal" to our physical reality? Soul is something that is run on the hardware of the brain, and thus can have different properties than the physical properties of neurons. It's virtual. No less real for that, though.


> Soul is something that is run on the hardware of the brain

Well no, that's kind of the point. If it's run on the brain's hardware (wetware?) then it's just a physical machine following physical laws, and it's no more or less conscious and has no more or less free will than a computer.

As we understand the 'natural' world, there's no room for free will. Your three sources of data for choosing your state n+1 are your state n, your perceptions of the world around you, and maybe some truly random factor.
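A toy sketch of that framing (my own, with made-up names, not anything from the paper): the next state is just a function of the current state, the current percept, and an optional random term; nothing else enters the computation.

    import random

    def next_state(state, percept, noise_scale=0.0):
        deterministic = 0.9 * state + 0.1 * percept   # state n plus perception
        noise = random.gauss(0.0, noise_scale)        # the "truly random factor"
        return deterministic + noise

    state = 0.0
    for percept in [1.0, 0.5, -0.2, 0.8]:
        state = next_state(state, percept, noise_scale=0.05)
        print(round(state, 3))

Whatever richness the agent has lives inside next_state; the question is whether anything in there deserves the name 'free will'.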


> has no more or less free will than a computer

Exactly. The only difference in our modern times is that a brain is orders of magnitude more complex than a computer. None of the scientists working on replicating consciousness think that today's computers have consciousness comparable to a human's. They all understand that the complexity needs to go up MANY times before we can talk about it. But it's still very clear that consciousness (with free will and all its other aspects) is definitely possible in a (future, much more advanced) computer.

> Your three sources of data for choosing your state n+1 are your state n, your perceptions of the world around you, and maybe some truly random factor.

That IS free will!

Perhaps you define it in some other way? Do you say that "free will" is not possible if the decisions of such an entity are (partly) based on past experience? Well, in that case there is no living entity that we know of that has your definition of "free will", so what's the point of trying to recreate it? How would it even look?


I'd heard of this phenomenon from "The User Illusion" [1], so it was known of (in less detail) years before 2008.

And the thing about it is, the idea that a person reaches a decision unconsciously several moments before they consciously "feel" they "make the decision" is threatening to the idea of "free will".

But what exactly is being threatened? Does a person expect their decision to be reached without any physical precursors? Do they expect one magic addition of pros and cons to be registered at the moment they subjectively experience "a decision"? I'm using hyperbole not to discount the importance of this phenomenon but to highlight how you have a "highly valued experience" that is simultaneously extremely vague. Psychologists would do well to study why people value such experiences.

[1] The User Illusion: Cutting Consciousness Down to Size, Tor Nørretranders, http://www.goodreads.com/book/show/106732.The_User_Illusion


People tend to miscomprehend freedom as an act that is performed according to our will without any form of constraint or predisposition.

If this were the case, freedom would not exist, since our whole life experience predisposes us to unconsciously exercise our freedom of will in certain ways.

Here is an amusing example:

I'm 12 and I want to try using a big person's hammer for the first time. My annoying little brother is beside me (as always... sigh).

In mid-air, as I swing the hammer towards the nail, he yells (right in my ear): "You're going to hammer in that nail, and because I knew this before it happened, you didn't decide to hammer it on your own".

In this example, the lack of causality is evident and the amount of LBAF is enormous (LBAF: Little Brother Annoyance Factor)

The parallel can be made with the mind. The fact that we become cognitively aware of our choices fractions of a second after some brain activity that seems to be decisional does not mean we didn't "will" it; for all we know, that activity is the gist of willing.

Furthermore, there is no indication that our cognitive experiences do not mold our subconscious behaviours, so much so that this subconscious activity naturally corresponds to our actual will.

Clearly, it is not sufficient to break a misconceived definition of will in order to claim that freedom of will does not exist. With the argument used, one would also need to prove that this subconscious brain activity is incoherent with our conscious activity.


> People tend to miscomprehend freedom as an act that is performed according to our will without any form of constraint or predisposition.

The problem I have with statements like that is that such statements tend not to provide an alternative meaning for freedom that most people would accept.

It seems better to say that most people comprehend freedom in a fashion that's some combination of incoherent, self-contradictory and trivial.

I.e., it's better to say people comprehend freedom as you say, but such a comprehension doesn't make sense if you look at it logically.

One thing you might say is that the concept of free and unfree choice makes sense within the informal human concepts of control and blame: those who freely choose things we don't like get blamed for it, and saying people should be free is saying their behavior should be regulated by informal, unconscious interactions rather than formal, rules-based systems.


I agree


Supplementary information and methods for the paper can be found here [0]. A more condensed version of the methods (in presentation form) can be found here [1].

[0]: http://www.nature.com/neuro/journal/v11/n5/extref/nn.2112-S1...

[1]: https://courses.cs.ut.ee/MTAT.03.292/2014_fall/uploads/Main/...


For me, taking a meditation retreat in total silence showed me a lot about how thoughts just spontaneously jump up in my mind. Normally I don't notice it that often. And in normal situations I tend to act on a lot -- but not all -- of those thoughts, which then sets off a train of thought that may or may not lead to an action.


If you keep meditating and discover more about your mind, you will find that almost none of the thoughts that pop into our minds are random. They are all processes with quite specific purposes. A lot of it comes from suppressed feelings, past trauma, beliefs about what you "have to do" or how you're "supposed to feel", etc.


What kind of meditation do you do? When I do Vipassana I can't be fully aware of my thought processes since I'm focusing on 'feeling my body' in a non-judgmental way. I've been most aware of my thought processes in between breaks in meditation retreats.


This is obviously an area too big to adequately address in an HN comment, but just the broad strokes: the spontaneous thoughts you were referring to can basically (simplifying) be seen as the mind's way of avoiding certain feelings that it has labeled as "bad". For example, there can be some situation which you translate into feeling "alone" (only an example), and that feeling is labeled as "bad". Every time such a situation happens, that feeling will start to manifest in your system, but the brain will usually very quickly start to think some thoughts in order to distract you from the feeling. To the brain this sounds like a good idea, because there will be less of that "bad" feeling in your consciousness while you think.

Now of course, in general, it's not a very good solution, because it only masks the problem. The problem itself is that you don't fully accept and understand all your feelings, and instead label some of them as bad. On top of this, if you don't accept the feelings but instead try to mask them with thoughts, that just makes them grow, because the feelings are not being let out of the system and start living their own lives.

What happens during meditation is that you force out all of the thoughts, so that the feelings that exist in you become much clearer. You can finally switch your attention to the feelings that have been craving it. Then you become more aligned with them, understand them better, and they stop being problematic. Each time you stop meditating, it's just so much easier to notice that certain thoughts come from feelings, as opposed to "just randomly initiating".

So it's not really about being aware of the thought process during the meditation; it's more about using the experience gained from meditation as a reference point, in order to notice deviations from it and how they are experienced.


Thanks for showing the broad strokes. A lot of this seems very recognizable, although I couldn't put it into words like that.

Small recent example: I experienced my illness not being labeled at all. It's just a feeling, from an experiential, non-language standpoint / 'a feeling standpoint', a mindful standpoint perhaps -- it's hard to talk about stuff that I don't experience in language. Equanimity is awesome.


Really interested in this topic, not sure about this paper:

- Downplays how inaccurate the classifiers are (55-60% accuracy according to the figures).

- No table of the actual left/right button-press frequencies, so we can't compare this to the empirical chance rate (they assume 50% - I would bet £100 that it's not 50%; see the rough sketch after this list).

- The exclusion policy for people who don't push buttons the right way was inadequately justified, particularly given the above.

- Vague explanation of their statistical methods, even in the supplementary material. In particular I can imagine several ways to interpret what they said about the ANOVA - it shouldn't be up to me to guess what they did, the paper should tell me.

- No use of hold-out data sets.
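To make the base-rate point concrete, here is a rough back-of-the-envelope check (the trial count and base rates below are assumptions, not the paper's numbers): with ~100 classified trials, 55-60% accuracy is only marginally better than chance, and even that margin shrinks if subjects actually pressed one button more often than the other.

    from math import comb

    def p_at_least(k, n, p):
        """P(X >= k) for X ~ Binomial(n, p)."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    n = 100                               # hypothetical number of trials
    for acc in (0.55, 0.60):
        k = round(acc * n)
        for base_rate in (0.50, 0.55):    # assumed chance level vs a skewed one
            print(f"acc={acc:.2f} vs base={base_rate:.2f}: "
                  f"p={p_at_least(k, n, base_rate):.3f}")

Without the actual press frequencies and per-subject trial counts, none of these p-values can be computed from the paper, which is exactly the complaint.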


I find it easy to replicate this for myself. I cannot stop myself from sort of imagining the movement before the "decision".

After a few repetitions, it became apparent why this may be so. It's the instructions. I'm to move the finger "immediately". My brain prepares the movement because it wants to do it "immediately" after "deciding".

Here's a more relaxed setup. Alternate between looking at two fingers at a relaxed pace. At any moment, decide to move the finger you're currently looking at after silently counting to five.

Introspectively, I find no imagined movement before the decision with this task.


This research uses fMRI, but recently some bugs were found [0] which have called a lot of fMRI results into question. This includes, I think, many of the results regarding free will.

Is there an expert here who can comment on the accuracy of this 2008 study, given the more recent news about bugs in fMRI software?

[0] https://news.ycombinator.com/item?id=12032269


Common experience tells us we make decisions without consciously thinking about them all the time. When you do something a bit silly and someone asks "why did you do that?", you try to make up reasons when really you don't know why you did it. It was some subconscious process.



