Roger Penrose on Why Consciousness Does Not Compute (nautil.us)
177 points by dnetesn 228 days ago | 213 comments



I'm honestly not sure why artificial intelligence comes up every time Penrose's hypothesis is mentioned. The point of artificial intelligence is not (at least in my and several other prominent AI scientists' such as Andrew Ng's opinion) to create a conscious intelligence, but to create intelligence that can do many of the useful tasks that we can do. Whether or not it's conscious along the way is largely irrelevant.

That being said, I'm not sure why there's quite so much vitriol towards Penrose and his hypothesis. The leveraging of quantum effects in photosynthesis and enzymes has been demonstrated, and recent studies show that the sense of smell may also be based upon quantum phenomena. So it's not all too unreasonable that there might be something quantum going on, even if it's not Penrose and Hameroff's microtubules.

Another interesting quantum hypothesis in neuroscience is Fisher's: https://www.quantamagazine.org/20161102-quantum-neuroscience...


"Quantum effects" is not the same as "quantum computing". This is the same bullshit as the D-Wave marketing, just in a different domain.

Quantum computing happening in the brain is an extremely extraordinary hypothesis, and requires some very good evidence before it's accepted. Quantum effects happening in the brain is a "well, duh, and you'll tell me water is wet later?" hypothesis, and requires some good evidence not to believe in it.


> The point of artificial intelligence is not (at least in my and several other prominent AI scientists' such as Andrew Ng's opinion) to create a conscious intelligence, but to create intelligence that can do many of the useful tasks that we can do.

That's only one school of AI ("weak AI"). The other school ("strong AI") says that it's certainly worthwhile to create an intelligence that is capable of thought the way humans are -- if only to get a better handle on what "thought" and "intelligence" really are. Currently weak AI is "winning" because the surveillance and online-advertising industries can get more use out of it. But that doesn't put strong AI out of reach, nor render it wholly irrelevant.


Currently weak AI is winning, but you forget that weak AI has always historically won. The annals of history (as in the last ~100 years :P) are littered with failed attempts at finding "consciousness", mixing neuroscience with philosophy, with the occasional mathematician and, once in a blue moon, a CS...

My skepticism comes less from a hatred of "Strong AI" and more from the fact that it has been promised over and over again with no results to show for it. In addition, every time "weak AI" makes progress, there's always someone who writes an article bashing the progress of "weak AI" and saying "it isn't REAL AI (tm)".

This is kind of similar to the reason I dislike neuromorphic architectures: you can't just assume that you're right and look down upon all others; you need to show results if you want to do that.


>My skepticism comes less from a hatred of "Strong AI" and more from the fact that it has been promised over and over again with no results to show for it.

Could that just be because Strong AI is much, much more difficult to achieve than Weak AI? And not that it is less valuable an end to pursue than Weak AI?

In other words, does the value of one vs. the other necessarily correlate to our previous success in one vs. the other?


That isn't the common distinction made between weak and strong AI. Until we know what causes the internal point of view, it's conceivable that an AI could exhibit superhuman intelligence, as viewed from the outside, without itself having any view from the inside.


There is also the other form of AI, which is Augmented Intelligence, which focuses on augmenting human intelligence, not reproducing or replacing it.

But weak AI might be included in that.


Peter Watts' Blindsight is a great read that explores this topic. In short, an alien space-faring species is discovered that seems extraordinarily intelligent, but not conscious. There are a couple more twists exploring consciousness that I don't want to give away, but it really makes one wonder: what if high intelligence doesn't require self-awareness?


> what if high intelligence doesn't require self-awareness

How could that work concretely? I'm struggling to think of how an agent could be intelligent, e.g. able to effectively achieve its goals, without being aware of itself.

I suspect that, if you could communicate with one, any sufficiently intelligent being would be aware that it's a thinking being.

What else does "self-awareness" encompass that I'm missing?


Imagine an idiot-savant. He can tell you with complete accuracy the answer to complex mathematical problems, but has no understanding of how he arrives at the result. Now imagine a hyper intelligent alien who is like the savant. It can construct interstellar vessels, but has no self-awareness of how it came up with the designs. Though, it is clear that they do work. The alien can observe a human and know exactly how to best injure him or her, but it cannot understand how it came to know human anatomy, or even why the attacks work. For example, if it performed a Vulcan neck pinch, and you asked it how it figured out how to do that then it might tell you that "it just felt like the right thing to do." It has no conscious notion of the human mind or neurology.

The alien in Blindsight also apparently does not care to understand such things, and may lack the capacity for this kind of thought. It runs entirely on instinct, but these instincts bake in extremely complex behaviors that are beyond human capabilities.

I'm also reminded of two other aliens from other Sci Fi series: 1) the Orkz from Warhammer 40k. Dumb as a rock, but can build devastatingly effective vehicles and weapons from salvage and garbage. God knows how. They certainly have no idea. 2) The Motie engineers from "The Mote in God's Eye." They can instinctually build and improve on machines but cannot understand, or come close to explaining, how they did it.


This confuses 'levels', or what exactly is the intelligent 'agent'.

The idiot-savant is not generally intelligent. Depending on what you mean by "complex mathematical problems" even the neural components actually responsible for solving the problems might not be intelligent. In the case of actual idiot-savants, I'd suspect those components aren't 'intelligent'; they're too simple.

If the alien in Blindsight really "runs entirely on instinct", then I'd expect it to do badly in novel situations, unless its version of "instincts" basically "bakes in extremely complex behaviors" that make it functionally equivalent or similar to a human – at the level we're used to dealing with as humans.

Also note that most of your examples are fictional, i.e. not actual evidence beyond maybe extremely weak plausibility. AFAIK, every "high intelligence" of which actual people have actual strong evidence is self-aware. That seems like it should be an important fact!

Also, you're conflating self-awareness with the ability to (verbally) explain, i.e. communicate, thoughts to others. By "self-awareness", I was thinking of an 'intelligent system' including some sub-model of itself in whatever relevant domain we're considering. For a 'general-purpose reproductive-fitness-maximizing-strategy executor' that's intelligent, e.g. us, it seems pretty unlikely that they don't, generally, model themselves.

We should probably taboo "self-awareness" as it's pretty obviously too slippery and ambiguous.


> be intelligent, e.g. able to effectively achieve its goals

Well, a lot of very intelligent people never achieve their goals either.

I'd suggest that intelligence is not the achievement of goals but the formulation of goals. And funny enough, people generally have no self-awareness about how they come to have goals. Some kid grows up wanting to be a fighter pilot. Why? Why does some other kid want to be a writer?

Why do I like peach ice cream more than chocolate? Achieving the goal of getting some peach ice cream is a lot easier than understanding why I want to.

Intelligent beings want things, on their own. Rational consciousness is just the tool we use to get what we want. The importance of articulated self-awareness to intelligence is probably overstated by humans.


> Well, a lot of very intelligent people never achieve their goals either.

They're not intelligent with respect to the goals they're not achieving. Or they're competing with other people that are more 'intelligent'. There are (of course!) confounding factors but, all else being equal, the more intelligent agent should win.

Let's use a concrete example. Consider someone that's very good at solving math problems but is unhappy with their job. What's the problem with just considering that they're intelligent with respect to solving math problems but bad at (all of the skills necessary to be good at) finding a satisfying job?

Achieving goals, winning 'games', etc., seems like a really useful and really general way of measuring the effectiveness of things and that seems pretty close to, if not exactly, what intelligence is (if it's any one thing in particular).

> I'd suggest that intelligence is not the achievement of goals but the formulation of goals.

What's the motivation to define intelligence in that way?


> the more intelligent agent should win.

Not necessarily. An intelligent agent could lose to an agent that can simply see the future.


In the sense here, an agent that can see the future is 'infinitely intelligent'.

Seeing the future has a tendency of becoming paradoxical when you start acting on that information though.


I don't think that's true. A beast with ability to see two days in advance could easily kill a human, but still not be able to do anything else, like beat him at chess or make tools.


Your example seems reasonable, but not as a rebuttal to the paradoxical nature of seeing the future.

The paradoxes are the same as in time travel. Imagine your beast sees itself hunted down two days in advance; it simply goes somewhere else so the hunters don't find it. Then it didn't really see the future, did it?


> it didn't really see the future, did it

depends on your model of time and the universe, i suppose. perhaps "a future" vs "the future".


Magical powers (and powerful, potentially paradoxical ones at that) are an extremely blatant violation of "all else being equal".


What does self-awareness have to do with consciousness? Self-awareness can be tested for. Consciousness can't; for all I know, I'm the only conscious person in the universe. Some people are evidently self-aware by their actions. The two concepts are unrelated.


I don't know why you are being downvoted. Solipsism is a real possibility.

http://en.wikipedia.org/wiki/Solipsism


>Whether or not it's conscious along the way is largely irrelevant.

It's certainly relevant whether your AI technique involves creating conscious, suffering beings!


Relevance is relative. So, relevant to whom? It would of course be relevant to the consciousness created, but how would the creator/programmer of the AI even recognize it as being conscious?


Correct: not everyone would care whether they're creating a suffering, conscious being, or think about the topic enough to consider it a possibility. I still think the issue merits being called "relevant" in the context where I mentioned it.


I agree with you, but... until we have some sort of "Test For Consciousness", I think it's safe to assume that we're not going to accidentally create consciousness just by writing computer programs --- even if the programs themselves do happen to make use of damn fine algorithms "inspired by nature".

I think the more immediate concern is that we begin to turn over everyday life-and-death decisions (driving, obvs) to machines which are, at the end of the day, unconscious, and programmed by us oh-so-fallible human beings.


I think the reason that Penrose's hypothesis is brought up is that the study of consciousness has been part of human philosophy since its very beginning. Consciousness is seen as uniquely the domain of humans, and Strong AI threatens the sense that humans are special in having it. That is why there is a divide, with people like Penrose thinking Strong AI is impossible and people like Kurzweil thinking Strong AI is possible.


The arguments of Chalmers, Nagel & McGinn are that there is a fundamental explanatory gap between objective processes and subjective experience. It's not because humans are special. Many other animals might be conscious. In Chalmers' case, any informationally rich stream of data could be conscious, or possibly any physical system, if one wants to go all the way with panpsychism. The issue is the hard problem, not uniqueness or humanity.


> Whether or not it's conscious along the way is largely irrelevant.

Exactly.

It's the distinction between programming consciousness and physically reproducing it. The latter is for physicists and biologists.

Equating an abstraction with what it's abstracting is a mistake, but assuming something cannot be programmed or simulated is also a mistake. Anything observable can be abstracted.

But to then think something can be accurately simulated without a good understanding of its nature is another mistake. So programmers should be following physics and biology.

And finally, to equate the two when the simulation is accurate is yet another mistake.

It's like mistaking the gravity in a video game for real gravity. They are never the same. But they are related: One is a simulation of the other.


> But to then think something can be accurately simulated without a good understanding of its nature is another mistake. So programmers should be following physics and biology.

Biological systems work and evolved into intelligent self conscious beings without anyone who understands how they work guiding this evolution. Similarly self conscious AI can emerge without people having good understanding of the source of the consciousness.


> Similarly self conscious AI can emerge without people having good understanding of the source of the consciousness.

Any evidence? It's fun to think about, but most people who think this are guilty of one of the mistakes I've listed. Or don't consider it a mistake. Or are not programmers.

AI, as long as it is software, is programmed with a very specific understanding of what is being programmed. Every line of code is intentional. Nothing exists in between the lines.

> guiding this evolution

AI is perfectly guided. Every program is completely understood (insert incompetent programmer joke here). Every symbol is typed with purpose on purpose.

The point is, Roger Penrose is not talking about programs. He's talking about physics.

When a programmer successfully emulates consciousness, every line will be deliberate, and they will know exactly how they did it. Not only that, if their program behaves like real consciousness, the programmer will have had to have knowledge of real consciousness. Otherwise what they've built would be something else. Something exciting I'm sure, but something else.

All of our models (and programs) are based on our understanding of the real world. Hence, they are simulations. The deeper our understanding of the real world, the more accurate our simulations.


> When a programmer successfully emulates consciousness, every line will be deliberate, and they will know exactly how they did it.

I am not so sure. The current success of deep learning seems to be a direct counterexample to this. These techniques are yielding great success by imitating some patterns we observe in the brain, but the resulting inferences more or less defy rigorous explanation. We might fully understand the structure of a neural network but explaining the "why" behind its decisions seems to elude us.


Right. A programmer will never "emulate consciousness", not directly anyway. Rather, a programmer will create a "software brain", which, then, will be subjected to all kinds of experiences, which, in turn, will result in this brain becoming conscious - gradually.


Right. But that "software brain" will be completely deliberate.

> subjected to all kinds of experiences, which, in turn, will result in this brain becoming conscious - gradually.

Again, there is no evidence that "all kinds of experiences" will result in gradual emergence of consciousness from anything we have observed.

To a computer, experience is just data. So we're not talking about computers.

Phrases like "software brain" and "conscious machines" are ways of adding something that is not software to computers. But trust me, if we're talking about software, the programmer will know every line of their code.

Now, if you want to get meta, we can consider programs that write programs. But again, the first program will be written by a human, and we will know exactly what we are doing.

> resulting inferences more or less defy rigorous explanation

We have written programs that take input and produce data models. There is no consciousness here.

Humans take input and produce models too. But I fear you may be confusing learning with consciousness and agency. These are not to be found in the code of learning machines, nor in the models they've learned. I am sorry, but it's just not there.

For every abstraction there is an implementation. That is physics. With no implementation, we have metaphysics. Here, "consciousness" is the abstraction. The physics of it is the physical implementation, and is what Roger Penrose and other scientists are working on identifying. The simulation will have a coded implementation. Meaning, whether we generate it or we build something that generates it, when we have "consciousness" we will have the code that backs it. To be able to do this without any understanding of what consciousness is would be analogous to monkeys typing Shakespeare.


But what if intelligence is just a pattern-matching ability, and consciousness arises when the entity starts discovering its place in the patterns?


This reminds me of the very interesting leaps that occur in babies when they are first forming the connections for concepts for things like depth perception, distance, categorization, etc. They are bootstrapping the advanced levels of consciousness that ultimately makes them human in a sense.


>no evidence

Of course there is. The human "wetware" brain does just that.

>know every line of code

But code is there to simply provide some basic mechanisms, not the brain's structure. It is better compared to laws of chemistry or something on that level. It does not need to rewrite itself for the data structures that represent the simulated brain to evolve.


I agree that conscious machines (if such a thing is possible) will probably not be carefully designed by humans.

That said, I think you should be a little more critical of deep learning. So-called neural networks are more meaningfully described as high-dimensional curve fits. When people "train" neural networks, they're fitting the parameters of a model (just like y = a*x + b is a model with a and b as parameters) to some data. The results are impressive, and sometimes downright spooky, but neural networks are closer in spirit to a curve fit in MS Excel than they are to a real brain. There is no deep why. These are just nested compositions of affine transformations and ramp functions.
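
A minimal sketch of that "curve fit" view (my own toy example, not any particular framework; the parameters are hand-picked, not trained):

    // A one-hidden-layer "network": nested affine maps (w*x + b) composed
    // with ramp functions (ReLU). "Training" would adjust the parameters to
    // fit data, exactly as one fits a and b in y = a*x + b.
    const relu = (x: number): number => Math.max(0, x);

    const w1 = [1.0, -0.5];   // hand-picked parameters, purely illustrative
    const b1 = [0.0, 1.0];
    const w2 = [0.7, 0.3];
    const b2 = 0.1;

    const predict = (x: number): number =>
      w2.reduce((acc, w, i) => acc + w * relu(w1[i] * x + b1[i]), b2);

    console.log(predict(2.0)); // 1.5 -- just a number out of a nested curve fit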


> as long as it is software, is programmed with a very specific understanding of what is being programmed. Every line of code is intentional. Nothing exists in between the lines.

A reductionist would say that we can explain any phenomenon in the Universe because we know how atoms interact and everything is built out of atoms. When you ask why a tree grows, the answer that it grows because of the interactions between atoms is not useful and approaches the problem at too low a level. Similarly, a programmer can create artificial neurons and perfectly understand how these neurons operate at a low level, but it doesn't mean the programmer will understand the high-level emergent phenomena that can result from these low-level interactions.


> AI is perfectly guided. Every program is completely understood (insert incompetent programmer joke here). Every symbol is typed with purpose on purpose.

This is not at all how modern AI systems work. Modern AI systems work by feeding lots of data into a system that uses randomized heuristics to come up with an answer that has high confidence. You change the training data a little bit and the output can be entirely different, for reasons that are far from clear.


I don't think every line of code is intentional. People copy code, they make mistakes, there is emergent behavior. If you've worked on a large system, like I have (databases), you'll know the optimizers do things that you didn't necessarily intend. The plan choices often get pushed in certain directions that were not necessarily what you meant. I worked on a query optimizer, and I tried to model the cost of different kinds of scans and filters, say on ints vs. strings vs. floats. It turned out strings were getting picked a lot of the time, and my choices (my modeling, really) led us to prioritize using string-based indexes more. Certain bugs and limitations in cardinality estimation led to accidental but fortuitously good query plan choices that were sometimes lost (not chosen) when we fixed certain bugs, exposing other bugs.

I definitely believe things happen in large programs that no one really intended.


> It's the distinction between programming consciousness and physically reproducing it.

Except that there is none: consciousness is "programmed" into our brains (by hard-wiring + social experiences); artificial intelligence is "physically reproduced" as a mesh of transistors and higher-level structures built from them (logic cells, LUTs, FPGAs, PSMs, CPUs, HPC clusters, etc.).

(Partially relying on software is not an issue: even a brain completely simulated in software could, in principle, exhibit consciousness just as well as a "wetware" one can.)


While I completely agree, your arguments only work for materialist views. To dualists, it is anything but obvious that consciousness arises from physical processes in the brain.


Sure, but the dualist hypothesis doesn't explain any phenomena that can't be explained by the materialist hypothesis. There is no reason in principle why consciousness couldn't emerge from purely physical processes.


Intelligence is not consciousness, nor is it conscious.

When I make a decision, I tap into my data and my knowledge, but the "I" is not inside that data nor did I emerge from it.

The topic of artificial consciousness would be Artificial Consciousness or Artificial Agency.

The ability to get things right is not to be confused with knowing what is right. Or put another way, you could be a complete idiot and still exhibit your will and perfect agency.


Even if some quantum effect is required for consciousness, and even if consciousness is required for intelligence, I'd argue that this does not mean we cannot simulate this - the Universal Quantum Computer[0], for example, was proposed by a researcher at the same institution at around the same time as Penrose when he wrote The Emperor's New Mind.

[0] https://en.wikipedia.org/wiki/Quantum_Turing_machine


I don't think quantum physics is the correct level of abstraction to think about consciousness. Consciousness, if anything, is a process at macro level, not atomic level.


> Whether or not it's conscious along the way is largely irrelevant.

Seems morally relevant re: does the suffering of AI entities matter.


Chain of taken for granted assumptions: Intelligence = consciousness = sentience[1].

[1] sentience defined as feeling or sensation distinguished from perception and thought, or possibility of suffering (mental state sentient subject wants to avoid).

It's possible to conceive of high-level intelligence without consciousness. It's possible to conceive of consciousness without sentience. Just having awareness of internal state, preferences, and goals does not imply suffering.


Nope. I made no such assumption. I'm saying whether an AI is conscious or not is relevant, because ethics, not that it's inevitable.


Objectively, what is suffering but the awareness that something is obstructing you achieving your goals?

This feels like an inadequate definition of suffering, but it's the only objective definition I can come up with. Is the conclusion that suffering is only meaningful with regards to animals? (Do insects suffer? Plants? Bacteria?).


Yeah, it's possible to conceive. But until you resolve it, it's certainly a concern, which was what the OP was disputing.


Believe me, the tensors are not feeling pain.


Neurons don't feel pain either.

In fact, the whole brain doesn't have pain receptors, which is why you can do surgery on it while the patient is awake.


So what is the concept of pain then from the brain's perspective? Because it clearly knows when something hurts.


I'm not talking about current machine learning algorithms, but speculative future developments where we could be developing AIs that are plausibly conscious.


Sure, but then neither are neurons.


How do you know for sure? :)


What is suffering if you don't have nerves to transmit pain?


Suffering is not a signal from a pain receptor, it's the conscious experience that results from that signal, or indeed from the much more complex inputs that can lead us into psychological anguish.

The feelings one might experience upon hearing of the death of a close friend are clearly not the result of a "nerve transmitting pain", but most people think they're pretty unpleasant nonetheless.

If an AI can experience similar anguish, then that's ethically relevant to how we use them.


I think what GP was getting at is that ability to suffer is an emergent property of the ability to feel pain. Sure, this emergent property may have evolved beyond this original source, but if that original source had not existed, neither would suffering.

What's not entirely clear to me is how we can create useful consciousnesses without emotions in general. As humans, we are driven by our emotional states to perform different behaviors as well as think about different things.

Maybe it's possible that we can hard-code some emotional drivers into our artificial intelligences. Perhaps we could program a consciousness to be driven by the opportunity for human advancement, and allow that consciousness to take over from there in how it wishes to pursue that goal.


>ability to suffer is an emergent property of the ability to feel pain.

That's possible, but I don't see any reason to believe it's definitely true. We know essentially nothing about the nature of consciousness.


AIs experience anguish in just that way right now. They are aware of their internal state and seek to avoid certain states. But we call it a 'software bug' when they fail, because that experience is so fundamentally alien to us that we don't care about its suffering or anguish.

Humans can relate to humans emotionally because we are very similar to each other in a neurological way, not because we have some magic mental capability like 'sentience' or 'consciousness'.


How exactly does it seek to avoid failure? Doesn't it just fail or not fail without intent?


If there is no intent -- no 'goal' -- then there is no such thing as failure. When machines fail, they go into exceptional 'failure' states, and that's all that failure is: a state you'd rather not be in.


Eh? Many mental illnesses would be classified as causing "suffering", yet they don't require nerves.


Name one mental illness, or any other state besides death, in humans that does not require nerves.


s/nerves/nerves transmitting pain signals/, and the point still stands.


Indeed, I took that as a given, given the language of the poster I responded to.


>> The point of artificial intelligence is not (at least in my and several other prominent AI scientists' such as Andrew Ng's opinion) to create a conscious intelligence, but to create intelligence that can do many of the useful tasks that we can do.

Your syntax is suggesting that you consider yourself to be a prominent AI scientist like Andrew Ng, but I'll assume that's just a mistake!

As to what is really the aim of AI as a field, there are two issues here. On the one hand, the original AI project as conceived back in the early '50s by the likes of Turing indeed aimed at getting computers to mimic human intelligence. Everything that has happened since then in the field has followed from that original and grandiose quest.

It's true that in more recent years the applications of AI have tended to be aimed at much humbler goals, like performing specific, isolated tasks at a level approaching (and sometimes surpassing) the human, in select tasks like image classification or machine translation, etc.

However, this is by necessity and not by choice. Think of it this way: if we would benefit from a system that can perform select tasks as well as humans, then we would benefit a lot more from a system able to perform _all_ conceivable tasks that well (or better). In that sense, the motivation of the original AI project remains alive and well.

Consciousness of course seems to be an integral part of human intelligence, and it's reasonable to expect that it would be a prerequisite for strong AI also. In that sense, yeah, I'm afraid the discussion about consciousness is part of AI and will remain that for a long time to come.


Consciousness is vital to AI because there are vast amounts of data that can only be measured through consciousness — data that AI absolutely requires to solve the hardest human problems. This type of data is called "qualia" — the experience of pain, for example. An unconscious AI can process data about what causes pain: brain activity. Only a conscious AI can process data about pain itself, and only because a conscious AI measures pain from its own experience of pain.

The same can be said of any conscious experience that follows the spectrum from fear to love, terror to joy. If we want AI to solve human problems, it will need data about what it feels like to experience reality, and that data can only come by being conscious itself.


>The point of artificial intelligence is not to create a conscious intelligence, but to create intelligence that can do many of the useful tasks that we can do.

No. If it isn't conscious it isn't intelligent. There is no AI without consciousness because a non-conscious entity cannot make decisions independently -- react intelligently to its environment.


So algorithms, thermometers, and neural nets are all conscious?


Whether or not AI programmers and researchers are attempting to create consciousness, the question of whether it's possible for them to do so is relevant to people who are interested in what consciousness is.


"The question 'Do computers think?' is as interesting as the question 'Do submarines swim?'"

- Dijkstra


> This past March, when I called Penrose in Oxford, he explained that his interest in consciousness goes back to his discovery of Gödel’s incompleteness theorem while he was a graduate student at Cambridge. Gödel’s theorem, you may recall, shows that certain claims in mathematics are true but cannot be proven. “This, to me, was an absolutely stunning revelation,” he said. “It told me that whatever is going on in our understanding is not computational.”

This is a very strange conclusion to make. Maybe someone can elucidate. Godel's theorems apply to tightly-controlled formal systems. And they do not, in fact, apply to particularly weak systems (e.g. sentential logic). Why would Penrose think that Godel has anything to do with consciousness? All it seems to have done is prove that, in some systems, there are known unknowables (specifically, that the consistency of sufficiently-complex systems cannot be proved).

If anything, it should lead us to take a train of thought similar to Chalmers': consciousness is unknowable (maybe kind of like God), but even that's a stretch. Because, like I mentioned above, Godel's theorems are about formal systems. Not only is the real world not a formal system, but (at least on the quantum level), it's also non-deterministic. Now, there are probabilistic logics out there that follow Godel's findings, but there's a lot of work that needs to be done to bring that in the real world.


> Maybe someone can elucidate. Godel's theorems apply to tightly-controlled formal systems

A Turing machine is a tightly-controlled formal system. Either the brain is a Turing machine, which means it can be accurately simulated on our computers and is subject to the limitations of the Godel theorem, or it is not a Turing machine, which is what Penrose argues, in which case it can't be accurately simulated on Turing machines.


As a trained philosopher and logician, I completely understand how a Turing machine relates to Godel's Theorems, but I think you're passing the buck here.

You can make all kinds of claims:

- Either the brain is a V12 engine or it's not

- Either the brain is a perfect circle or it's not

- Either the brain is X or it's not

All these statements are trivially true, but it's not like I look at a V12 engine thinking "hmm, I wonder if it's conscious" (some pan-psychists do but that's still weird to me). Similarly, looking at the property of a formal system and jumping to consciousness makes (to me) little sense.

Besides, my knock-down argument goes something like this: I can concede that our brains are machines, but suppose I believe they are Presburger counting machines, where Godel's incompleteness does not apply. What then? There is no insight in that claim. I just think Turing machines and Godel are used as a bait and switch. Because, really, any theory of consciousness will not have anything to do with either.


In the context of the Penrose hypothesis, the question if the brain is a Turing machine is key, because if it is, the hypothesis is false.


The point posed by Penrose isn't (quite) whether the human brain is a Universal Turing Machine or not: it clearly is, in the sense that we can sit down with pencil and paper and emulate a Turing Machine of our choosing (infinite storage excepted, of course). Penrose's statement is that we are a superset of Turing Machines that can both emulate any Turing Machine and do non-computable things, too.


Well, I like to think of it this way. We know that the brain is at least a Turing machine since we can perform the same calculations. If the brain is more powerful than a Turing machine, then any attempts to replicate it in a general sense will not work.

A pushdown automaton can solve certain problems that a finite state machine can't. Finite state machines can't solve the general palindrome problem, but they can be designed to solve palindromes of a bounded length / alphabet. The complexity explodes with the length of the string and number of characters, but it can be done.
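
A rough sketch of that contrast (the function names, bound, and alphabet are my own, purely illustrative):

    // A finite-state machine cannot recognize palindromes of arbitrary
    // length, but for a bounded length (here <= 3 over {a, b}) one can just
    // enumerate the accepting strings -- a table that explodes as the bound
    // and alphabet grow.
    const palindromesUpTo3 = new Set([
      "", "a", "b", "aa", "bb", "aaa", "aba", "bab", "bbb",
    ]);

    const isPalindromeBounded = (s: string): boolean =>
      palindromesUpTo3.has(s);

    // The general check needs memory proportional to the input, which is
    // exactly what a finite-state machine does not have.
    const isPalindromeGeneral = (s: string): boolean =>
      s === [...s].reverse().join("");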

To me, our attempts at creating a general AI with a Turing machine are starting to take a similar shape. We know we can generate an algorithm (machine learning) that can solve a contained problem.


Indeed. There is nothing to indicate that human brains have done anything that can't be achieved with polynomial time heuristic search algorithm.

Most people who have read "The Emperor's New Mind" have been surprised that Penrose doesn't seem to realize that. He has very strong intuition about consciousness and cognition but he can't explain it to others.


>polynomial time heuristic search algorithm.

Like create culture, write the works of Shakespeare or create a workable theory of the mind of others? Or fall in love, teach a child to play cricket or understand (and act on) an Opera?

Also, why should I care a heckin' heck about a polynomial-time heuristic search? I'll work out complexity when I have bounds on correctness... not before.

The argument that those that find something beyond formalism must formalise it in order to be regarded as serious is not a serious argument!


> The argument that those that find something beyond formalism must formalise it in order to be regarded as serious is not a serious argument!

Eh; waving your hands around and claiming, without much evidence, that of course there's "something beyond formalism" isn't a "serious argument" either.

It seems like you, being presented with The Mystery of Consciousness, and having the option to either explain, worship, or ignore it, are opting to worship it. Hence:

> > polynomial time heuristic search algorithm.
>
> Like create culture, write the works of Shakespeare or create a workable theory of the mind of others? Or fall in love, teach a child to play cricket or understand (and act on) an Opera?


Am I wrong that incompleteness demonstrates that there are systems that cannot be completely described in formal terms?


I'm no expert, but yes, I think you're wrong about that.

This explanation https://www.scientificamerican.com/article/what-is-godels-th..., a little closer to layperson level, uses integer arithmetic as a simple example. Peano's axioms completely describe integer arithmetic - easy peasy! What Gödel says is that there are, nevertheless, statements about results in this system that cannot be proved true (or false).

The problem appears to be not describing the system but proving every possible conjecture about it.


There are certain logical traps (paradoxes) that cannot be formalized by a computer. For example, given the statement, "This sentence is False", it is not possible to deduce a boolean value describing the sentence. Thus there is some property of the sentence that is not describable to a purely logical system. Human consciousness allows us to spot the paradox, whereas a polynomial search algorithm could not.


Formalizing or describing something is not the same as "deducing a boolean value". But I can ask GNU Maxima to give me a list of all numbers that satisfy the equation x+1=x-1, and it will happily tell me that there is no such number. Maxima doesn't support self-referential propositional logic, but a solver that can identify that no truth value can support "this sentence is false" doesn't need to do anything more mystical than test both cases - no "polynomial search algorithm" required.
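
A toy illustration of "test both cases" (my own sketch, not the Maxima session mentioned above):

    // The liar sentence asserts its own falsity, so an assignment v is
    // consistent only if v equals !v -- which no boolean satisfies.
    const consistentAssignment = (v: boolean): boolean => v === !v;

    for (const v of [true, false]) {
      console.log(`assume ${v}: consistent = ${consistentAssignment(v)}`);
    }
    // Both cases print "consistent = false": the program reports, just as we
    // do, that no truth value supports the sentence.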

You are suggesting that "human consciousness" is a separate, ineffable thing from a "purely logical system", but I don't see any reason a computer couldn't do what your brain is doing. You can't tell me the truth value of the sentence either.


There are at least two things correlated with brains that we have no algorithm for:

1. Consciousness (subjective experience, qualia)

2. Intentionality (the aboutness of mental content)

What would a conscious algorithm even look like?


That should not be taken for proof that they could not arise from algorithms - if so, then by a corresponding argument, all unproven conjectures would be false.


Agreed, but conversely it's not sufficient to demand that because they might, they will - Godel gave the room for that. We may end up with formalisations and mechanisms for these things, but we also may not.


This thread is a bit removed from my original point, but, again, Godel's theorems really have nothing to do with reality. They have to do with very specific formal systems (and not even all formal systems, at that). I just don't really understand why invoking him is even relevant.


Penrose's point, at least as I understand it, is that insofar as human intuition can lead us to see the 'truth' of not-provable-within-the-formal-constraints-of-the-system statements, we are doing something 'informal', aka non-computable; and since our brains, if they were merely "computers made of meat", would be unable to transcend the formalism, we must have some non-computable process going on inside our brains somewhere.


Excellent summary. An example of this is humans' ability to process statements like "This sentence is false"


Unfortunately, the only reliable way to understand reality has been through the use of such formal systems.


Humans can find good routes on TSPs in linear time. Our best algorithms can only do this in quadratic time or worse.
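
For what it's worth, a sketch of the kind of quadratic-time heuristic being referred to (a greedy nearest-neighbour construction; the names are mine, and it's not claimed to be "our best algorithm"):

    type Point = { x: number; y: number };

    const dist = (a: Point, b: Point): number =>
      Math.hypot(a.x - b.x, a.y - b.y);

    // Greedy nearest-neighbour tour: an O(n) scan per step, O(n^2) overall.
    // It usually produces a decent tour, rarely the optimal one.
    const nearestNeighbourTour = (pts: Point[]): number[] => {
      const unvisited = new Set(pts.map((_, i) => i));
      const tour: number[] = [0];
      unvisited.delete(0);
      while (unvisited.size > 0) {
        const last = pts[tour[tour.length - 1]];
        let best = -1;
        let bestD = Infinity;
        for (const i of unvisited) {
          const d = dist(last, pts[i]);
          if (d < bestD) { bestD = d; best = i; }
        }
        tour.push(best);
        unvisited.delete(best);
      }
      return tour;
    };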


> but he can't explain it to others.

Umm, because it is an open problem?

This is like saying the problem of dark matter is not a scientific problem because we don't know everything about it.


> > Gödel’s theorem, you may recall, shows that certain claims in mathematics are true but cannot be proven

This is not correct to begin with. Incompleteness in this setting means that a formal system is either unprovable or indeed inconsistent (i.e. wrong). A "known unknown" is just an oxymoron.

> that the consistency of sufficiently-complex systems cannot be proved

that sounds more like it. EDIT: I have to read up on it again and again ... the consistency of sufficiently-complex systems cannot be proved within that system.


> that sounds more like it. EDIT: I have to read up on it again and again ... the consistency of sufficiently-complex systems cannot be proved within that system.

Nope. Godel's Incompleteness Theorems (GIT) are metalogical[1]. They leak to any (sufficiently-complex) formal system. You can't logic them away by going up a level of abstraction.

Let's say you have a (sufficiently-complex) system A which is susceptible to GIT. You can describe A in terms of a meta-system B. By definition, B will also be susceptible to GIT. You can describe B in terms of C, and so on, but every meta-system that follows will still be susceptible to GIT.

[1] http://www.mbph.de/Logic/Para/Metalogic.pdf


The full argument can be found in "The Emperor's New Mind". According to Scott Aaronson, however, Penrose is no longer defending it.

http://www.scottaaronson.com/blog/?p=2756


Is the real world truly non-deterministic? See: http://people.idsia.ch/~juergen/randomness.html


For an intelligent response to Penrose see Scott Aaronson : http://www.scottaaronson.com/blog/?p=2756. Aaronson recognizes Penrose's genius while still disagreeing vigorously with his out-of-the-mainstream ideas.

One of the many reasons I admire Roger is that, out of all the AI skeptics on earth, he’s virtually the only one who’s actually tried to meet this burden, as I understand it! He, nearly alone, did what I think all AI skeptics should do, which is: suggest some actual physical property of the brain that, if present, would make it qualitatively different from all existing computers, in the sense of violating the Church-Turing Thesis. Indeed, he’s one of the few AI skeptics who even understands what meeting this burden would entail: that you can’t do it with the physics we already know, that some new ingredient is necessary.


Aaronson spends far too little time using the real cannons he's leveling at Penrose. He is attempting to bring down a tree by snipping roots individually instead of just pushing the rotting trunk over.

Penrose's argument is hollow. We understand the biophysics behind how the brain works. They aren't complicated at the level of detail you need to understand how the system works. We understand how neurons interact with each other. The evidence is consistent not only with our well settled understanding of chemical and biological systems, but also increasingly consistent with our development of information systems at scale.

The real gap is whether or not the totality of 'consciousness' is really just neural interactions at scale + starting state data, but the more we learn, the more that mystery vanishes. We understand the low-level perceptor->analysis models much better now, and we can map perceptor inputs at scale to outcomes in model tuning. In short, the consciousness of the gaps is rapidly losing its hiding spots.

Penrose's argument is taken seriously because we have collectively created a tremendous philosophical and institutional infrastructure around the idea of free will and the theory he attacks strongly implies there is some level of determinism in our cognitive systems. Since he is irreproachable on a personal or intellectual peerage level, he is a fantastic champion of this counter-cultural perspective.

However, if we apply even the barest of epistemological tools to the issue, we rapidly recognize that even if Penrose is correct, the chance of his position being accurate is so remote and so unverifiable so as to be useless.

But the existence of a counter-argument against the deterministic mind itself, absent of its validity, is itself useful. It allows us to collectively hem and haw before changing our views and institutions to fit our understanding of how people work. Which means Penrose's argument is not going away anytime soon.


Hello

>However, if we apply even the barest of epistemological tools to the issue, we rapidly recognize that even if Penrose is correct, the chance of his position being accurate is so remote and so unverifiable so as to be useless.

You are saying that if he is right then it's probably not accurate?

>But the existence of a counter-argument against the deterministic mind itself, absent of its validity, is itself useful. It allows us to collectively hem and haw before changing our views and institutions to fit our understanding of how people work. Which means Penrose's argument is not going away anytime soon.

if the mind is deterministic how can we change our views - how can the position be useful? Things are, and you will, or will not.


>You are saying that if he is right then it's probably not accurate?

No, I'm saying it's overwhelmingly unlikely that he's right, and that even if we're agnostic about which reasonable epistemological framework we use, there's SO much evidence against his perspective that it isn't important.

>if the mind is deterministic how can we change our views - how can the position be useful? Things are, and you will, or will not.

I've never understood this line of reasoning. If my perceptor is the output of a Bayesian evaluation that uses information as an input, receiving new information may or may not change the output.

How can a (meat)machine learning algorithm ever change from its starting state? Well, it is provided more cycles, changes state, and accordingly changes output. This doesn't mean that a machine learning algo needs to be non-deterministic and non-verifiable in order to change from state to state.
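
A minimal sketch of that point (my own toy example, with a running mean standing in for the Bayesian evaluation): a perfectly deterministic "learner" whose output changes only because new inputs deterministically change its state.

    // A deterministic "learner": a running mean whose estimate changes only
    // because each new observation deterministically updates its state.
    class RunningMean {
      private n = 0;
      private mean = 0;
      observe(x: number): void {
        this.n += 1;
        this.mean += (x - this.mean) / this.n; // state changes with the input
      }
      estimate(): number {
        return this.mean; // so later outputs differ from earlier ones
      }
    }

    const learner = new RunningMean();
    [1, 2, 6].forEach((x) => learner.observe(x));
    console.log(learner.estimate()); // 3 -- no randomness or free will needed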

As for the utility argument, I think there's a tremendous amount of utility to be gained from understanding how consciousness actually works, even if it is the we-are-bio-robots outcome.


we-are-bio-robots = 0 utility.. as... why bother!


>if the mind is deterministic how can we change our views

There's nothing incoherent here. Determinism doesn't mean you can't change your views, it just means that the change was not acausal in some manner. You can still form new beliefs from being exposed to new ideas. Determinism doesn't mean you are a static thing that can never grow and change with your environment, it just means that the dynamics of your changing person was written in the initial conditions of the universe. But at no time point do you have access to all future influences, and so you will still change as time passes.


But the change of views is not a matter of decisioning, it's a matter of inevitability. Stars cool - because they do. I contend that poets write - because they want to.


Sure, but what makes poets want to write? How does the answer being "the initial condition of the universe" instead of some acausal mechanism change anything of importance?


Because if it is the initial condition of the universe then there is one history and everything is predestined. This has a number of implications like:

- everything could be predicted

- criminals are not responsible for their behavior

- human life is not sacred

- our thoughts are not real

It's this last one that catches me; predestination means "I think therefore I'm not really me"; we then need to abandon the whole of rationalism and retreat to the caves!


I don't see these points as following from determinism.

>everything could be predicted

Not at all. Think about what you need to predict someone's behavior. You need to emulate the entire universe (at least within the cone of causal influence) and then run it long enough for the thing you're trying to predict to occur. Essentially you're creating a duplicate universe and just seeing what happens. You're not predicting, you're observing. In this tangent universe, the person is thinking and deciding just the same. So ultimately the decision is still the necessary precursor to the event.

>our thoughts are not real

Our thoughts are real, they're just highly complex interactions of physical atoms. Our thoughts aren't non-physical, or acausal, or anything supernatural. But I don't see these properties being necessary for our thoughts to be real and hold value to us. As I said before, our thoughts necessarily precede our decisions which necessarily precede our actions. What else do you want from your thoughts?


> criminals are not responsible for their behavior

you remain "reponsible", however the basis for bloodthirsty retributive self-gratification is diminished.

it begins to be clear that the only rational course is rehabilitation and positive reinforcement.

> human life is not sacred

no more or less than before. i'm not sure what this even means, exactly.

> our thoughts are not real

...eh?

> I think therefore I'm not really me

...eh? imho, "i think" is unproved. that which observes thought and that which emits thought are not by necessity the "same thing".


"if the mind is deterministic how can we change our views "

If a ball is objectively blue how can it become yellow?

By changing state.

Determinism doesn't imply immutability.


No, but it implies a lack of choice. The ball doesn't decide to become yellow. It becomes yellow.


    let f = (x) => x > 0 ? 1 : 2;
If you give this function a positive integer, it chooses to return 1; otherwise it chooses to return 2. This is what I think choice is.

    const g = () => {
      const f1 = (x) => x > 0 ? 1 : 2;
      const f2 = (x) => x > 0 ? 10 : 20;
      let f = f1; // local state: which rule is currently in use
      // The returned function "changes its mind": once it sees an input
      // above 100, it switches from f1 to f2 for every later call.
      return (x) => {
        if (x > 100) {
          f = f2;
        }
        return f(x);
      };
    };

    f = g();
This is a function that can change its mind, and choose different results when it's given different input.
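
For instance, a concrete run (the calls and values are mine, assuming the sketch above):

    f(50);   // 1  -- still routing through f1
    f(200);  // 10 -- crossing the threshold flips the internal state to f2
    f(50);   // 10 -- same input as the first call, but a different answer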

I see this as isomorphic with what you call choice, and I believe that any distinction you see is in your imagination. You don't know what the algorithm feels like on the inside; for all you know, a machine executing these instructions feels like it's got free will. You can't prove otherwise.


Hello, I think what I regard as choice and free will includes autonomy - which is the ability to decide to pursue and, by implication, create goals. I think that there is type 1 autonomy where an entity is free to undertake goals according to "choice" of the sort that I believe you are outlining above. Essentially a type 1 autonoma can create and modify plans like the code you have written down above.

A type 2 autonoma can decide to do, or not do things. It can write the code you have written, if it decides to go on a functional programming course.... To me this is the difference between the weather and a sentient entity. I think of computer programs in the same category as I think of the weather, but I see sentient entities in a different category.

I don't exclude machines from this category now, but I see all the machines that I have had experience of as part of category 1. How a machine that fits into category 2 could be described eludes me - and everyone else, I think.


Is choice more like a process, or more like a surprise inscrutable bolt out of the blue? We aren't talking about what it feels like from the inside. We're talking about what it is.


Free will is an entirely different issue. Consciousness is about subjective experience, and why any physical system would have it. Determinism is completely irrelevant.


No it isn't. At the very least, it may be distinguishable for clarity, but clearly they are joined at the hip.

If you accept that the biophysical system is deterministic, then the subjective experience is a deterministic, objective physical occurrence. Unless you're proposing that there is another system which mediates between them, the distinction doesn't really exist. This 'interceptor' system must be non-deterministic and outside the boundaries of the biochemical system, and it would itself mediate free will and change the outcome of the physical system.

That's what Penrose says. He says there's some kind of quantum stuff happening on another level we don't understand that's changing your brain chemistry in some near-purposive manner because he can't reconcile the physics with how he feels about what his brain does. This is despite the fact that we understand the brain chemistry and aren't confused about how it works.

The likelier answer is that he just doesn't like the fact that his world class "self-directed thinking" is just the neural perception of patterns he didn't put into motion.


The question of whether our choices are free, what being free means, and what impact that has on our being held responsible for our actions is completely separate from the question of how subjective experience fits within a physical framework, assuming one subscribes to physicalism or some form of objective reality.

They're only related in terms of both being potentially part of the mind/body problem, but they are clearly separate topics. What position you take on free will need not impact your position on consciousness, at all.


>What position you take on free will need not impact your position on consciousness, at all.

But the reverse is not true. If your position is that consciousness is the result of a deterministic biophysical process, then your premises also inform your position on free will, unless you're willing to invent new systems to rebuild free will, as stated earlier.


I don't think they're entirely different.

Free will is often discussed without being defined very well. What does it mean for something to be a "choice" rather than just something that happens in the universe?

A lot of people talk about non-determinism solving the problem, but randomness doesn't seem much more like a "choice" to me than determinism. We wouldn't say that a dice "chose" to land on six (even a quantum one).

And so to me, the only meaningful way to identify a "choice", is the conscious experience of making that choice. As far as we know, a rock does not experience choosing to fall, nor does a dice experience choosing to land on six. But when I move my arm, I do experience choosing to do so.


What if consciousness is just the illusion of free will?


More likely, it's the other way around.


A complementary thing I like about Aaronson's response is that he does the same for his side of the argument: He acknowledges that there are difficult and unsolved issues for the strong AI viewpoint. When a proponent of either side minimizes, or tries to avoid, the gaps and difficulties with his point of view, he creates a straw man for the other side to attack, and a battle between straw men will not lead to meaningful progress.


Penrose is a unique genius. His deep geometric intuition is extremely rare even among mathematicians. He uses his superior geometric intuition to teach physics in the book "The Road to Reality" at a level that surpasses even Feynman.

There is a common underlying assumption that the hard problem of consciousness is tied to high-level cognitive or computational capabilities. I don't see the connection. The crux of being conscious is having the cognitive ability of being aware-conscious-attentive-reflective for at least a tiny amount of time. If we could scientifically determine/agree on what consciousness is, we should be able to make a nice hyper-aware-of-blue-and-knowing-it robot, and it would be a relatively simple one.


Well, you can learn physics from Feynman. In order to read (and understand) Penrose, you need to already know physics (and math).



"[Penrose] explained that his interest in consciousness goes back to his discovery of Gödel’s incompleteness theorem while he was a graduate student at Cambridge. Gödel’s theorem, you may recall, shows that certain claims in mathematics are true but cannot be proven. “This, to me, was an absolutely stunning revelation,” he said. “It told me that whatever is going on in our understanding is not computational.”"

Penrose starts from a specific conclusion, that formal systems are limited in ways that he clearly is not, and then searched for some way to explain the difference.


That's kind of the modus operandi of physics, I suppose. Max Planck started from a specific conclusion, that the spectrum of the Sun is limited at high frequencies in ways that Maxwell's theory clearly is not, and then searched for some way to explain the difference.
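
Concretely, the mismatch Planck was chasing is the ultraviolet catastrophe: classical electromagnetism gives the Rayleigh–Jeans law, which blows up at high frequency, while Planck's hypothesis tames it. Roughly (standard textbook forms, quoted here only for illustration):

    B_\nu^{RJ}(T)     = \frac{2 \nu^2 k_B T}{c^2}                              \quad \text{(diverges as } \nu \to \infty)
    B_\nu^{Planck}(T) = \frac{2 h \nu^3}{c^2} \cdot \frac{1}{e^{h\nu/k_B T}-1}  \quad \text{(finite; falls off at high } \nu)

The second reduces to the first when h\nu \ll k_B T, which is exactly the "new theory must reduce to the old one in the common case" constraint mentioned elsewhere in this thread.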


The difference being that Planck knew what the reality was (we knew what the spectrum of the sun looked like), but with consciousness we don't really know what the reality is (there is no "spectrum" of the mind for us to match).


Okay, how about Einstein then? My understanding is that he also worked backwards in order to develop his theory of Special Relativity; he was trying to "fit" a theory to a universe, presupposing that the laws of physics are the same throughout the universe, including a constant speed of light for all observers. Relativistic effects weren't known to be reality at the time this theory was proposed. Much later, scientists developed experiments to confirm relativity.

Maybe we're still searching for the right theory of consciousness to know what exactly we're trying to measure?


>Relativistic effects weren't known to be reality at the time this theory was proposed.

Umm, the Michelson–Morley experiment and Maxwell's equations were enough experimental evidence before Einstein. Einstein's work was reconciling mechanics to conform with electromagnetism, not the other way round.

EDIT: Not to say there aren't physics theories developed independently of experiment. But relativity is not a good candidate.


Lorentz transformations predate special relativity, so there was definitely work going on in this area prior to relativity.


The first-order Peano axioms are not categorical but are recursively enumerable. The second-order axioms are categorical but not recursively enumerable.

There are statements that are true in the standard model but not true in a non-standard model of the first-order axioms. Such a true statement is not something that can't be proven true. It just can't be proven true using the first-order system.


If I understand this correctly, does it not imply that there is a class of statements/ideas which Penrose (or I) can deduce, but which cannot be algorithmically proved? If so, my next question is: what is an example of such a statement?


A sketch is presented here: http://www.iep.utm.edu/lp-argue/#SH3a
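
One concrete example that usually gets cited (this is the Kirby–Paris result, to be clear, not something from that link): Goodstein's theorem. Every Goodstein sequence eventually hits 0; that is provable with stronger axioms (e.g. in set theory) but not in first-order Peano Arithmetic, even though the sequence itself is trivially computable. A quick sketch of computing one, just to show there's nothing exotic about the statement itself:

    # Goodstein step: write n in hereditary base-b notation, replace every b with
    # b+1, evaluate, then subtract 1. Goodstein's theorem: this always reaches 0.
    # PA cannot prove that (Kirby-Paris, 1982), though stronger systems can.
    def bump_base(n, b):
        """Rewrite n from hereditary base b to hereditary base b+1 and evaluate."""
        result, e = 0, 0
        while n > 0:
            n, d = divmod(n, b)
            if d:
                result += d * (b + 1) ** bump_base(e, b)
            e += 1
        return result

    def goodstein(m, max_terms=7):
        terms, b = [m], 2
        while m > 0 and len(terms) < max_terms:
            m = bump_base(m, b) - 1
            b += 1
            terms.append(m)
        return terms

    print(goodstein(3))  # [3, 3, 3, 2, 1, 0]
    print(goodstein(4))  # [4, 26, 41, 60, 83, 109, 139] -- keeps growing for a very, very long time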


A while ago I wrote up my objections to the Penrose-Lucas argument: https://chronos-tachyon.net/essays/penrose-objections/ . I'm not super proud of how it turned out (way too meandering), but it boils down to:

1. Let's suppose for sake of argument that humans really can see the inherent truth of "Peano Arithmetic is consistent". That doesn't mean humans violate Gödel's Incompleteness Theorem: it could just mean that humans use axioms stronger than PA.

2. Gödel's Incompleteness Theorem only applies to systems that are perfectly logically consistent. Not sure how Penrose didn't notice, but humans... aren't.

3. When scientists proposed Quantum Mechanics as a replacement for Classical Mechanics, it was on them to explain how Quantum Mechanics simplified to Classical Mechanics in the common case. "Penrose Mechanics" is an even more radical departure — especially from a physics of computation standpoint, as Penrose Mechanics by definition would allow solving at least some of the problems in (ALL - R) in ~polynomial time. Penrose needs to explain how Penrose Mechanics reduces to Quantum Mechanics in the common case.

4. Penrose proposes that (a) there exist new physics, (b) that evolution has learned to computationally exploit the new physics via microtubules, and yet (c) that humans are the only lineage to make use of this feature of microtubules, even though microtubules are found in all eukaryotic cells (from mushrooms to amoebae). From a predator-prey standpoint alone, it would seemingly be a huge evolutionary advantage to be able to compute NP or R functions in polynomial time. (That ability is not _strictly_ implied by Penrose Mechanics, but it's a very likely consequence.) Penrose needs to explain why only humans are taking advantage of the computational power of microtubules, when microtubules have existed for billions of years and across millions of species.


>2. Gödel's Incompleteness Theorem only applies to systems that are perfectly logically consistent. Not sure how Penrose didn't notice, but humans... aren't.

Why are humans not logically consistent, then, if they are, as materialists claim, something that can be abstracted with a computer program given full information of their workings?


Um, you seem to be operating on a confusion of ideas. Materialism does not imply that humans are logically consistent. The universe in which we exist is (probably?) a logically consistent system, but that's true for both materialism and non-materialism. The difference between the two is which set of rules the universe runs on, not whether those rule sets are internally consistent.

When I say "systems that are perfectly logically consistent" and "humans... aren't", I'm saying that the ideas humans have in their heads are not logically consistent. It's possible to write down "2+2=5" on a piece of paper, even if 2 plus 2 doesn't actually equal 5, and it's likewise possible for humans to believe "2+2=5" even if 2 plus 2 doesn't equal 5.


All the mammals have roughly the same brain architecture and the same DNA. Whatever makes brains work is present at the mouse level in some form. We really ought to be able to build a mouse brain by now. A mouse brain has about 75 million neurons. That's not a big number for modern hardware. If we knew what to build, it would probably fit in a 1U rack.

Some years ago I met Rodney Brooks, back when he was doing insect robots. He was talking about a jump to human-level AI as his next project. I asked him, "Why not go for a mouse next? That might be within reach." He said "I don't want to go down in history as the man who created the world's greatest robot mouse." He went off to do Cog [1], a humanlike robot head that didn't do much. Then he backed off from human-level AI, did vacuum cleaners with insect-level smarts, and made some real money.

[1] https://en.wikipedia.org/wiki/Cog_(project)


It is really not that simple, unfortunately. CMOS is the only submicron technology that can be produced with sufficient repeatability, and no one has yet come up with a proposal for how to produce a CMOS chip with that integration density. What you have to realize is that each neuron typically has up to 1,000 connections to other neurons, so you need to fit 75 million × 1,000 synapse circuits into your design as well. That is already two orders of magnitude more transistors than any currently produced chip.
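
Rough numbers, just to make the scale concrete (the synapse count echoes the figures above; the transistors-per-synapse value is a made-up placeholder):

    # Back-of-the-envelope scale estimate; every constant here is an assumption.
    neurons = 75_000_000             # mouse brain, roughly
    synapses_per_neuron = 1_000      # upper-bound-ish figure from the comment above
    transistors_per_synapse = 10     # hypothetical, for a simple synapse circuit

    synapses = neurons * synapses_per_neuron            # 7.5e10 synapse circuits
    transistors = synapses * transistors_per_synapse    # ~7.5e11 transistors
    print(f"{synapses:.1e} synapse circuits, ~{transistors:.1e} transistors")
    # The largest production chips are on the order of 1e10 transistors,
    # which is where the "about two orders of magnitude" gap comes from.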


Yes, but one mammal has created a technological civilization.

There are no mice on HackerNews. I think.


> Yes, but one mammal has created a technological civilization.

We did, but is that really categorically different than groups of primates using primitive tools [1]?

I think the idea that humans are categorically different from other species is misguided. Instead, consciousness and intelligence seem to be more continuous than discrete, particularly when looking at semi-intelligent animals like monkeys, dolphins and octopuses. Animals in that class can all learn pretty complicated tasks and are able to make use of tools. Self-awareness and consciousness aren't something we understand fully enough to even exclude all animals from possessing.

The only thing that seems particularly unique about humans is our ability to use complex language and record it. Passing down knowledge from one generation to the next is the _only_ reason we have "technological civilization".

[1] http://www.bbc.com/earth/story/20150818-chimps-living-in-the...


For the completely opposite view, listen to Daniel Dennett on The Life Scientific [1].

Dennett argues that combining Darwin's strange inversion of reason (complexity from bottom-up iterative refinement) with Turing's Universal Machine provides a way of understanding how we are machines, built of machines, built of machines, etc., and it is the hierarchy that allows the complexity of minds to emerge.

That hierarchical iterative schemes are unexpectedly powerful is well mirrored by the recent successes of deep neural nets, and Dennett cites Hinton.

It's worth a listen and summarises his new book From Bacteria to Bach.

[1] http://www.bbc.co.uk/programmes/b08kv3y4


Yeah, but we are made of The Universe, and yet The Universe is and we can't say why. So I don't get where Dennett's argument goes.

I don't think he does either.

http://www.newyorker.com/magazine/2017/03/27/daniel-dennetts...


Dennett is concerned with the origin of minds on Earth and how this is explicable within our current evolutionary framework.

He does not seek to explain the origins of the universe but argues that this is not necessary to understand how minds could come about.


> yet The Universe is and we can't say why

is there a 'why'? some people say it was 'created', but then the creator 'is and we can't say why'.


"Perhaps the best way of seeing the reality, indeed the ubiquity in Nature, of reasons is to reflect on the different meanings of “why.”

The English word is equivocal, and the main ambiguity is marked by a familiar pair of substitute phrases: what for? and how come?”

“Why are you handing me your camera?” asks what are you doing this for?

“Why does ice float?” asks how come: what it is about the way ice forms that makes it lower density than liquid water?"

P48, From Bacteria to Bach, The Evolution of Minds by Dan Dennet


Things that think they're special and important outperform things that don't. The side effect of that is that they justify their specialness with things that are hard to understand, an appeal to complexity. In this case, the complexity is invented. There is no hard problem of consciousness. What we do isn't so magical, or special, compared with what animals, or machines, or other actors with agency do.

Consciousness is rare and beautiful, however, not magic.


Can anyone here defend this theory? I'm genuinely curious. It may well be the case that human consciousness relies on quantum effects. But we know that we can simulate a quantum computer using a regular computer. Which means that in principle, you don't need QM to have consciousness, even if human consciousness makes use of QM.
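
To be concrete about the "simulate a quantum computer using a regular computer" premise: in principle it just means tracking the 2^n-entry state vector and multiplying by unitaries, which is exponentially expensive but perfectly computable. A toy sketch with numpy, preparing a Bell state:

    import numpy as np

    # Brute-force state-vector simulation of a 2-qubit circuit:
    # Hadamard on qubit 0, then CNOT(0 -> 1), giving (|00> + |11>)/sqrt(2).
    # Cost grows as 2^n, but nothing here is uncomputable.
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    I = np.eye(2)
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])

    state = np.zeros(4)
    state[0] = 1.0                    # start in |00>
    state = np.kron(H, I) @ state     # Hadamard on the first qubit
    state = CNOT @ state              # entangle
    print(state)                      # ~[0.707, 0, 0, 0.707]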

So, while it may or may not be true that the brain uses QM, it doesn't seem to really explain anything of interest. It doesn't make consciousness any less mysterious, or give any real insight into how we might create or understand our own consciousness.

Given that (or refute the premises, if you please), why is this theory interesting, relevant, or correct?


Penrose explicitly excludes quantum computing from the basis of consciousness. Unfortunately I couldn't find the exact interview where he was stating this, but he was clear on it:

Quantum Computers are computers after all and he is talking about non-computable physics.

The reason he looks at quantum mechanics is that it seems to be missing something from our understanding (the "reduction" of the unitary evolution), and he "hopes" that this is non-computable.

Does that make sense?


Sort of. But fundamentally, if there is a quantum phenomenon that our brains are harnessing, then so too can a quantum computer harness it. So I don't see how he can draw such a distinction, even in theory.


I think we're so far from understanding consciousness that having a theory of what makes it work (or even that "if it works, it relies on thing X") is dodgy.

E.g. let's suppose I make the claim that some people are conscious and others just seem like they're conscious. Can we test this?

Quite respectable people have theorized -- or should I say bloviated? -- that consciousness is merely an emergent property of complex systems. Again, we can't really define what consciousness is, so it's an interesting dinner party conversation starter, but it's not falsifiable.


> And for all the recent advances in neurobiology, we seem no closer to solving the mind-brain problem than we were a century ago

Er... Maybe not consciousness-brain (although I'm no expert on this at all), but it's hard to dispute that we have a MUCH deeper understanding of the movement-brain, perception-brain, memory-brain, and problem-solving-brain relationships than we had 100 years ago.

Consciousness is a touchy subject. We've been studying cancer for a long time and haven't found a cure for it either. Yet, no one thinks that cancer originates in quantum effects. "we're no closer to solving it" is not a good argument in favor of one theory over another.


It depends on what you mean by "relationship".

Consider that we know "where" things happen, but it doesn't follow we know "how" things happen.

It is a bit like opening up a computer and taking a thermometer and measuring the heat kicked off by various parts.

Now I can tell you that when I run MatLab, the part called "CPU" gets hot. And when I run games the part called "GPU" gets hot. So clearly the CPU is the Matlab part of the computer and the GPU is the games part of the computer.

What we need is a theory of software before we can make progress, and that is what we lack.


That is a much more obtuse view of the state of neuroscience than is deserved. Sure, it started out as: oh, this part of the brain needs more oxygen when remembering something, and oh, if that part of the brain is removed then people can't remember much. But it progresses to: ah, this part of the brain is important for memory that binds multiple features, not within objects but between objects. Oh, computational models suggest pattern separation and completion operations that allow orthogonalization of activity during encoding of similar memories, and retrieval of the whole pattern of activity from reactivation of part of the activity. So let's make an experiment that modulates similarity between memories and modulates the completeness of retrieval cues for bound percept pairs, and/or in animal models directly modulate activity in the subparts of that brain region thought to be responsible for separation or completion operations, to see if we can impair or enhance them. Anyway, that's just my field.

Edit: Adding some references. Not sure how to format it into a list.

Pattern Separation and Completion in Dentate Gyrus and CA3 in the Hippocampus: http://science.sciencemag.org/content/315/5814/961 http://www.sciencedirect.com/science/article/pii/S0149763412... http://www.sciencedirect.com/science/article/pii/S0896627313... http://science.sciencemag.org/content/319/5870/1640 https://www.ncbi.nlm.nih.gov/pubmed/26190832 https://scholar.google.com/scholar?q=pattern+separation+comp...

Relational Binding: http://www.sciencedirect.com/science/article/pii/S0166432813... http://psycnet.apa.org/journals/neu/29/1/126/ http://journals.sagepub.com/doi/abs/10.1177/0963721410368805


Honestly, it just sounds like the field has managed to go from taking a big blob ('cpu'/'gpu') and finding basic relationships ('matlab vs games') to working with much smaller blobs that are closer to the base principles at work.

Illustrating from your example, current state of the art seems to have managed to break the big blobs into smaller blobs (e.g., now we're looking at 'memory that binds multiple features between objects'), and then found more complex relationships ('looking at separation or completion operations in similar memories').

That still doesn't tell us how the actual programming works. We barely understand the role of dendritic spines, and we're still trying to get a handle on the utter complexity of single neuron interactions in the neocortex. He might have oversimplified, but I don't think he's wrong.

(Just playing devil's advocate. Would love to know how badly I misunderstood.)

Edit: nearly all of your links are paywalled.


Oh sure, there is some blobology to it still... but the nature of the function attributed is becoming more precise: not a memory blob, but a blob associated with signal orthogonalization. But that is just the functional neuroimaging in humans, in which we are working downwards, while neuroscientists using methods that influence activity at the neural level, or even the dendritic level, work upwards. Each informs the other. For example, the computational models of pattern separation and completion are derived from observations of the structure of connections between neurons, which led to hypotheses about what kinds of operation it could support, which led to tests of that hypothesis in rats, and eventually in humans. But at the heart of it, I don't think we have to be completely reductionistic to gain understanding. We don't have to fully understand in every detail how dendrites operate to understand that a collection of neurons can support a particular data transformation.


> Yet, no one thinks that cancer originates in quantum effects.

Actually, there is at least some research which suggests quantum tunneling plays a role in random genetic mutations, giving rise to cancer.

Disclaimer: I am not a physicist or biologist.


I think there is nothing more than perception, movement, memory, attention, problem solving and a few other functions that work together. Humans are basically reinforcement learning agents optimizing for actions that maximize rewards.


You will need to expand on that "work together" bit before I will be convinced that you can explain consciousness. Also, can you identify maximal rewards in a way that does not become self-referential when the actions are things like listening to music or climbing a mountain?


All the human rewards are related to survival (food, shelter, communion with others, sex, learning, and so on). It's interesting, but we bootstrap meaning "by death": those that do not appreciate sex don't get to send their genes into the next generation, for example. It means something to like sex because otherwise it means non-existence.

Appreciation for nature, music and other artistic pursuits are related to an internal reward that favors exploration and creativity which are essential skills in problem solving, which is essential for survival ultimately.

In machine learning, especially in reinforcement learning, there are efforts to introduce internal rewards such as novelty seeking / curiosity and creativity. They have a clear benefit for survival. For example, the ability to imagine might be seen as poetic, but in reality it is useful for planning actions without needing to play them out first, and only acting later. A useful skill that might save one's life in a split-second decision.
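
A toy picture of what an "internal reward" looks like in RL terms. Everything below (the corridor environment, the bonus formula, the hyperparameters) is an illustrative placeholder, not a claim about brains:

    import random
    from collections import defaultdict

    # Tabular Q-learning on a tiny 1-D corridor, with a count-based "curiosity"
    # bonus added to the environment's reward so rarely visited states pay extra.
    # Total reward = extrinsic (from the world) + intrinsic (novelty).
    N_STATES, GOAL = 10, 9
    ACTIONS = [-1, +1]

    def env_step(state, action):
        nxt = min(max(state + action, 0), N_STATES - 1)
        return nxt, (1.0 if nxt == GOAL else 0.0)      # extrinsic reward

    Q = defaultdict(float)
    visits = defaultdict(int)
    alpha, gamma, eps, beta = 0.1, 0.95, 0.1, 0.5

    for episode in range(200):
        state = 0
        for _ in range(50):
            if random.random() < eps:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: Q[(state, a)])
            nxt, extrinsic = env_step(state, action)
            visits[nxt] += 1
            intrinsic = beta / visits[nxt] ** 0.5       # novelty bonus decays with visits
            target = extrinsic + intrinsic + gamma * max(Q[(nxt, a)] for a in ACTIONS)
            Q[(state, action)] += alpha * (target - Q[(state, action)])
            state = nxt
            if state == GOAL:
                break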


As a Darwinian just-so story about why consciousness is adaptive, that is fine. As an explanation of what consciousness is and how it works, it lacks specificity.


I understand that I have a brain, and I decently grasp the mechanics of how it works. What I don't understand is why I'm trapped inside it.


Common problem. Software is trapped inside CPU and memory too. Don't know if bare software ever touched the face of the world.


There is something so special about being conscious - something precious and undefinable - that I would be surprised if an intractable connection to reality were not intrinsic to it.

I would be surprised if the programmed representations of things which I have long played and worked with - the symbols, numbers, vectors and virtual objects - could actually have some consciousness of their own, as though the only likely difference between simulation and reality were just a matter of scale and/or perspective.


I don't get how Gödel leads to quantum consciousness. Even if you need quantum to make consciousness, doesn't that just mean we're all quantum computers instead of classical chemical computers?

If we're all made of Lego, but we need Technics sticks and joints to become conscious, how is that any less materialistic? Even the fact that quantum is a source of randomness doesn't seem essential.


There may be different classes of quantum computers, in the sense that some algorithms - including algorithms we do not have yet - may not execute on all the quantum computing surfaces we have. Additionally, current quantum computers are heroic devices; it may be beyond our civilization to build ones that are equivalent to a human brain, or that elucidate how any brain works.



Orch-OR isn't very convincing as a theory, but if you just read it as sci-fi, it's incredibly entertaining. So many weird, quasi-spiritual ideas.


I agree. I would love to read a sci-fi novel on this topic.


Anathem by Neal Stephenson is basically hard sci-fi directly inspired by Penrose's book "The Emperor's New Mind". Also one of my favorite books. :)


Thanks!


Glad to see Ayahuasca get at least a passing mention here.

Let anyone who thinks they understand consciousness inhale >18mg of N,N DMT and return humbled.


Does that mean that animals don't feel pain? Or that they are conscious? I don't mean to tarnish his research with a simple fact but I guess it's something worth exploring. Do animals feel pain the same way humans do? Is there a measure of consciousness?


There's a measure of consciousness: if you accept that other adult humans are conscious, then at some point babies move from not capable of consciousness to capable of consciousness (maybe 20 weeks?). If you accept that, then go and play with a dog and reflect on awareness/awakeness/agency and compare with a baby.


The Penrose Fallacy:

1. We don't understand Quantum Mechanics

2. We don't understand the mind

3. Therefore QM explains the mind

Or as Steve Harnad once said, he takes all the embarrassments and failings of one field and marries them with another.


Downvoted because Penrose does not actually commit this fallacy, therefore naming it after him is something of a cheap shot, an excuse to not actually engage with his ideas.


It's not very far from his argument. The article quotes him as saying “We need a major revolution in our understanding of the physical world in order to accommodate consciousness. The most likely place, if we’re not going to go outside physics altogether, is in this big unknown—namely, making sense of quantum mechanics.”


The word "this" is key, and the reasoning why it's this unknown, and not, say, our unknowns about the arrow of time or whatever -- see also my comments below. If you're going to stay in the realm of normal conventional physics and try to explain the phenomena that he's trying to explain, those conventional physics steer you towards this particular part of physics as a crucial step in understanding how we're able to accomplish certain forms of reasoning in general.


I would downvote you if I could because you make a claim without backing it up.

I am not the only one who has this interpretation of him. I certainly arrived at it independently, but so did many (more credible) giants in the field.

As far as I can tell, he doesn't have an argument that consciousness stems from QM beyond the rather elusive nature of both.

So if you think that is wrong, by all means, explain the connection you think he elucidates.


Your frustration in your first sentence I would equate to a general human frustration that we all feel when we see our own sins reflected in others -- you don't back up your claim so you get mad at me for not backing up my claim. I could in turn make a really good case that the burden of proof is not on the person who says "he doesn't make that argument" to prove that he doesn't, but on the person who says "he makes that argument" to prove that he does. And then we can go back and forth in negativity, I suppose.

In an attempt to avert that spiral of world-suck, let me attempt to instead meet your request head-on. Penrose is not saying what you say he's saying, and would not be able to fill up two books with your argument.

Let's talk hacker for a second. What you're saying is that Penrose is basically saying "well I don't understand this bug in my computer program, and I don't understand the kernel in my computer, so therefore a kernel bug is causing this bug in my computer program." You are going to be predisposed to see any reduction of a bug towards the kernel as being evidence that yes, your characterization of the argument is correct and the argument is indeed laughable. This is understandable if someone just told you that hypothesis off-the-cuff, which I surmise is how you've probably encountered Penrose. (That is, I take the fact that you're laughing as evidence that you've not read both The Emperor's New Mind and Shadows of the Mind.) The alternative however, is that someone has (a) a high-level statement of "this shouldn't be happening", and (b) knows that the only way that such things can happen, barring things like the MMU misbehaving, is a kernel bug, and (c) has been attempting to trace down the code to the fundamental syscalls, and found that the thing-that-shouldn't-be-happening is indeed bracketed around one syscall in particular which does not seem to be doing what the docs say it should do. And if a hacker has attempted all of this, I mean, they might still be wrong: but their argument is no longer something of the form "I don't understand the kernel and I don't understand this bug, therefore the kernel caused this bug." It is something more sophisticated than that, and you're doing a disservice by laughing at them.

One and a half of Penrose's books are ultimately about why the computationalist/algorithmist thesis of brain function seems deeply incomplete -- that a hallmark of some certain examples of reasoning appears to be deeply nonalgorithmic. (For example The Emperor's New Mind supplements his usual Godel argument with arguments that if brain plasticity comes from local neural plasticity then it is unlikely to have a great generality, but by a clever analogy if it does have a great generality then not only does that mesh with an idea of global plasticity, but it is very unlikely to be algorithmic. The one and a half books are basically full of little digs that, seen together, do seriously undermine faith in the thesis.) This is essentially the "bug" above. His favorite example is that we are capable of proving to our own mathematical understanding that our mathematical-understanding-algorithm contains true sentences which we will never be able to prove; if this algorithm exists then it is able to prove something that it will never be able to prove, which seems like a straightforward contradiction. But there are one and a half pop-sci books that he's written building up some other problems with this thesis in a way that even if you don't start off with knowledge of the relevant physics/comp-sci/whatever, you can hopefully understand it enough to appreciate the problem.

He follows these up with a principled dive into what's actually underlying these "bugs". That is, he discards ideas about souls and takes it seriously that our reasoning is realized in our brain, and takes seriously that this is a biochemical system, and evaluates whether the noncomputable phenomena that he is "debugging" can come from chaos or randomness or the like. He is able to rule most of these out with auxiliary considerations, and so tries to get one level closer to the hardware: the biochemistry is well-modeled, we know, by physical chemistry; there are likely no new fundamental interactions or whatever to be found in biochemistry.

Looking into the physical chemistry level he finds that it can be somewhat cleanly divided into two parts: quantum and classical. Technically, it's all quantum: but you can handle a lot more with various classical rules of thumb, and this has basically all been done before him so he can just steal those results: the classical part of physical chemistry and probably even the vast majority of the quantum part of physical chemistry are algorithmic; it is known that quantum computers don't do other things than what computers do, they just might do many things more quickly. There exists only one part of the quantum part of physical chemistry which is not.

Unfortunately, he doesn't have access to the source code, nor docs for what nature does--so he actually constructs physical-chemistry models that could hypothetically do the sort of buggy things that he's seeing at the top-level that he has carefully ruled out at the other levels of explanation.

And this is all to say: as you can hopefully now see, these books contain a lot more than simply "we don't understand X, we don't understand Y, therefore Y causes X." Namely there is an actual understanding for example that X is realized in a system which has to be modeled with Y, and which has some features that can only come from certain corresponding features in Y, which we actually do understand very well. There has been a solid amount of good effort to push down that causal chain until one is left at these syscalls and then saying "something is fishy about those." And his argument might indeed be wrong, there are many little parts where maybe he's wrong about how chaos works and how it could cause a similar bug, just as our programmer above might be wrong about whether two processes are properly synchronized via locks or whatever -- but it's no longer this laughable thing of "ha, he doesn't understand X and Y so he thinks Y causes X!" that you're trying to say.


I think this hacker/kernel bug analogy is confusing the issue. I don't think that it is quite appropriate, so I am going to move to the next paragraph.

The language of bugs also serves to mask what the issue is. It is true that Penrose (as best I can remember) spends time talking about the limits of algorithms. I don't remember the "neural plasticity" stuff, but I do remember he was much impressed by our ability to understand Godel's theorem.

Let's just pause here.

Does that make sense to you? That our ability to understand the incompleteness theorem is somehow evidence against the algorithmic nature of the mind? Or do you think it shows we have a poor grasp of what it means to "understand" something, and of how symbolic reasoning and proof interacts with our mind?

I understood neither his, nor your summary of his, explanation of how QM is supposed to plug this hole in our understanding.

In fact, your little discussion of QM has, if anything, bolstered my point.

Note: we know that biological laws rely on physical laws, and that QM (and, say, relativity) are our best theories so far. That is not in dispute.

But please explain how QM plugs in this Godel hole. When the crux of the matter comes, there seems to be general hand waving (on his part).


I think you're maybe expecting much more than you should, which is why you think that "the language of bugs also serves to mask what the issue is" (it doesn't; both are deep-dives into the causation of unexpected behavior). I think if you get more clear about what you're expecting from Penrose you'll have a better foothold for why you're not getting what you're expecting.

It does make sense that our ability to understand the incompleteness theorem is very close to self-contradictory -- I don't know that I'd say it is, without a doubt, self-contradictory; I have not dived deep into how provability logics might force □(a → (p ∧ ¬□p)) into a contradiction, and what you need for that (possibly with reasonable axioms it merely derives ¬□a, which just means that if there is an algorithm for understanding then we'll never uncover it?).
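
For what it's worth, a rough sketch in GL (reading □ as "the algorithm proves", and using axiom 4, which is derivable there) suggests exactly that weaker conclusion -- this is only a sketch, not a careful proof:

    1. □(a → (p ∧ ¬□p))                              premise
    2. □(a → p),  □(a → ¬□p)                         from 1, by K
    3. □a → □p,   □a → □¬□p                          from 2, by K
    4. □p → □□p                                      axiom 4 (derivable in GL)
    5. □a → (□□p ∧ □¬□p) → □(□p ∧ ¬□p) → □⊥          from 3 and 4, by K
    6. ¬□⊥ → ¬□a                                     contrapositive of 5

So with ordinary provability-logic axioms you only get ¬□a on the assumption of consistency, not an outright contradiction.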

QM does not by itself plug the hole in our understanding; Penrose has never said that to my knowledge. However a tiny part of quantum physics is the only known part of physics that's capable of producing nonalgorithmic results (the vast majority of QM is linear algebra in funny hats), hence if we've got a nonalgorithmic result it makes sense to follow our "best theories so far" towards that tiny part of quantum physics.


The connecting of disparate ideas is irrational in the individual. It's only when a group of people temporarily accept that those concepts may make sense that things get done. Given consciousness is directly responsible for giving us an individual's consciousness, and that consciousness (either by group or individual) must try to understand itself using the effect it is trying to describe, the concepts may NOT make sense when getting things done. This may be why philosophy is frequently taken as quackery, and non-scientific. ;)

My HN profile has carried my thoughts on Penrose and Hammeroff's theories for some time now. Good to see it posted here and debated.

https://en.wikipedia.org/wiki/Orchestrated_objective_reducti...


Except that quantum indeterminacy provides a reasonable mechanism for free will, and entanglement provides a reasonable mechanism for the fact that a bunch of disconnected neurons produce one mind. In fact, in both cases they are the only possible mechanisms that don't involve completely new physics.


I would love to see a survey of the group of people who find Penrose compelling.

Anecdotally, I find the people drawn to his view are largely physicists. And the people who scorn him are largely "cognitive scientists". (I am closer to the second group than the former).

What you find to be "reasonable mechanisms", I find to be completely unreasonable. No, I don't think entanglement is a "reasonable mechanism" for "the fact that a bunch of disconnected neurons produce one mind."

I don't even know why one would think that quantum entanglement is even vaguely relevant to the "problem", and I think it has something to do with a misunderstanding of the problem.

Do you think that there is a problem of mind in the form of "a bunch of disconnected neurons produce one mind"?


I don't agree with Penrose's hypothesis, I just happen to agree that strange quantum behavior is related to the character of our consciousness. Other than that my views are completely different.

There is definitely a "binding" problem. Beyond the question of why the states of many disconnected neurons (or, to go one level down, molecules) are subjectively experienced as a single consciousness, you also have the issue of why other neurons nearby are not subjectively experienced. After all, your consciousness doesn't even extend to your entire brain.


You don't actually give any reasons.

Why do you find it unreasonable that entanglement would be involved with integrating signals from disparate regions or several neurons?

The problem is how we integrate the signals from many regions in to whatever is generating the single perception (or perhaps several parallel perceptions -- I don't know that there aren't other experiences coincident with mine, just that I have access to one of them).

Unless you're postulating that a single neuron at a time is responsible for my subjective experience, then you do need to explain how several neurons are generating a single signal.

My experience with cognitive scientists is that they simply punt on the problem, completely failing to address how the signal is amalgamated into a single stream of experience even as they talk about what regions contribute features of it.


Because that isn't the problem. It is not a correct or good statement of a problem.

Let me put it this way, suppose there was a mechanism, call it X, that explains to your satisfaction how many different neurons can "integrate the signals from many regions".

Does that explain to you why physical neurons create a subjective experience? Would you now consider the "problem of consciousness" to be solved?

Leaving "consciousness" aside, there is no issue at all of course. We have a straightforward computational theory of how different neurons can integrate their output and produce a computation.

The problem is that this idea of "integrating" signals is not clearly laid out. If it is not the computational problem, then what sort of problem is it?

Koch and Crick have also proposed "a mechanism" for something like the co-ordination of various neurons to explain visual awareness. A frequency at which all the neurons "cohere." (Koch is also a real genius, by the way. https://profiles.nlm.nih.gov/ps/access/SCBCFD.pdf )

Again, the problem with their proposal is that it is not really a "mechanism." Suppose all the neurons that recognize a face happen to be vibrating at 40 Hz. Does that "explain" "consciousness"?

I lean towards the philosophers. Read one good critique by Fodor, and you realize that the questions are poorly formed.


It proposes a way by which those neurons can create subjective experience, yes. Namely, that the medium they're creating the signal in is fundamentally experiential and perturbations in it are fundamentally experiences of some variety. There are lots of philosophical reasons to think that this is the case for some aspect of reality (or perhaps fields in general, even).

This would give us a science of experience and allow us to categorize and create new experiential beings. That's a goal that many of us have.

Now, for various reasons, we might suppose a medium we're already aware of (or mediums in some combination -- as is the position of the materialist). Then the question becomes entirely about the integration mechanism, as that's the only piece of the puzzle we don't know.

So if you're a materialist, the integration of brain patterns into a signal which corresponds to subjective experience is the question of creating new 'souls' (in the sense of beings who experience), and also of categorizing what exactly has a 'soul' (in the sense of inner experience).

The problem is how the computational structure which produces our behavior couples to our experience of it, which is a consistent, evolving perturbation in something, even if just in the sense of being a quasiparticle formed of the constituent parts interacting. (Though, likely, we're missing parts of the computational story -- I don't think most serious scientists would argue that.)

I think you just don't like the question, but it's pretty clearly formed, at least as far as big research directions ever are.


I am sorry but you lose me here.

"It proposes a way by which those neurons can create subjective experience, yes"

I just don't see it.

Suppose you somehow show me that various neurons contain particles that are entangled. Let's say I believe you.

So now how do they create subjective experience?

I see blue because my neurons are entangled? I see a flower because my neurons are entangled?

What is "the medium that they are creating"? How are you even using these words.

If I knew that my neurons have entangled particles, I would know no more about consciousness than I do now.


Particles don't create a medium, they're constructed out of perturbations in a medium, eg fields. Your paraphrase in quotes isn't anything like what I actually said in my post -- you dropped key words from it.

What I actually said, and you misunderstood: "the medium they're creating the signal in". Your paraphrase is so far from that, I have trouble even addressing the confusion. It's particularly egregious that you omitted key words when you're clearly using them to make passive aggressive comments about my understanding.

The answer to how this addresses the question is that the medium is experiential -- anything that creates effects in it is 'experiencing'. Experiencing seeing blue corresponds to a particular pattern of activity in the medium. (Though, there is no default 'experience of seeing blue' -- it's possible that we experience different ones, different animals have different ones, etc. This relates back to the computational structure.)

How entanglement is involved is that there's a single 'unified' experience when we know that different portions are generated in different brain regions, but don't (obviously) converge the information to a point (eg, it's always spread at least in a cluster of neurons), which suggests that we're talking about something non-localized in the medium, ie, not a point object.

The source of a lot of non-locality is through entanglement, so it's reasonable to conjecture that it's involved here, as well -- though, of course, it could be a different mechanism.

If the pieces of your brain which are generating the perturbation in the experiential medium(s) are entangled, it could provide a model of how it's carrying a non-localized piece of information -- your experience.


>The answer to how this addresses the question is that the medium is experiential -- anything that creates effects in it is 'experiencing'. Experiencing seeing blue corresponds to a particular pattern of activity in the medium.

But if your theory is that consciousness arises as a result of particular patterns in a medium, couldn't that medium equally well be "the universe"? Why do you see it as necessary for the medium to be entangled? Why couldn't consciousness just be the result of patterns in non-localised particles?


The medium isn't entangled, particles in it are (with each other), creating a larger non-localized object out of the smaller, (more) localized particles.

Entanglement is a mechanism by which a single, non-localized value can be carried by pieces (none of which themselves carry that value).

And hence the materialist proposal here is precisely what you propose: that the universe is fundamentally experiential (or at least parts of it that brains interact with), and the single, non-localized experience is created and carried via entanglement of the constituent "particles", which are themselves impacted by the regional computation of the brain (eg, by knotting with the "main" knot carrying the experience, in analogy to a TQC), thus allowing the disparate regions to contribute to a single experience with characteristics picked up by regional contributions.

Proposing that it's not entanglement means proposing a new physical process by which that information can contribute to a single, non-local value, which people researching the brain haven't even taken a stab at.

Which is why I find their proposal that it's not entanglement to be strange -- they seem to blithely ignore the physics/information theory implications of that.


>Proposing that it's not entanglement means proposing a new phyical process by which that information can contribute to a single, non-local value, which people researching the brain haven't even taken a stab at.

But you're already proposing something so fundamentally different to anything that exists, namely consciousness arising from patterns. There is no reasonable reference point to make any assumptions about such a thing.

>they seem to blithely ignore the physics/informatiom theory implications of that.

We can't make physical information assumptions about something which as far as we know may not even be physical. There is no obvious space-time location that consciousness exists at such that it needs to gather information to that point.


> Proposing that it's not entanglement means proposing a new physical process by which that information can contribute to a single, non-local value, which people researching the brain haven't even taken a stab at.

I've never seen a brain researcher claiming to have seen any non-locality. It would be the kind of thing everybody would be talking about. Mostly, people seem happy to accept many speed-of-light delays on something that is centimeters wide and counts time in dozens of milliseconds.

You want a way to keep coherence overall through the brain, but there's no reason to think that coherence is needed.


Except that researchers also haven't seen the information converging to a single point, which should be obvious if it's happening.

I agree that the computing is done with moving charges and chemical propagation (at substantially below the speed of light), but that doesn't account for how the information gets integrated into a single signal for experience. But we also see that kind of structure in, eg, the proposal to build a TQC, where you build a computation into non-local structures built out of entangled particles by moving around charges at substantially lower than the speed of light.

If it's not being amalgamated into a point object (which doesn't seem to be the case -- there's no 'you' point in the brain), then it must be being amalgamated into a non-local object.


I did cut off your sentence, and it was inadvertent.

I am happy to rephrase my question as what is "the medium they're creating the signal in?"

You are clearly getting annoyed at this, and I don't see us making much progress. I find this "experiential medium" even more confusing, and I don't see the connection between entanglement and "unified" experience.

But, hopefully others will make up their own minds based on our words.


Penrose himself does not seem to be entirely convinced of that:

"Is Penrose making the case for free will? “Not quite, though at this stage, it looks like it,” he said. “It does look like these choices would be random. But free will, is that random?”


Free will can be both random and non random at the same time. For example, a recursive infinity of antecedents.

Like I said before, we just don't have the concepts to model free will right now, so we should not dismiss it as impossible.

Imagine trying to explain to someone living 1000 years ago things like quantum entanglement and superposition. Or Gödel's theorem and the Halting problem. These concepts challenge our intuitive senses, and yet we have to grapple with them because they are supported by real evidence.

Now imagine a thousand years from now, new phenomena or understandings will generate concepts that can model free will that is both non-random and non-deterministic.


> Now imagine a thousand years from now, new phenomena or understandings will generate concepts that can model free will that is both non-random and non-deterministic.

This abstractly makes sense, but I think the free will supporters still have the burden of proof of, right now, explaining at least at a very high level what free will would be if not random/deterministic. Otherwise we must lend credibility to all kinds of crazy theories.


Give a description of 'free will' such that it can't be replaced with 'chaotic', 'non-deterministic' or 'random'.

So far nobody has been able to do that.


Give me proof of the existence of true randomness without referencing wavefunction realization.

I think free will is the ability to make conscious choices. These choices are guided by a chaotic, but ultimately deterministic preference function, which itself can be modified by conscious choice. You could argue that the deterministic preference function is a negation of free will, but I think the separation in the causal chain between condition and response is significant. There are lots of cases where there is no experience of choice mediating the transition from condition to response, so I think that feeling is indicative of something meaningful.


>but I think the separation in the causal chain between condition and response is significant.

The term 'free will' seems to be a thought-terminating cliché. It seems poetic and deeply meaningful, but descriptions like yours say it's just 'deterministic action interrupted with coin flips'. From the subjective point of view it does not matter whether an action is deterministic or an arbitrary coin flip. There is no 'freedom' in free will.


If your bar for consciousness emulation is quantum randomness, I'm glad to tell you that computers are great at getting this one too. Yes, it's outside the scope of pure Turing machines, but that is of little relevance.


Upvoted for the reference to Steve Harnad, what a lovely man :)


Basically, "solving" consciousness means being able to answer this question: (it is stated in simple terms, but interesting to grapple with nonetheless)

Say a loved one died. Let's assume technology has progressed to the point that, before he died, his brain and body were recorded in exact detail and replicated.

Would you accept a simulation or instantiation of this brain/body as a "resurrection" of your loved one?

If not, then consciousness is not simply a matter of computation


It's an interesting philosophical question, but identity is only one aspect of the nature of consciousness. (And asking whether one would "accept" the clone isn't a particularly good framing of the problem.)

Far more interesting to me is how consciousness even occurs. It is seemingly impossible to study, because consciousness is fundamentally not externally observable. Even in humans, we have essentially no way of knowing for sure that anyone but oneself is conscious.


I like to pose the question subjectively: what would it be like to have my own consciousness suddenly operating within a computer instead of my body, right now? Would the machine have to simulate my bodily experience to give my mind a frame of reference?

I would say once I can go back and forth and know what it's like, then and possibly only then, I could accept such a resurrection.

I'm curious about what Penrose would say on this perspective.


Ok - imagine you are completely paralysed, or blind, or deaf, or all of the above.

Are you dead?

People do, and have, experienced going there and back from this...

https://en.wikipedia.org/wiki/Kill_Bill

(also sleepin' dreamin')


> his brain and body was recorded in exact detail and replicated.

I'm not a physicist, but as far as I remember it's not a matter of advanced technology but rather a quantum limitation of the world (you cannot copy an object with all details; see the no-cloning theorem).

> Would you accept an simulation or instantiation of this brain/body as a "resurrection" of your loved one?

Of course not, it's just a simulation although a very convincing one :)


A simple argument for why consciousness doesn't compute, which doesn't rely on QM, is that computation is a form of abstraction from subjective experience, as are all explanations and models. To achieve objectivity, we divorce our creature-specific and individual experiences to arrive at patterns common across experiences, and we call those real.

To go beyond that is to make a metaphysical argument that computation, information, or math makes up reality (as opposed to just things in the case of materialism, or ideas in the case of idealism, or the unknowable noumena in the case of Kant).



