That being said, I'm not sure why there's quite so much vitriol towards Penrose and his hypothesis. The leveraging of quantum effects in photosynthesis and enzymes has been demonstrated, and recent studies suggest that the sense of smell may also be based on quantum phenomena. So it's not all that unreasonable that there might be something quantum going on, even if it's not Penrose and Hameroff's microtubules.
Another interesting quantum hypothesis in neuroscience is Fisher's: https://www.quantamagazine.org/20161102-quantum-neuroscience...
Quantum computing happening in the brain is an extremely extraordinary hypothesis, and requires some very good evidence before it's accepted. Quantum effects happening in the brain is a "well, duh, and you'll tell me water is wet later?" hypothesis, and requires some good evidence not to believe in it.
That's only one school of AI ("weak AI"). The other school ("strong AI") says that it's certainly worthwhile to create an intelligence that is capable of thought the way humans are -- if only to get a better handle on what "thought" and "intelligence" really are. Currently weak AI is "winning" because the surveillance and online-advertising industries can get more use out of it. But that doesn't put strong AI out of reach, nor render it wholly irrelevant.
My skepticism comes less from a hatred of "Strong AI" and more from the fact that it has been promised over and over again with no results to show for it. In addition, every time "weak AI" makes progress, there's always someone who writes an article bashing the progress of "weak AI" and saying "it isn't REAL AI (tm)".
This is kind of similar to the reason I dislike neuromorphic architectures: you can't just assume that you're right and look down upon all others; you need to show results if you want to do that.
Could that just be because Strong AI is much, much more difficult to achieve than Weak AI? And not that it is less valuable an end to pursue than Weak AI?
In other words, does the value of one vs. the other necessarily correlate to our previous success in one vs. the other?
But weak AI might be included in that.
How could that work concretely? I'm struggling to think of how an agent could be intelligent, e.g. able to effectively achieve its goals, without being aware of itself.
I suspect that, if you could communicate with one, any sufficiently intelligent being would be aware that it's a thinking being.
What else does "self-awareness" encompass that I'm missing?
The alien in Blindsight also apparently does not care to understand such things, and may lack the capacity for this kind of thought. It runs entirely on instinct, but these instincts bake in extremely complex behaviors that are beyond human capabilities.
I'm also reminded of two other aliens from other sci-fi series: 1) the Orks from Warhammer 40k. Dumb as a rock, but they can build devastatingly effective vehicles and weapons from salvage and garbage. God knows how. They certainly have no idea. 2) The Motie engineers from "The Mote in God's Eye." They can instinctually build and improve on machines but cannot understand, or come close to explaining, how they did it.
The idiot-savant is not generally intelligent. Depending on what you mean by "complex mathematical problems" even the neural components actually responsible for solving the problems might not be intelligent. In the case of actual idiot-savants, I'd suspect those components aren't 'intelligent'; they're too simple.
If the alien in Blindsight really "runs entirely on instinct", then I'd expect it to do badly in novel situations, unless its version of "instincts" basically "bakes in extremely complex behaviors" that make it functionally equivalent or similar to a human, at least at the level we're used to dealing with as humans.
Also note that most of your examples are fictional, i.e. not actual evidence beyond maybe extremely weak plausibility. AFAIK, every "high intelligence" of which actual people have actual strong evidence is self-aware. That seems like it should be an important fact!
Also, you're conflating self-awareness with the ability to (verbally) explain, i.e. communicate, thoughts to others. By "self-awareness", I was thinking of an 'intelligent system' including some sub-model of itself in whatever relevant domain we're considering. For a 'general-purpose reproductive-fitness-maximizing-strategy executor' that's intelligent, e.g. us, it seems pretty unlikely that they don't, generally, model themselves.
We should probably taboo "self-awareness" as it's pretty obviously too slippery and ambiguous.
Well, a lot of very intelligent people never achieve their goals either.
I'd suggest that intelligence is not the achievement of goals but the formulation of goals. And funny enough, people generally have no self-awareness about how they come to have goals. Some kid grows up wanting to be a fighter pilot. Why? Why does some other kid want to be a writer?
Why do I like peach ice cream more than chocolate? Achieving the goal of getting some peach ice cream is a lot easier than understanding why I want to.
Intelligent beings want things, on their own. Rational consciousness is just the tool we use to get what we want. The importance of articulated self-awareness to intelligence is probably overstated by humans.
They're not intelligent with respect to the goals they're not achieving. Or they're competing with other people that are more 'intelligent'. There are (of course!) confounding factors but, all else being equal, the more intelligent agent should win.
Let's use a concrete example. Consider someone that's very good at solving math problems but is unhappy with their job. What's the problem with just considering that they're intelligent with respect to solving math problems but bad at (all of the skills necessary to be good at) finding a satisfying job?
Achieving goals, winning 'games', etc., seems like a really useful and really general way of measuring the effectiveness of things and that seems pretty close to, if not exactly, what intelligence is (if it's any one thing in particular).
> I'd suggest that intelligence is not the achievement of goals but the formulation of goals.
What's the motivation to define intelligence in that way?
Not necessarily. An intelligent agent could lose to an agent that can simply see the future.
Seeing the future has a tendency of becoming paradoxical when you start acting on that information though.
The paradoxes are the same as in time travel. Imagine your beast sees itself being hunted down two days in advance; it simply goes somewhere else so the hunters don't find it. Then it didn't really see the future, did it?
depends on your model of time and the universe, i suppose. perhaps "a future" vs "the future".
It's certainly relevant whether your AI technique involves creating conscious, suffering beings!
I think the more immediate concern is that we begin to turn over everyday life-and-death decisions (driving, obvs) to machines which are, at the end of the day, unconscious, and programmed by us oh-so-fallible human beings.
It's the distinction between programming consciousness and physically reproducing it. The latter is for physicists and biologists.
Equating an abstraction with what it's abstracting is a mistake, but assuming something cannot be programmed or simulated is also a mistake. Anything observable can be abstracted.
But to then think something can be accurately simulated without a good understanding of its nature is another mistake. So programmers should be following physics and biology.
And finally, to equate the two when the simulation is accurate is yet another mistake.
It's like mistaking the gravity in a video game for real gravity. They are never the same. But they are related: One is a simulation of the other.
Biological systems worked and evolved into intelligent, self-conscious beings without anyone who understands how they work guiding this evolution. Similarly, self-conscious AI could emerge without people having a good understanding of the source of the consciousness.
Any evidence? It's fun to think about, but most people who think this are guilty of one of the mistakes I've listed. Or don't consider it a mistake. Or are not programmers.
AI, as long as it is software, is programmed with a very specific understanding of what is being programmed. Every line of code is intentional. Nothing exists in between the lines.
> guiding this evolution
AI is perfectly guided. Every program is completely understood (insert incompetent programmer joke here). Every symbol is typed with purpose on purpose.
The point is, Roger Penrose is not talking about programs. He's talking about physics.
When a programmer successfully emulates consciousness, every line will be deliberate, and they will know exactly how they did it. Not only that: if their program behaves like real consciousness, the programmer must have had knowledge of real consciousness. Otherwise what they've built would be something else. Something exciting, I'm sure, but something else.
All of our models (and programs) are based on our understanding of the real world. Hence, they are simulations. The deeper our understanding of the real world, the more accurate our simulations.
I am not so sure. The current success of deep learning seems to be a direct counterexample to this. These techniques are yielding great success by imitating some patterns we observe in the brain, but the resulting inferences more or less defy rigorous explanation. We might fully understand the structure of a neural network but explaining the "why" behind its decisions seems to elude us.
> subjected to all kinds of experiences, which, in turn, will result in this brain becoming conscious - gradually.
Again, there is no evidence that "all kinds of experiences" will result in gradual emergence of consciousness from anything we have observed.
To a computer, experience is just data. So we're not talking about computers.
Phrases like "software brain" and "conscious machines" are ways of adding something that is not software to computers. But trust me, if we're talking about software, the programmer will know every line of their code.
Now, if you want to get meta, we can consider programs that write programs. But again, the first program will be written by a human, and we will know exactly what we are doing.
> resulting inferences more or less defy rigorous explanation
We have written programs that take input and produce data models. There is no consciousness here.
Humans take input and produce models too. But I fear you may be confusing learning with consciousness and agency. These are not to be found in the code of learning machines, nor in the models they've learned. I am sorry, but it's just not there.
For every abstraction there is an implementation. That is physics. With no implementation, we have metaphysics. Here, "consciousness" is the abstraction. The physics of it is the physical implementation, and is what Roger Penrose and other scientists are working on identifying. The simulation will have a coded implementation. Meaning, whether we generate it or we build something that generates it, when we have "consciousness" we will have the code that backs it. To be able to do this without any understanding of what consciousness is would be analogous to monkeys typing Shakespeare.
Of course there is. The human "wetware" brain does just that.
>know every line of code
But code is there to simply provide some basic mechanisms, not the brain's structure. It is better compared to laws of chemistry or something on that level. It does not need to rewrite itself for the data structures that represent the simulated brain to evolve.
That said, I think you should be a little more critical of deep learning. So-called neural networks are more meaningfully described as high-dimensional curve fits. When people "train" neural networks, they're fitting the parameters of a model (just like y = a*x + b is a model with a and b as parameters) to some data. The results are impressive, and sometimes downright spooky, but neural networks are closer in spirit to a curve fit in MS Excel than they are to a real brain. There is no deep why. These are just nested compositions of affine transformations and ramp functions.
A reductionist would say that we can explain any phenomenon in the Universe because we know how atoms interact and everything is built out of atoms. When you ask why a tree grows, the answer that it grows because of the interactions between atoms is not useful and approaches the problem at too low a level. Similarly, a programmer can create artificial neurons and perfectly understand how these neurons operate on a low level, but it doesn't mean the programmer will understand the high-level emergent phenomena that can result from these low-level interactions.
This is not at all how modern AI systems work. Modern AI systems work by feeding lots of data into a system that uses randomized heuristics to come up with an answer that has high confidence. You change the training data a little bit and the output can be entirely different, for reasons that are far from clear.
I definitely believe things happen in large programs that no one really intended.
Except that there is none: consciousness is "programmed" into our brains (by hard-wiring + social experiences); artificial intelligence is "physically reproduced" as a mesh of transistors and higher-level structures built from them (logic cells, LUTs, FPGAs, PSMs, CPUs, HPC clusters, etc.).
(Partially relying on software is not an issue: even a brain completely simulated in software could, in principle, exhibit consciousness just as well as a "wetware" one can.)
When I make a decision, I tap into my data and my knowledge, but the "I" is not inside that data nor did I emerge from it.
The topic of artificial consciousness would be Artificial Consciousness or Artificial Agency.
The ability to get things right is not to be confused with knowing what is right. Or put another way, you could be a complete idiot and still exhibit your will and perfect agency.
Seems morally relevant re: does the suffering of AI entities matter.
Sentience is defined as feeling or sensation as distinguished from perception and thought, or as the possibility of suffering (a mental state a sentient subject wants to avoid).
It's possible to conceive of high-level intelligence without consciousness. It's possible to conceive of consciousness without sentience. Just having awareness of internal state, preferences, and goals does not imply suffering.
This feels like an inadequate definition of suffering, but it's the only objective definition I can come up with. Is the conclusion that suffering is only meaningful with regards to animals? (Do insects suffer? Plants? Bacteria?).
In fact, the whole brain doesn't have pain receptors, which is why you can do surgery on it while the patient is awake.
The feelings one might experience upon hearing of the death of a close friend are clearly not the result of a "nerve transmitting pain", but most people think they're pretty unpleasant nonetheless.
If an AI can experience similar anguish, then that's ethically relevant to how we use them.
What's not entirely clear to me is how we can create useful consciousnesses without emotions in general. As humans, we are driven by our emotional states to perform different behaviors as well as think about different things.
Maybe it's possible that we can hard-code some emotional drivers into our artificial intelligences. Perhaps we could program a consciousness to be driven by the opportunity for human advancement, and allow that consciousness to take over from there in how it wishes to pursue that goal.
That's possible, but I don't see any reason to believe it's definitely true. We know essentially nothing about the nature of consciousness.
Humans can relate to humans emotionally because we are very similar to each other in a neurological way, not because we have some magic mental capability like 'sentience' or 'consciousness'.
Your syntax suggests that you consider yourself to be a prominent AI scientist like Andrew Ng, but I'll assume that's just a mistake!
As to what is really the aim of AI as a field, there are two issues here. On the one hand, the original AI project as conceived back in the early '50s by the likes of Turing indeed aimed at getting computers to mimic human intelligence. Everything that has happened since then in the field has followed from that original and grandiose quest.
It's true that in more recent years the applications of AI have tended to be aimed at much humbler goals, like performing specific, isolated tasks at a level approaching (and sometimes surpassing) the human, in select tasks like image classification or machine translation, etc.
However, this is by necessity and not by choice. Think of it this way: if we would benefit from a system that can perform select tasks as well as humans, then we would benefit a lot more from a system able to perform _all_ conceivable tasks that well (or better). In that sense, the motivation of the original AI project remains alive and well.
Consciousness of course seems to be an integral part of human intelligence, and it's reasonable to expect that it would be a prerequisite for strong AI also. In that sense, yeah, I'm afraid the discussion about consciousness is part of AI and will remain so for a long time to come.
The same can be said of any conscious experience that follows the spectrum from fear to love, terror to joy. If we want AI to solve human problems, it will need data about what it feels like to experience reality, and that data can only come by being conscious itself.
No. If it isn't conscious it isn't intelligent. There is no AI without consciousness because a non-conscious entity cannot make decisions independently -- react intelligently to its environment.
This is a very strange conclusion to make. Maybe someone can elucidate. Godel's theorems apply to tightly-controlled formal systems. And they do not, in fact, apply to particularly weak systems (e.g. sentential logic). Why would Penrose think that Godel has anything to do with consciousness? All it seems to have done is prove that, in some systems, there are known unknowables (specifically, that the consistency of sufficiently-complex systems cannot be proved).
If anything, it should lead us to take a train of thought similar to Chalmers': consciousness is unknowable (maybe kind of like God), but even that's a stretch. Because, like I mentioned above, Godel's theorems are about formal systems. Not only is the real world not a formal system, but (at least on the quantum level), it's also non-deterministic. Now, there are probabilistic logics out there that follow Godel's findings, but there's a lot of work that needs to be done to bring that in the real world.
A Turing machine is a tightly-controlled formal system. Either the brain is a Turing machine, which means it can be accurately simulated on our computers and is subject to the limitations of Godel's theorem, or it is not a Turing machine, which is what Penrose argues, in which case it can't be accurately simulated on Turing machines.
You can make all kinds of claims:
- Either the brain is a V12 engine or it's not
- Either the brain is a perfect circle or it's not
- Either the brain is X or it's not
All these statements are trivially true, but it's not like I look at a V12 engine thinking "hmm, I wonder if it's conscious" (some pan-psychists do but that's still weird to me). Similarly, looking at the property of a formal system and jumping to consciousness makes (to me) little sense.
Besides, my knock-down argument goes something like this: I can concede that our brains are machines, but suppose I believe they are Presburger counting machines, where Godel's incompleteness does not apply. What then? There is no insight in that claim. I just think Turing machines and Godel are used as a bait and switch. Because, really, any theory of consciousness will not have anything to do with either.
A pushdown automaton can solve certain problems that a finite state machine can't. Finite state machines can't solve the general palindrome problem, but they can be designed to solve palindromes of a bounded length / alphabet. The complexity explodes with the length of the string and number of characters, but it can be done.
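As a concrete sketch of that contrast (the alphabet and the length bound here are arbitrary illustrative choices): a pushdown-style check handles palindromes of any length with one small stack, while the finite-state route must bake every bounded-length case into its state table, which is what explodes as the bound and alphabet grow.

```python
from itertools import product

def is_palindrome_stack(s: str) -> bool:
    # Pushdown-automaton style: push the first half, then pop against
    # the second half. Works for strings of any length.
    stack = list(s[: len(s) // 2])
    for ch in s[(len(s) + 1) // 2:]:
        if not stack or stack.pop() != ch:
            return False
    return not stack

# Finite-state style: over alphabet {'a','b'} with length bounded by 3,
# there are only finitely many strings, so the accepting set can be baked
# directly into the machine. The table grows exponentially with the bound.
BOUND, ALPHABET = 3, "ab"
ACCEPT = {
    "".join(w)
    for n in range(BOUND + 1)
    for w in product(ALPHABET, repeat=n)
    if "".join(w) == "".join(w)[::-1]
}

def is_palindrome_bounded(s: str) -> bool:
    assert len(s) <= BOUND, "only defined up to the bound"
    return s in ACCEPT
```

Both functions agree on every string within the bound; only the stack version keeps working when the bound is lifted.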
To me, our attempts at creating a general AI with a Turing machine are starting to take a similar shape. We know we can generate an algorithm (machine learning) that can solve a contained problem.
Most people who have read "The Emperor's New Mind" have been surprised that Penrose doesn't seem to realize that. He has very strong intuitions about consciousness and cognition, but he can't explain them to others.
Like create culture, write the works of Shakespeare or create a workable theory of the mind of others? Or fall in love, teach a child to play cricket or understand (and act on) an Opera?
Also why should I care a heckin' heck about a heuristic time polynomial search? I'll work out complexity when I have bounds on correctness... not before.
The argument that those that find something beyond formalism must formalise it in order to be regarded as serious is not a serious argument!
Eh; waving your hands around and claiming, without much evidence, that of course there's "something beyond formalism" isn't a "serious argument" either.
It seems like you, being presented with The Mystery of Consciousness, and having the option to either explain, worship, or ignore it, are opting to worship it. Hence:
> > polynomial time heuristic search algorithm.
> Like create culture, write the works of Shakespeare or create a workable theory of the mind of others? Or fall in love, teach a child to play cricket or understand (and act on) an Opera?
This explanation https://www.scientificamerican.com/article/what-is-godels-th..., a little closer to layperson level, uses integer arithmetic as a simple example. Peano's axioms describe integer arithmetic - easy peasy! What Gödel says is that there are, nevertheless, statements about results in this system that can be neither proved nor disproved from those axioms.
The problem appears to be not describing the system but proving every possible conjecture about it.
You are suggesting that "human consciousness" is a separate, ineffable thing from a "purely logical system", but I don't see any reason a computer couldn't do what your brain is doing. You can't tell me the truth value of the sentence either.
1. Consciousness (subjective experience, qualia)
2. Intentionality (the aboutness of mental content)
What would a conscious algorithm even look like?
Umm, because it is an open problem?
This is like saying the problem of dark matter is not a scientific problem because we don't know everything about it.
This is not correct to begin with. Incompleteness in this setting means that a formal system either contains true statements it cannot prove, or is indeed inconsistent (i.e. wrong). A "known unknown" is just an oxymoron.
> that the consistency of sufficiently-complex systems cannot be proved
that sounds more like it. EDIT: I have to read up on it again and again ... the consistency of sufficiently-complex systems cannot be proved within that system.
Nope. Godel's Incompleteness Theorems (GIT) are metalogical. They leak to any (sufficiently-complex) formal system. You can't logic them away by going up a level of abstraction.
Let's say you have a (sufficiently-complex) system A which is susceptible to GIT. You can describe A in terms of a meta-system B. By definition, B will also be susceptible to GIT. You can describe B in terms of C, and so on, but every meta-system that follows will still be susceptible to GIT.
One of the many reasons I admire Roger is that, out of all the AI skeptics on earth, he’s virtually the only one who’s actually tried to meet this burden, as I understand it! He, nearly alone, did what I think all AI skeptics should do, which is: suggest some actual physical property of the brain that, if present, would make it qualitatively different from all existing computers, in the sense of violating the Church-Turing Thesis. Indeed, he’s one of the few AI skeptics who even understands what meeting this burden would entail: that you can’t do it with the physics we already know, that some new ingredient is necessary.
Penrose's argument is hollow. We understand the biophysics behind how the brain works. They aren't complicated at the level of detail you need to understand how the system works. We understand how neurons interact with each other. The evidence is consistent not only with our well settled understanding of chemical and biological systems, but also increasingly consistent with our development of information systems at scale.
The real gap is whether or not the totality of 'consciousness' is really just neural interactions at scale + starting state data, but the more we learn, the more that mystery vanishes. We understand the low-level perceptor->analysis models much better now, and we can map perceptor inputs at scale to outcomes in model tuning. In short, the consciousness of the gaps is rapidly losing his hiding spots.
Penrose's argument is taken seriously because we have collectively created a tremendous philosophical and institutional infrastructure around the idea of free will and the theory he attacks strongly implies there is some level of determinism in our cognitive systems. Since he is irreproachable on a personal or intellectual peerage level, he is a fantastic champion of this counter-cultural perspective.
However, if we apply even the barest of epistemological tools to the issue, we rapidly recognize that even if Penrose is correct, the chance of his position being accurate is so remote and so unverifiable as to be useless.
But the existence of a counter-argument against the deterministic mind itself, absent of its validity, is itself useful. It allows us to collectively hem and haw before changing our views and institutions to fit our understanding of how people work. Which means Penrose's argument is not going away anytime soon.
>However, if we apply even the barest of epistemological tools to the issue, we rapidly recognize that even if Penrose is correct, the chance of his position being accurate is so remote and so unverifiable so as to be useless.
You are saying that if he is right then it's probably not accurate?
>But the existence of a counter-argument against the deterministic mind itself, absent of its validity, is itself useful. It allows us to collectively hem and haw before changing our views and institutions to fit our understanding of how people work. Which means Penrose's argument is not going away anytime soon.
if the mind is deterministic how can we change our views - how can the position be useful? Things are, and you will, or will not.
No, I'm saying it's overwhelmingly unlikely that he's right, and that even if we're agnostic about which reasonable epistemological framework to use, there's SO much evidence against his perspective that it isn't important.
>if the mind is deterministic how can we change our views - how can the position be useful? Things are, and you will, or will not.
I've never understood this line of reasoning. If my perceptor is the output of a Bayesian evaluation that uses information as an input, receiving new information may or may not change the output.
How can a (meat)machine learning algorithm ever change from its starting state? Well, it is provided more cycles, changes state, and accordingly changes output. This doesn't mean that a machine learning algo needs to be non-deterministic and non-verifiable in order to change from state to state.
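To sketch that point with a made-up minimal example (a Beta-Binomial update, not any particular learning system): the update below is fully deterministic and verifiable, same starting state plus same inputs always gives the same state, yet feeding it new information still changes what it outputs next.

```python
from fractions import Fraction

def update(state, observation):
    # Deterministic state transition: count one more head or tail.
    heads, tails = state
    return (heads + 1, tails) if observation else (heads, tails + 1)

def predict(state):
    # Posterior mean of a Beta(heads, tails) belief about P(heads).
    heads, tails = state
    return Fraction(heads, heads + tails)

state = (1, 1)                       # uniform prior: estimate starts at 1/2
before = predict(state)
for obs in (True, True, True, False):
    state = update(state, obs)       # more cycles, new state, deterministically
after = predict(state)               # the estimate has moved to 2/3
```

Run it twice with the same observations and you get the same trajectory both times; the "change of views" is entirely a function of state plus input.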
As for the utility argument, I think there's a tremendous amount of utility to be gained from understanding how consciousness actually works, even if it is the we-are-bio-robots outcome.
There's nothing incoherent here. Determinism doesn't mean you can't change your views, it just means that the change was not acausal in some manner. You can still form new beliefs from being exposed to new ideas. Determinism doesn't mean you are a static thing that can never grow and change with your environment, it just means that the dynamics of your changing person was written in the initial conditions of the universe. But at no time point do you have access to all future influences, and so you will still change as time passes.
- everything could be predicted
- criminals are not responsible for their behavior
- human life is not sacred
- our thoughts are not real
It's this last one that catches me; predestination means "I think therefore I'm not really me", and then we need to abandon the whole of rationalism and retreat to the caves!
>everything could be predicted
Not at all. Think about what you need to predict someone's behavior. You need to emulate the entire universe (at least within the cone of causal influence) and then run it long enough for the thing you're trying to predict to occur. Essentially you're creating a duplicate universe and just seeing what happens. You're not predicting, you're observing. In this tangent universe, the person is thinking and deciding just the same. So ultimately the decision is still the necessary precursor to the event.
>our thoughts are not real
Our thoughts are real, they're just highly complex interactions of physical atoms. Our thoughts aren't non-physical, or acausal, or anything supernatural. But I don't see these properties being necessary for our thoughts to be real and hold value to us. As I said before, our thoughts necessarily precede our decisions which necessarily precede our actions. What else do you want from your thoughts?
you remain "responsible", however the basis for bloodthirsty retributive self-gratification is diminished.
it begins to be clear that the only rational course is rehabilitation and positive reinforcement.
> human life is not sacred
no more or less than before. i'm not sure what this even means, exactly.
> our thoughts are not real
> I think therefore I'm not really me
...eh? imho, "i think" is unproved. that which observes thought and that which emits thought are not by necessity the "same thing".
If a ball is objectively blue how can it become yellow?
By changing state.
Determinism doesn't imply immutability.
const g = () => {
  const f1 = (x) => (x > 0 ? 1 : 2);
  const f2 = (x) => (x > 0 ? 10 : 20);
  let f = f1;                // start in state f1
  return (x) => {
    if (x > 100) f = f2;     // a large input deterministically flips the state
    return f(x);
  };
};
const f = g();               // f(1) === 1 at first; after f(101), f(1) === 10
I see this as isomorphic with what you call choice, and I believe that any distinction you see is in your imagination. You don't know what the algorithm feels like on the inside; for all you know, a machine executing these instructions feels like it's got free will. You can't prove otherwise.
A type-2 automaton can decide to do, or not do, things. It can write the code you have written, if it decides to go on a functional programming course... To me this is the difference between the weather and a sentient entity. I think of computer programs in the same category as I think of the weather, but I see sentient entities in a different category.
I don't exclude machines from this category now, but I see all the machines that I have had experience of as part of category 1. How a machine could be described that fits into category 2 eludes me - and everyone else, I think.
If you accept that the biophysical system is deterministic, then the subjective experience is a deterministic, objective physical occurrence. Unless you're proposing that there is another system which mediates between them, the distinction doesn't really exist. This 'interceptor' system must be non-deterministic and outside the boundaries of the biochemical system; it is what mediates free will and changes the outcome of the physical system itself.
That's what Penrose says. He says there's some kind of quantum stuff happening on another level we don't understand that's changing your brain chemistry in some near-purposive manner because he can't reconcile the physics with how he feels about what his brain does. This is despite the fact that we understand the brain chemistry and aren't confused about how it works.
The likelier answer is that he just doesn't like the fact that his world class "self-directed thinking" is just the neural perception of patterns he didn't put into motion.
They're only related in terms of both being potentially part of the mind/body problem, but they are clearly separate topics. What position you take on free will need not impact your position on consciousness, at all.
But the reverse is not true. If your position is that consciousness is the result of a deterministic biophysical process, then your premises also inform your position on free will, unless you're willing to invent new systems to rebuild free will, as stated earlier.
Free will is often discussed without being defined very well. What does it mean for something to be a "choice" rather than just something that happens in the universe?
A lot of people talk about non-determinism solving the problem, but randomness doesn't seem much more like a "choice" to me than determinism. We wouldn't say that a die "chose" to land on six (even a quantum one).
And so to me, the only meaningful way to identify a "choice" is the conscious experience of making that choice. As far as we know, a rock does not experience choosing to fall, nor does a die experience choosing to land on six. But when I move my arm, I do experience choosing to do so.
There is a common underlying assumption that the hard problem of consciousness is tied to high-level cognitive or computational capabilities. I don't see the connection. The crux of being conscious is having the cognitive ability to be aware-conscious-attentive-reflective for at least a tiny amount of time. If we could scientifically determine/agree what consciousness is, we should be able to make a nice hyper-aware-of-blue-and-knowing-it robot, and it would be a relatively simple one.
Penrose starts from a specific conclusion, that formal systems are limited in ways that he clearly is not, and then searches for some way to explain the difference.
Maybe we're still searching for the right theory of consciousness to know what exactly we're trying to measure?
Umm, Michelson–Morley and Maxwell's equations were enough experimental evidence before Einstein. Einstein's work was reconciling mechanics to conform with electromagnetism, not the other way round.
EDIT: Not to say there isn't some physics theory developed independent of experiments. But relativity is not a good candidate.
There are statements that are true in Z but not true in a non-standard model of the first-order axioms. Such a true statement is not unprovable in an absolute sense; it just can't be proven using the first-order system.
1. Let's suppose for sake of argument that humans really can see the inherent truth of "Peano Arithmetic is consistent". That doesn't mean humans violate Gödel's Incompleteness Theorem: it could just mean that humans use axioms stronger than PA.
2. Gödel's Incompleteness Theorem only applies to systems that are perfectly logically consistent. Not sure how Penrose didn't notice, but humans... aren't.
3. When scientists proposed Quantum Mechanics as a replacement for Classical Mechanics, it was on them to explain how Quantum Mechanics simplified to Classical Mechanics in the common case. "Penrose Mechanics" is an even more radical departure — especially from a physics of computation standpoint, as Penrose Mechanics by definition would allow solving at least some of the problems in (ALL - R) in ~polynomial time. Penrose needs to explain how Penrose Mechanics reduces to Quantum Mechanics in the common case.
4. Penrose proposes that (a) there exists new physics, (b) that evolution has learned to computationally exploit the new physics via microtubules, and yet (c) that humans are the only lineage to make use of this feature of microtubules, even though microtubules are found in all eukaryotic cells (from mushrooms to amoebae). From a predator-prey standpoint alone, it would seemingly be a huge evolutionary advantage to be able to compute NP or R functions in polynomial time. (That ability is not _strictly_ implied by Penrose Mechanics, but it's a very likely consequence.) Penrose needs to explain why only humans take advantage of the computational power of microtubules, when microtubules have existed for billions of years and across millions of species.
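The "stronger axioms" escape hatch in point 1 can be stated precisely via the second incompleteness theorem: a consistent, recursively axiomatized theory extending PA cannot prove its own consistency statement, yet a strictly stronger theory proves it easily, so "seeing" that PA is consistent only shows we reason with stronger axioms, not that we transcend formal systems:

```latex
% Second incompleteness theorem, and the stronger-axioms escape hatch:
\mathrm{PA} \nvdash \mathrm{Con}(\mathrm{PA})
\qquad \text{yet} \qquad
\mathrm{ZFC} \vdash \mathrm{Con}(\mathrm{PA})
```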
Why are humans not logically consistent, then, if they are, as materialists claim, something that can be abstracted with a computer program given full information of their workings?
When I say "systems that are perfectly logically consistent" and "humans... aren't", I'm saying that the ideas humans have in their heads are not logically consistent. It's possible to write down "2+2=5" on a piece of paper, even if 2 plus 2 doesn't actually equal 5, and it's likewise possible for humans to believe "2+2=5" even if 2 plus 2 doesn't equal 5.
Some years ago I met Rodney Brooks, back when he was doing insect robots. He was talking about a jump to human-level AI as his next project. I asked him, "Why not go for a mouse next? That might be within reach." He said "I don't want to go down in history as the man who created the world's greatest robot mouse." He went off to do Cog, a humanlike robot head that didn't do much. Then he backed off from human-level AI, did vacuum cleaners with insect-level smarts, and made some real money.
There are no mice on HackerNews. I think.
We did, but is that really categorically different from groups of primates using primitive tools?
I think the idea that humans are categorically different from other species is misguided. Instead, consciousness and intelligence seem to be more continuous than discrete, particularly when looking at semi-intelligent animals like monkeys, dolphins and octopuses. Animals in that class can all learn pretty complicated tasks and are able to make use of tools. Self-awareness and consciousness aren't something we understand fully enough to even exclude all animals from possessing.
The only thing that seems particularly unique about humans is our ability to use complex language and record it. Passing down knowledge from one generation to the next is the _only_ reason we have "technological civilization".
Dennett argues that combining Darwin's strange inversion of reasoning (complexity from bottom-up iterative refinement) with Turing's Universal Machine provides a way of understanding how we are machines, built of machines, built of machines, etc., and it is the hierarchy that allows the complexity of minds to emerge.
That hierarchical iterative schemes are unexpectedly powerful is well mirrored by the recent successes of deep neural nets, and Dennett cites Hinton.
It's worth a listen and summarises his new book From Bacteria to Bach.
I don't think he does either.
He does not seek to explain the origins of the universe but argues that this is not necessary to understand how minds could come about.
is there a 'why'? some people say it was 'created', but then the creator 'is and we can't say why'.
“The English word is equivocal, and the main ambiguity is marked by a familiar pair of substitute phrases: what for? and how come?”
“Why are you handing me your camera?” asks what are you doing this for?
“Why does ice float?” asks how come: what it is about the way ice forms that makes it lower density than liquid water?
p. 48, From Bacteria to Bach and Back: The Evolution of Minds, by Daniel Dennett
Consciousness is rare and beautiful, however, not magic.
So, while it may or may not be true that the brain uses QM, it doesn't seem to really explain anything of interest. It doesn't make consciousness any less mysterious, or give any real insight into how we might create or understand our own consciousness.
Given that (or refute the premises, if you please), why is this theory interesting, relevant, or correct?
Quantum Computers are computers after all and he is talking about non-computable physics.
The reason he looks at quantum mechanics is that it seems to be missing something from our understanding (the "reduction of the unitary evolution"), and he "hopes" that this is non-computable.
Does that make sense?
E.g. let's suppose I make the claim that some people are conscious and others just seem like they're conscious. Can we test this?
Quite respectable people have theorized -- or should I say bloviated? -- that consciousness is merely an emergent property of complex systems. Again, we can't really define what consciousness is, so it's an interesting dinner party conversation starter, but it's not falsifiable.
Er... Maybe not consciousness-brain (although I'm no expert on this at all), but it's hard to dispute that we have a MUCH deeper understanding of movement-brain, perception-brain, memory-brain, problem-solving-brain relationships that we had 100 years ago.
Consciousness is a touchy subject. We've been studying cancer for a long time and haven't found a cure for it either. Yet, no one thinks that cancer originates in quantum effects. "we're no closer to solving it" is not a good argument in favor of one theory over another.
Consider that we know "where" things happen, but it doesn't follow we know "how" things happen.
It is a bit like opening up a computer and taking a thermometer and measuring the heat kicked off by various parts.
Now I can tell you that when I run MatLab, the part called "CPU" gets hot. And when I run games the part called "GPU" gets hot. So clearly the CPU is the Matlab part of the computer and the GPU is the games part of the computer.
What we need is a theory of software before we can make progress, and that is what we lack.
Edit: Adding some references. Not sure how to format it into a list.
Pattern Separation and Completion in Dentate Gyrus and CA3 in the Hippocampus:
Illustrating from your example, current state of the art seems to have managed to break the big blobs into smaller blobs (e.g., now we're looking at 'memory that binds multiple features between objects'), and then found more complex relationships ('looking at separation or completion operations in similar memories').
That still doesn't tell us how the actual programming works. We barely understand the role of dendritic spines, and we're still trying to get a handle on the utter complexity of single neuron interactions in the neocortex. He might have oversimplified, but I don't think he's wrong.
(Just playing devil's advocate. Would love to know how badly I misunderstood.)
Edit: nearly all of your links are paywalled.
Actually, there is at least some research suggesting quantum tunneling plays a role in random genetic mutation, giving rise to cancer.
Disclaimer: I am not a physicist or biologist.
Appreciation for nature, music and other artistic pursuits is related to an internal reward that favors exploration and creativity, which are essential skills in problem solving, which in turn is ultimately essential for survival.
In machine learning, especially in reinforcement learning, there are efforts to introduce internal rewards such as novelty seeking / curiosity and creativity. They have a clear benefit for survival. For example, ability to imagine might be seen as poetic, but in reality it is useful for planning actions without needing to play them out first, and only later acting. A useful skill that might save one's life in a split-second decision.
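As a minimal, hypothetical sketch of one such internal reward, here is a count-based novelty bonus of the kind explored in the curiosity literature: the agent's effective reward is the environment's reward plus a bonus that shrinks each time a state is revisited. The function names and the 1/sqrt(n) decay schedule are illustrative choices, not any specific published method:

```javascript
// Count-based novelty bonus: novel states yield a larger intrinsic reward,
// which decays as the state is revisited, nudging the agent to explore.
const visitCounts = new Map();

function intrinsicBonus(state, beta = 1.0) {
  const n = (visitCounts.get(state) || 0) + 1;
  visitCounts.set(state, n);
  return beta / Math.sqrt(n); // 1st visit: beta, 2nd: beta/sqrt(2), ...
}

// The agent learns from the sum of extrinsic and intrinsic reward.
function totalReward(state, extrinsicReward) {
  return extrinsicReward + intrinsicBonus(state);
}
```

The point is only that "curiosity" can be operationalized as an ordinary reward term; nothing quantum or mysterious is required for the agent to seek novelty.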
I would be surprised if the programmed representations of things I have long played and worked with (the symbols, numbers, vectors and virtual objects) could actually have some consciousness of their own, as though the only likely difference between simulation and reality were just a matter of scale and/or perspective.
If we're all made of Lego, but we need Technic sticks and joints to become conscious, how is that any less materialistic? Even the fact that quantum mechanics is a source of randomness doesn't seem essential.
Let anyone who thinks they understand consciousness inhale >18mg of N,N DMT and return humbled.
1. We don't understand Quantum Mechanics
2. We don't understand the mind
3. Therefore QM explains the mind
Or as Steve Harnad once said, he takes all the embarrassments and failings of one field and marries them with another.
I am not the only one who has this interpretation of him. I certainly arrived upon it independently but so did many (more credible) giants in the field.
As far as I can tell, he doesn't have an argument that consciousness stems from QM beyond the rather elusive nature of both.
So if you think that is wrong, by all means, explain the connection you think he elucidates.
In an attempt to avert that spiral of world-suck, let me attempt to instead meet your request head-on. Penrose is not saying what you say he's saying, and would not be able to fill up two books with your argument.
Let's talk hacker for a second. What you're saying is that Penrose is basically saying "well I don't understand this bug in my computer program, and I don't understand the kernel in my computer, so therefore a kernel bug is causing this bug in my computer program." You are going to be predisposed to see any reduction of a bug towards the kernel as being evidence that yes, your characterization of the argument is correct and the argument is indeed laughable. This is understandable if someone just told you that hypothesis off-the-cuff, which I surmise is how you've probably encountered Penrose. (That is, I take the fact that you're laughing as evidence that you've not read both The Emperor's New Mind and Shadows of the Mind.) The alternative however, is that someone has (a) a high-level statement of "this shouldn't be happening", and (b) knows that the only way that such things can happen, barring things like the MMU misbehaving, is a kernel bug, and (c) has been attempting to trace down the code to the fundamental syscalls, and found that the thing-that-shouldn't-be-happening is indeed bracketed around one syscall in particular which does not seem to be doing what the docs say it should do. And if a hacker has attempted all of this, I mean, they might still be wrong: but their argument is no longer something of the form "I don't understand the kernel and I don't understand this bug, therefore the kernel caused this bug." It is something more sophisticated than that, and you're doing a disservice by laughing at them.
One and a half of Penrose's books are ultimately about why the computationalist/algorithmist thesis of brain function seems deeply incomplete -- that a hallmark of some certain examples of reasoning appears to be deeply nonalgorithmic. (For example The Emperor's New Mind supplements his usual Godel argument with arguments that if brain plasticity comes from local neural plasticity then it is unlikely to have a great generality, but by a clever analogy if it does have a great generality then not only does that mesh with an idea of global plasticity, but it is very unlikely to be algorithmic. The one and a half books are basically full of little digs that, seen together, do seriously undermine faith in the thesis.) This is essentially the "bug" above. His favorite example is that we are capable of proving to our own mathematical understanding that our mathematical-understanding-algorithm contains true sentences which we will never be able to prove; if this algorithm exists then it is able to prove something that it will never be able to prove, which seems like a straightforward contradiction. But there are one and a half pop-sci books that he's written building up some other problems with this thesis in a way that even if you don't start off with knowledge of the relevant physics/comp-sci/whatever, you can hopefully understand it enough to appreciate the problem.
He follows these up with a principled dive into what's actually underlying these "bugs". That is, he discards ideas about souls and takes it seriously that our reasoning is realized in our brain, and takes seriously that this is a biochemical system, and evaluates whether the noncomputable phenomena that he is "debugging" can come from chaos or randomness or the like. He is able to rule most of these out with auxiliary considerations, and so tries to get one level closer to the hardware: the biochemistry is well-modeled, we know, by physical chemistry; there are likely no new fundamental interactions or whatever to be found in biochemistry.
Looking into the physical chemistry level, he finds that it can be somewhat cleanly divided into two parts: quantum and classical. Technically, it's all quantum, but you can handle a lot more with various classical rules of thumb, and this has basically all been done before him, so he can just borrow those results: the classical part of physical chemistry, and probably even the vast majority of the quantum part, is algorithmic (it is known that quantum computers don't do anything other than what classical computers do; they just might do some things more quickly). There is only one piece of the quantum part of physical chemistry which is not.
Unfortunately, he doesn't have access to the source code, nor docs for what nature does--so he actually constructs physical-chemistry models that could hypothetically do the sort of buggy things that he's seeing at the top-level that he has carefully ruled out at the other levels of explanation.
And this is all to say: as you can hopefully now see, these books contain a lot more than simply "we don't understand X, we don't understand Y, therefore Y causes X." Namely there is an actual understanding for example that X is realized in a system which has to be modeled with Y, and which has some features that can only come from certain corresponding features in Y, which we actually do understand very well. There has been a solid amount of good effort to push down that causal chain until one is left at these syscalls and then saying "something is fishy about those." And his argument might indeed be wrong, there are many little parts where maybe he's wrong about how chaos works and how it could cause a similar bug, just as our programmer above might be wrong about whether two processes are properly synchronized via locks or whatever -- but it's no longer this laughable thing of "ha, he doesn't understand X and Y so he thinks Y causes X!" that you're trying to say.
The language of bugs also serves to mask what the issue is. It is true that Penrose (as best I can remember) spends time talking about the limits of algorithms. I don't remember the "neural plasticity" stuff, but I do remember he was much impressed by our ability to understand Godel's theorem.
Let's just pause here.
Does that make sense to you? That our ability to understand the incompleteness theorem is somehow evidence against the algorithmic nature of the mind? Or do you think it shows we have a poor grasp of what it means to "understand" something, and of how symbolic reasoning and proof interact with our mind?
I understood neither his, nor your summary of his, explanation of how QM is supposed to plug this hole in our understanding.
In fact, your little discussion of QM has, if anything, bolstered my point.
Note: we know that biological laws rely on physical laws, and that QM (and, say, relativity) are our best theories so far. That is not in dispute.
But please explain how QM plugs in this Godel hole. When the crux of the matter comes, there seems to be general hand waving (on his part).
It does make sense that our ability to understand the incompleteness theorem is very close to self-contradictory. I don't know that I'd say it is, without a doubt, self-contradictory; I have not delved deeply into how provability logics might force □(a → (p ∧ ¬□p)) into a contradiction, and what you need for that (possibly, with reasonable axioms, it merely derives ¬□a, which just means that if there is an algorithm for understanding then we'll never uncover it?).
QM does not by itself plug the hole in our understanding; Penrose has never said that to my knowledge. However a tiny part of quantum physics is the only known part of physics that's capable of producing nonalgorithmic results (the vast majority of QM is linear algebra in funny hats), hence if we've got a nonalgorithmic result it makes sense to follow our "best theories so far" towards that tiny part of quantum physics.
My HN profile has carried my thoughts on Penrose and Hameroff's theories for some time now. Good to see it posted here and debated.
Anecdotally, I find the people drawn to his view are largely physicists, and the people who scorn him are largely "cognitive scientists". (I am closer to the second group than the first.)
What you find to be "reasonable mechanisms", I find to be completely unreasonable. No, I don't think entanglement is a "reasonable mechanism" for "the fact that a bunch of disconnected neurons produce one mind."
I don't even know why one would think that quantum entanglement is even vaguely relevant to the "problem", and I think it has something to do with a misunderstanding of the problem.
Do you think that there is a problem of mind in the form of "a bunch of disconnected neurons produce one mind"?
There is definitely a "binding" problem. Beyond the question of why the state of many disconnected neurons (or to go one level down, molecules) are subjectively experienced as a single consciousness, you also have the issue of why other neurons nearby are not subjectively experienced. After all, your consciousness doesn't even extend to your entire brain.
Why do you find it unreasonable that entanglement would be involved with integrating signals from disparate regions or several neurons?
The problem is how we integrate the signals from many regions in to whatever is generating the single perception (or perhaps several parallel perceptions -- I don't know that there aren't other experiences coincident with mine, just that I have access to one of them).
Unless you're postulating that a single neuron at a time is responsible for my subjective experience, then you do need to explain how several neurons are generating a single signal.
My experience with cognitive scientists is that they simply punt on the problem, completely failing to address how the signal is amalgamated into a single stream of experience even as they talk about what regions contribute features of it.
Let me put it this way, suppose there was a mechanism, call it X, that explains to your satisfaction how many different neurons can "integrate the signals from many regions".
Does that explain to you why physical neurons create a subjective experience? Would you now consider the "problem of consciousness" to be solved?
Leaving "consciousness" aside, there is no issue at all of course. We have a straightforward computational theory of how different neurons can integrate their output and produce a computation.
The problem is that this idea of "integrating" signals is not clearly laid out. If it is not the computational problem, then what sort of problem is it?
Koch and Crick have also proposed "a mechanism" for something like the co-ordination of various neurons to explain visual awareness: a frequency at which all the neurons "cohere." (Koch is also a real genius, by the way: https://profiles.nlm.nih.gov/ps/access/SCBCFD.pdf)
Again, the problem with their proposal is that it is not really a "mechanism." Suppose all the neurons that recognize a face happen to be vibrating at 40 Hz. Does that "explain" "consciousness"?
I lean towards the philosophers. Read one good critique by Fodor, and you realize that the questions are poorly formed.
This would give us a science of experience and allow us to categorize and create new experiential beings. That's a goal that many of us have.
Now, for various reasons, we might suppose a medium we're already aware of (or mediums in some combination -- as is the position of the materialist). Then the question becomes entirely about the integration mechanism, as that's the only piece of the puzzle we don't know.
So if you're a materialist, the integration of brain patterns into a signal which corresponds to subjective experience is the question of creating new 'souls' (in the sense of beings who experience), and also of categorizing what exactly has a 'soul' (in the sense of inner experience).
The problem is how the computational structure which produces our behavior couples to our experience of it, which is a consistent, evolving perturbation in something, even if just in the sense of being a quasiparticle formed of the constituent parts interacting. (Though, likely, we're missing parts of the computational story -- I don't think most serious scientists would argue that.)
I think you just don't like the question, but it's pretty clearly formed, at least as far as big research directions ever are.
"It proposes a way by which those neurons can create subjective experience, yes"
I just don't see it.
Suppose you somehow show me that various neurons contain particles that are entangled. Let's say I believe you.
So now how do they create subjective experience?
I see blue because my neurons are entangled? I see a flower because my neurons are entangled?
What is "the medium that they are creating"? How are you even using these words.
If I knew that my neurons have entangled particles, I would know no more about consciousness than I do now.
What I actually said, and you misunderstood: "the medium they're creating the signal in". Your paraphrase is so far from that, I have trouble even addressing the confusion. It's particularly egregious that you omitted key words when you're clearly using them to make passive aggressive comments about my understanding.
The answer to how this addresses the question is that the medium is experiential -- anything that creates effects in it is 'experiencing'. Experiencing seeing blue corresponds to a particular pattern of activity in the medium. (Though, there is no default 'experience of seeing blue' -- it's possible that we experience different ones, different animals have different ones, etc. This relates back to the computational structure.)
How entanglement is involved is that there's a single 'unified' experience when we know that different portions are generated in different brain regions, but don't (obviously) converge the information to a point (eg, it's always spread at least in a cluster of neurons), which suggests that we're talking about something non-localized in the medium, ie, not a point object.
The source of a lot of non-locality is through entanglement, so it's reasonable to conjecture that it's involved here, as well -- though, of course, it could be a different mechanism.
If the pieces of your brain which are generating the perturbation in the experiential medium(s) are entangled, it could provide a model of how it's carrying a non-localized piece of information -- your experience.
But if your theory is that consciousness arises as a result of particular patterns in a medium, couldn't that medium equally well be "the universe"? Why do you see it as necessary for the medium to be entangled? Why couldn't consciousness just be the result of patterns in non-localised particles?
Entanglement is a mechanism by which a single, non-localized value can be carried by pieces (none of which themselves carry that value).
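The textbook example of that property is the two-qubit Bell state: the joint state carries a definite correlation, while either qubit taken alone is maximally mixed and carries none of it:

```latex
% Bell state: the correlation exists only in the joint state;
% each reduced state is maximally mixed.
|\Phi^{+}\rangle = \tfrac{1}{\sqrt{2}}\bigl(|00\rangle + |11\rangle\bigr),
\qquad
\rho_A = \operatorname{Tr}_B\, |\Phi^{+}\rangle\langle\Phi^{+}| = \tfrac{1}{2} I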
And hence the materialist proposal here is precisely what you propose: that the universe is fundamentally experiential (or at least parts of it that brains interact with), and the single, non-localized experience is created and carried via entanglement of the constituent "particles", which are themselves impacted by the regional computation of the brain (eg, by knotting with the "main" knot carrying the experience, in analogy to a TQC), thus allowing the disparate regions to contribute to a single experience with characteristics picked up by regional contributions.
Proposing that it's not entanglement means proposing a new physical process by which that information can contribute to a single, non-local value, which people researching the brain haven't even taken a stab at.
Which is why I find their proposal that it's not entanglement to be strange -- they seem to blithely ignore the physics/information theory implications of that.
But you're already proposing something so fundamentally different to anything that exists, namely consciousness arising from patterns. There is no reasonable reference point to make any assumptions about such a thing.
> they seem to blithely ignore the physics/information theory implications of that.
We can't make physical information assumptions about something which as far as we know may not even be physical. There is no obvious space-time location that consciousness exists at such that it needs to gather information to that point.
I've never seen a brain researcher claiming to have seen any non-locality. It would be the kind of thing everybody would be talking about. Mostly, people seem happy to accept many speed-of-light delays on something that is centimeters wide and counts time in dozens of milliseconds.
You want a way to keep coherence overall through the brain, but there's no reason to think that coherence is needed.
I agree that the computing is done with moving charges and chemical propagation (at substantially below the speed of light), but that doesn't account for how the information gets integrated into a single signal for experience. But we also see that kind of structure in, e.g., the proposal to build a TQC, where you build a computation into non-local structures built out of entangled particles by moving around charges at substantially lower than the speed of light.
If it's not being amalgamated into a point object (which doesn't seem to be the case; there's no 'you' point in the brain), then it must be being amalgamated into a non-local object.
I am happy to rephrase my question as what is "the medium they're creating the signal in?"
You are clearly getting annoyed at this, and I don't see us making much progress. I find this "experiential medium" even more confusing, and I don't see the connection between entanglement and "unified" experience.
But, hopefully others will make up their own minds based on our words.
"Is Penrose making the case for free will? “Not quite, though at this stage, it looks like it,” he said. “It does look like these choices would be random. But free will, is that random?”
Like I said before, we just don't have the concepts to model free will right now, so we should not dismiss it as impossible.
Imagine trying to explain to someone living 1000 years ago things like quantum entanglement and superposition. Or Gödel's theorem and the Halting problem. These concepts challenge our intuitive senses, and yet we have to grapple with them because they are supported by real evidence.
Now imagine a thousand years from now, new phenomena or understandings will generate concepts that can model free will that is both non-random and non-deterministic.
This abstractly makes sense, but I think the free will supporters still have the burden of proof of, right now, explaining at least at a very high level what free will would be if not random/deterministic. Otherwise we must lend credibility to all kinds of crazy theories.
So far nobody has been able to do that.
I think free will is the ability to make conscious choices. These choices are guided by a chaotic, but ultimately deterministic preference function, which itself can be modified by conscious choice. You could argue that the deterministic preference function is a negation of free will, but I think the separation in the causal chain between condition and response is significant. There are lots of cases where there is no experience of choice mediating the transition from condition to response, so I think that feeling is indicative of something meaningful.
The term 'free will' seems to be a thought-terminating cliché. It sounds poetic and deeply meaningful, but descriptions like yours say it's just 'deterministic action interrupted by coin flips'. From the subjective point of view it does not matter whether an action is deterministic or an arbitrary coin flip. There is no 'freedom' in free will.
Say a loved one died. Let's assume technology has progressed to the point that, before he died, his brain and body were recorded in exact detail and replicated.
Would you accept a simulation or instantiation of this brain/body as a "resurrection" of your loved one?
If not, then consciousness is not simply a matter of computation.
Far more interesting to me is how consciousness even occurs. It is seemingly impossible to study, because consciousness is fundamentally not externally observable. Even in humans, we have essentially no way of knowing for sure that anyone but oneself is conscious.
I would say once I can go back and forth and know what it's like, then and possibly only then, I could accept such a resurrection.
I'm curious about what Penrose would say on this perspective.
Are you dead?
People do, and have experienced, a there-and-back from this...
(also sleepin' dreamin')
I'm not a physicist, but as far as I remember it's not a matter of advanced technology but rather a quantum limitation of the world (you cannot copy an object with all details).
> Would you accept an simulation or instantiation of this brain/body as a "resurrection" of your loved one?
Of course not, it's just a simulation although a very convincing one :)
To go beyond that is to make a metaphysical argument that computation, information, or math makes up reality (as opposed to just things in the case of materialism, or ideas in the case of idealism, or the unknowable noumena in the case of Kant).