I live by a couple of (vaguely related) principles:
a) Consciousness is just your brain trying to anticipate the future.
b) Your brain compresses (normalises?) repetition in memories. So even if day-to-day events happen at normal speed, the years seem to fly by when you reflect on them. If your life seems to be flying by then maybe you need more novelty.
I always try to see the human brain from an evolutionary perspective.
What the human brain added was a way to simulate other brains, driven by our growing reliance on communication and coordination. It's basically a way to recognise smart freeloaders. E.g. a man trying to sweet-talk a woman into sex (and impregnation): he will make a lot of promises, but how far can you depend on him? It's of vital importance to know, so you don't get duped.
So the human brain is an advanced bullshit meter and bullshitter locked in an arms race, and this brain-simulation machinery has all kinds of unexpected side effects. At least, that's the understanding I came to from clues I picked up from some biologists. If there's some truth to it, then judging by the world today, we still have a lot of room to get better at it.
Having a mammalian cortex (or its avian equivalent) is what gives you the ability to predict. The cortex is the prediction machine - capable of overriding our evolutionarily earlier reactive behavior with planned behavior based on prediction.
The ability to predict is surely what drove the development of the cortex, since the benefits are massive - it lets you know what comes next when you hear the roar of a tiger or see a poisonous snake, and lets you make a plan, rather than waiting until something bad is actually happening to you.
Parts of our cortex specialized to social behavior would have come later. I doubt bullshit generation/detection would have been a specific driver though! As girls attracted to "bad boys" attest, the urge to merge is far more basic than that.
There's a simpler explanation for time flying by as we get older: the older you are, the smaller a fraction of your total lifespan is represented by a single year. One year is half of a two-year-old's life, but only one eightieth of an eighty-year-old's.
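Taking that proportional theory literally gives a neat toy model (the 1/age assumption is mine here, just to make the claim concrete): each year feels as long as its fraction of your life so far.

```python
# Toy model: each year's subjective length is its fraction of life lived so far,
# so year k of your life "feels" 1/k units long.
def subjective_length(year):
    return 1.0 / year

def subjective_total(years):
    # Harmonic sum: grows like ln(years), so later years add very little.
    return sum(subjective_length(k) for k in range(1, years + 1))

# Under this model, an 80-year life is subjectively only ~5 "units" long,
# and roughly half of it is over by age 7.
print(subjective_total(80))
print(subjective_total(7) / subjective_total(80))
```

The logarithmic feel matches the "years flying by" intuition, though whether the brain actually measures time this way is exactly what's in dispute.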
That's not an explanation, because it only makes sense under the unmotivated assumption that the brain measures subjective time as a proportion of lifetime. The brain has no reason to do this.
The brain "searches" through its memory space to find relevant information (dangers, opportunities, rewards).
If we define the perception of time passing through novelty, then there's less novelty the more experiences you have. I think that's an interesting model (if not perfectly accurate).
Likewise, the accumulation of relevant experiences might make time seem to actually pass slower, because you're making more connections at each moment and (arguably, might be false) have a more coherent and active brain -- but if there's less novelty worth remembering, it may appear in retrospect (when you examine your memory) that less (perceptual) time has elapsed for the same wallclock time.
So I think this suggests two kinds of time: instantaneous perceptive time (how much your cognition is active -- for example, it may be near zero during anesthesia) and comparative memory time (how much you judge time intervals from your memory of events).
One interesting example is perhaps when we've had an exciting week and it both seems that a month has elapsed in that week (since so much has happened), and that "time passed so quickly!". The former is explained by comparative memory time; maybe the latter is explained by us simply not thinking about the passage of time at all, and "skipping our clocks" (like forgetting to increment a clock or calendar). This is of course mostly speculation.
Though if I make a stack of identically sized boxes, the 80th box will be a smaller proportion of the whole stack than the 4th box was, yet the individual boxes feel the same size.
Edit: In saying that, if I was looking up at a stack of 80 boxes and someone added another on top it might feel subjectively like the tower was extended by less of an amount than if it was the 3rd box being added.
Our brains typically do judge based on relative change even when it doesn't make a lot of sense - e.g. we'll put more effort into saving ~$1 on something under $10 than into saving $2 or $3 on something that's $100, or $10 on something over $1000. The former "feels" like a more significant saving - but $1 is $1 either way. You can post-rationalise it by saying "but I buy $10 things more often than I do $1000 things", but do you really buy them 100 times as often?
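To put toy numbers on it (prices and savings invented for illustration):

```python
# The same few dollars of saving, judged relatively vs. absolutely.
for price, saving in [(10, 1), (100, 3), (1000, 10)]:
    relative = saving / price
    print(f"${saving} off ${price}: {relative:.0%} 'felt' saving, ${saving} actual")
```

The relative saving shrinks from 10% to 1% even as the absolute dollars saved go up, which is exactly the mismatch between "feels significant" and "is significant".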
Also, how we perceive time largely depends on how novel and new the experience was. I just got back from an Indian wedding that lasted two days, but it felt like a week of time had passed since I was experiencing so many new things for the first time.
> a) Consciousness is just your brain trying to anticipate the future.
My personal theory is that consciousness is a coalescence of weighted distributions that learn via a closed feedback loop and an injection of true randomness via quantum tunneling. That is, the brain is a mechanism by which true randomness gets turned into weighted distributions that equate to decisions and actions, with a feedback loop. As experiences get encoded as complex neuron paths or "memories" (this includes things like muscle memory, not just what we normally think of as memories), they build stronger paths and more connections, effectively increasing their "weight". This holds true down to how neurons function on an individual level.
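A toy sketch of what I mean by "weighted distributions with a feedback loop" (my own simplification, nothing neurologically accurate - the action names and rewards are invented):

```python
import random

# Actions are drawn from a weighted distribution; a reward feedback
# strengthens the "path" that was taken, so it's more likely next time.
weights = {"explore": 1.0, "exploit": 1.0}

def decide(rng):
    actions = list(weights)
    return rng.choices(actions, weights=[weights[a] for a in actions])[0]

def feedback(action, reward):
    weights[action] += reward  # stronger path -> more weight next time

rng = random.Random(0)
for _ in range(100):
    a = decide(rng)
    feedback(a, 1.0 if a == "exploit" else 0.1)  # pretend exploiting pays more

print(weights)  # the better-rewarded path has accumulated more weight
```

The rich-get-richer dynamic is the point: frequently rewarded paths dominate the distribution, much like the "stronger paths and more connections" described above.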
Randomness is the basis for all life and seems necessary for there to be any sort of curiosity component to intelligence. At a macro level, it seems to represent breaks in cycles that allow for change. Further, if our own decision making amounts to weighted distributions in neuron paths, how would a brain resolve an equally weighted distribution (no matter how small the chance of that happening is)?
Edit: To further explain, I apply the basic principles that govern life to all facets of life, all the way up to the complexities of human society, language, etc.. That is, DNA is a self-replicating system that through randomness was able to build more and more complex organisms over time. It generically represents a way to encode behaviors through time, a necessary subset of which include behaviors for self-replication and resource acquisition (mostly to satisfy the self-replication requirement). On a more complex level, human society is the organism, human language (including speech, writing, visual arts) is the DNA (a means to pass knowledge, behaviors, etc. through time), and humans are the "cells".
A deterministic pseudorandom source would also solve these cases, and it is not truly random.
I'm not saying these cases are pseudorandom (the logistics of a stateful PRNG in biology look hard), just that they don't seem to require true randomness.
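For illustration, a toy linear congruential generator (the constants are the common Numerical Recipes ones) produces behavior that looks random while being fully deterministic - the same seed always yields the same "choices":

```python
# A linear congruential generator: fully deterministic, yet its output
# looks random enough to drive varied behavior.
def lcg(seed):
    state = seed
    while True:
        state = (1664525 * state + 1013904223) % 2**32
        yield state / 2**32  # scale to [0, 1)

a, b = lcg(42), lcg(42)
print([next(a) for _ in range(3)])
print([next(a) for _ in range(3)] != [next(b) for _ in range(3)] or "same seed, same sequence")
```

Whether biology could plausibly carry that hidden state around is a separate question, but nothing about unpredictable-looking behavior by itself requires true randomness.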
Correct! I think that's the question though. Is there true randomness in the system, or is it really just playing out based on the laws of physics? I'm more inclined to believe the mechanisms by which randomness gets injected into DNA replication are pseudorandom, but for mental constructs I'm more inclined to believe the system is sensitive enough to true randomness.
Hmm, I don't think what I'm saying is the same thing. I'm saying that the behavior of organisms amount to systems with the same function no matter the complexity or timescale. Though to that end, I do consider human society to be an organism of sorts, but beyond that not so much.
Not entirely wrong. I think it's entirely possible for "free will" in the sense that it's normally talked about to not really be a thing. After all, we're taught about "nature vs nurture" in psychology, that is, learned vs inherent behaviors passed through genetics. I think what we call "free will" is itself curiosity manifest from random decision making. The overwhelming vast majority of our decisions are based on learned behaviors though, and decision making itself is known to be a cognitively taxing process; so we're predisposed to routine, simplification, and working with proxies.
Edit: to your point about needing validation: the desire for acceptance is, simply put, the desire for a positive feedback response on a personal or societal level. Remember, our brains are big reinforcement learning machines and crave feedback as quickly and unambiguously as possible. The desire for acceptance is both a thing that keeps behaviors "in line" with societal expectations while allowing for changes in those behaviors to shape societal behaviors (and expectations) over generations (time), and therefore make progress, or at least change.
This has been my thought process for a while as well - that the brain amplifies quantum effects and uses them to drive larger processes. It's definitely physically possible - for instance, you could construct a double slit splitter that chooses 0 or 1 using the measurement of the direction a photon takes, and decide whether a train goes left or right based on this. You would be making huge macroscopic changes based on minute quantum effects. Perhaps the brain doesn't work this way, but the point is that it is entirely possible, and not ignorant pseudoscience.
In my experience, such emphasis on quantum phenomena tends to be the motivated reasoning of free will compatibilists: they hope to preserve the intuitive idea of free will by looking at what corners of non-determinism still remain in modern physics, emphasising any possible connection to the macro-scale mind, however tenuous. It's analogous to the God of the Gaps argument. [0]
Even assuming quantum phenomena are central to the brain's workings, the arguments still fail: you have no control over the quantum phenomena in your brain, so you still don't have free will in the naïve intuitive sense.
I see even less connection between quantum phenomena and consciousness. Consciousness need not depend on free will.
I agree that some people think this way, but not I. I don't even believe there is a "you" to have free will. Who has control over anything in their brain? They are their brain.
I'm just thinking that perhaps there is a possibility that the brain evolved to take advantage of quantum phenomena to increase capability.
That would assume that randomness is the only quantum property the brain would use. The possibility of quantum computation demonstrates that there is more that could be possible.
Your earlier comment certainly appeared to be suggesting the brain uses quantum phenomena as an RNG. Are you instead suggesting the brain is a quantum computer?
The purpose of the example in my first comment was to show that quantum phenomena can influence macroscopic phenomena, not as an example of an RNG. It was probably a bad choice as everyone took it to mean an RNG. Quantum phenomena are not needed for this.
Does it have to be random? There must be some value besides randomness that can be derived from quantum effects, else why would all these tech companies invest billions into quantum computing?
Perhaps your bar for "meaningful" is calibrated too high. Perhaps a dog-level of self-reflection is still quite a big deal, when compared to say a fish or whatnot.
Anyway, can you elaborate on why you think dogs are "clearly" conscious?
> Perhaps a dog-level of self-reflection is still quite a big deal, when compared to say a fish or whatnot.
Fish are probably conscious.
> Anyway, can you elaborate on why you think dogs are "clearly" conscious?
Unless you think it's ok to torture dogs, you already agree with me that dogs are conscious.
Or, a more philosophical response:
1. I am conscious
2. My consciousness is, broadly speaking, seated in my brain
3. A dog has a brain broadly similar to my own
4. Dogs respond to stimuli in a way similar to the way I do. Both of us appear to be able to respond to painful stimuli, for instance.
5. It therefore seems reasonable to conclude that the dog is capable of subjective experience such as suffering, i.e. dogs are conscious.
Of course, dogs appear to be much less intelligent, and it doesn't seem justified to assume that dog consciousness is on the same 'level' as human consciousness, whatever that might mean.
This is broadly similar to the argument that anyone who wishes to argue for solipsism needs to explain why they, one human being among billions, are the only one with consciousness.
It takes more than that, I think; I don't think the adaptive digital filters I used to work with long ago were conscious, yet they work by predicting future values of a signal and continuously adjusting their parameters to minimize the mean square error (or some other error measure).
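For the curious, here's a minimal LMS-style adaptive predictor of the kind I mean (a toy sketch, not the actual filters I worked with): it predicts the next sample from the last few and nudges its coefficients to shrink the squared error.

```python
import math

# LMS adaptive filter: predict the next sample of a signal from the last
# 4 samples, adjusting the taps after each prediction to reduce error.
taps = [0.0] * 4          # filter coefficients, start knowing nothing
history = [0.0] * 4       # the last 4 samples seen
mu = 0.05                 # learning rate (step size)

signal = [math.sin(0.3 * n) for n in range(500)]
errors = []
for x in signal:
    prediction = sum(t * h for t, h in zip(taps, history))
    error = x - prediction
    taps = [t + mu * error * h for t, h in zip(taps, history)]  # LMS update
    history = [x] + history[:-1]
    errors.append(error ** 2)

# Mean squared error early vs. late: the filter "learns" the sinusoid.
print(sum(errors[:50]) / 50, sum(errors[-50:]) / 50)
```

It predicts the future and continuously corrects itself from its own mistakes, yet nobody would call a few multiply-accumulates conscious - which is why prediction alone can't be the whole story.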
Most of the prediction that's going on in the brain seems to be at an unconscious level, and our conscious mind only seems to be alerted when something unexpected happens, like when you're walking along and suddenly step into a shallow hole you didn't notice: your brain was unconsciously predicting when your foot would hit the ground and something else happened.
Re b) “.. maybe you need more novelty” - do you have kids? Time flies when you have to focus most of your effort on someone other than yourself, even if every day is something new.
Yeah, the "repetition is what makes time go faster" thing never made much sense to me. Few parts of my life have involved more day-to-day sameness and as rigid a schedule as high school. Should have felt fast, right? Nope, felt slow as hell. The less-repetitive summers? Felt about the same speed (so, seemed nice and long—now I blink and it's like "oh shit, the leaves are falling? I meant to plant some tomatoes...").
Meanwhile, the very-novel first few years with kids seemed to go by in a flash. A month feels about a week long to me, now, and has for years.
For me the effect seems disconnected from circumstances, but connected to age—years had been feeling subjectively shorter for quite a while before I had kids, even, and they seemed to just be "getting shorter" as I got older, period, no relation to what's going on.
Very good point. It's an important distinction to make and a first step in formally defining what is meant by "consciousness". Some people mean consciousness as in "I become unconscious when I sleep or faint", some as in "I exist, I am", and others as a prerequisite for free will. As with many things, trying to define the concept helps understand it.
Regarding predictions and randomness, it's amazing that when producing music, just injecting some randomness into velocity and timing in a loop (for each note event) completely changes it from being insufferably boring to actually enjoyable, even for extended periods.
The brain apparently can detect when a part is exactly repeated, even a long one, but is thrown off by micro changes.
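The "humanise" trick is literally a couple of lines - here's a toy sketch (the note format and jitter amounts are made up, not any particular DAW's):

```python
import random

# Jitter each note's timing and velocity slightly, so no two passes of
# the loop are bit-identical even though the pattern is the same.
rng = random.Random()
loop = [(0.0, 100), (0.5, 90), (1.0, 100), (1.5, 90)]  # (beat, velocity) pairs

def humanise(notes, timing_jitter=0.02, velocity_jitter=6):
    return [(beat + rng.uniform(-timing_jitter, timing_jitter),
             velocity + rng.randint(-velocity_jitter, velocity_jitter))
            for beat, velocity in notes]

print(humanise(loop))
print(humanise(loop))  # a slightly different "performance" every pass
```

Tiny deviations of a few milliseconds and a few velocity steps are enough to stop the loop from registering as an exact repeat.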
Yes, the theory isn't new, but getting experimental evidence is difficult in neuroscience. Here they used GPT-2 for quantitative predictions, which obviously was not available a decade ago. This way they could extend experiments from highly artificial signals to natural language.
I feel like it is one of the things you learn in an intro psych/neuroscience course. Also that the predictions are a way to deal with timing issues between parts and lag from your limbs and sensory perception? Kind of like the net code for an FPS.
Lying to oneself and exposing the complete truth are two alternative strategies towards survival.
That is why some people have brains that lie to them less than others, while others have brains that lie more. You will find different degrees of bias and delusion in the people you encounter.
Free will lies at the particle level. If atoms are defined by strict physical rules that are deterministic and have no free will, then if our brains are made of the same atoms, does that mean our brains are also defined by the same strict physical rules?
What if atoms are not defined by deterministic rules? What if they're defined by probabilistic rules? Then are our brains bound by the same probabilistic rules?
It seems that given that all brains are made of the same stuff as the universe, the question of free will is more of a physics question. What is the universe made of? And does this fundamental unit allow for free will?
there's also subconscious lying, such as hiding the blind spots where the optic nerve attaches to the retina, turning off our vision when we move our eyes, and toying with our perception of time's passage to make it seem like there was no gap in vision.
Is that one of its right predictions? Or one of the wrong?
A bias toward wrong predictions can be adaptive. If a wild animal comes across something that is 99% safe and 1% fatal, it could well be better off predicting doom and fleeing just in case.
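Toy expected-value arithmetic (the payoff numbers are invented) shows why:

```python
# Investigating something that's 99% safe gains a little (+1, say),
# but the 1% fatal case loses everything (-1000); fleeing costs nothing.
p_fatal = 0.01
ev_investigate = (1 - p_fatal) * 1 + p_fatal * (-1000)
ev_flee = 0.0
print(ev_investigate, ev_flee)  # investigating is a losing bet on average
```

As long as the cost of being wrong is catastrophic enough relative to the gain, "always predict doom" beats the accurate prediction, so selection can favor a systematically wrong predictor.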
Your achievement in getting to this point in life and any continued existence are pretty irrefutable proof your brain is getting it mostly right, no matter how often you feel wrong.
Yeah. If you can find your mouth with a forkful of food then your brain is predicting well. If you can drive a car in traffic without becoming an immediate public hazard then your brain is great at predicting.
Scale and context matter. Its number one job is to predict what actions to take to keep you alive to reproduce and it's done that pretty well. The more abstract you get the less accurate it will be.
It's been a decade+ since I read through it, so my memory is a little fuzzy, but my takeaway was: humans are hardwired for pattern recognition. Pattern identification improves our mental model of the universe. We use that model to predict things about the future. Making accurate predictions about the future is intelligence.
In the 1950s - the same era as Watson and Crick - Nicholas Rashevsky [1] invented Relational Biology, in which he was looking for the mathematics of life itself. His student Robert Rosen [2] went on to pen the book "Life Itself..." in 1989, but not until after he had penned a different book, "Anticipatory Systems".
While looking for the mathematics of life (living things, not machines), several points came to light, one of which is that while machines have the luxury of plans and makers, living things don't. How to resolve the difference led to the thesis that living things are anticipatory - they are driven by predictive "models".
The low level paramecium is able to swim towards "food" by using chemical sensors and measuring gradients; of course, that creature doesn't have anything like a human brain (the topic of this thread), so its models are more innate internal structures; humans have those too, but over time, the adult brain emerges as the supreme modelling agent. It is anticipatory.
What does anticipatory mean?
A child lets go of a heavy object and it smashes the child's toe. Next time, the child won't let go of heavy objects because it learned an important correlation (causality isn't yet in play). Correlation is sufficient to teach that child a new anticipatory rule; simplest form: dropping heavy stuff --> pain.
That applies to reading and to humans as prediction machines. Just try learning a foreign language by reading children's books - the pictures really help. Over time, you don't need them anymore. @hunta20097 on this page says this with different words, but the game is the same: model building and refinement.
There are books 'out there' which try to explain all of this in Bayesian terms, that we are always "updating our priors" as we encounter new experiences. It's way above my pay grade to know if that's right, but it still offers satisfying explanations.
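The simplest version of that updating, for a single belief - here "this berry is safe" after observing no ill effects (the likelihood numbers are made up):

```python
# Bayes' rule for one binary hypothesis: posterior odds from prior odds
# and how likely the observation is under each hypothesis.
def update(prior, likelihood_if_true, likelihood_if_false):
    numerator = likelihood_if_true * prior
    return numerator / (numerator + likelihood_if_false * (1 - prior))

belief = 0.5  # start unsure
for _ in range(3):  # three good experiences in a row
    belief = update(belief, likelihood_if_true=0.9, likelihood_if_false=0.3)
print(belief)  # each experience pushes the "prior" higher
```

Each new experience becomes the prior for the next one, which is the whole "updating our priors" loop in miniature.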
>Just try learning a foreign language by reading children's books
People often trot this strategy out like it's a good idea because children's books are "simple". Have you read children's books? The language in them is actually typically very weird and playful, the situations presented are often strange and nonsensical, and to that end they're not actually all that useful a starting point for a foreign-language learner.
Perhaps. But, as evidence, I have a Chinese friend who bootstrapped his way to learn German when he became an exchange student in Germany, then, when he was doing his residency after graduating med school, he did the same with Spanish so he could treat his Hispanic patients in their language. He used children's books to kickstart, then graduated to advanced books, always with a dictionary at hand.
We do not perceive anything directly. Our perceptions are constructs tuned by sensor sample inputs. But we can only sense small amounts at once and must create a simulation for it to make any sense at all.
But there is a delay in sensor data, and if we waited before trying to finalize the constructs then we would always be behind and not be able to dodge falling rocks or anything. So the simulation must not only integrate over samples and time but also be predictive.
Given that the universe is eventually inconsistent, your assertion that human observers are certainty maximizers is interesting as a polar force to eventual inconsistency.
Throw the idea of entropy into the mix and you have some interesting comparisons for fun.
>Given that the universe is eventually inconsistent,
I don't think this has been observed to be true yet. At least not true within a framework of very specific theoretical rules. Otherwise all our science and logic is useless.
> it is difficult to imagine that even a simple dream is possible without something like an ego or an “I” experiencing it, he adds. So if spiders dream, “it might mean that we start talking about spiders having something like a minimal self”
Why would having a self not be a binary feature? If the theory is that a human's self is less minimal because the human understands itself more richly, that seems like two features are being conflated.
Why would it be binary? Few natural things are. E.g., living/non-living is an obvious binary that turns out to not be so clear once you look at the details. E.g., whether viruses are alive. Ditto all of the theories about life's origins: https://www.quantamagazine.org/a-biochemists-view-of-lifes-o...
> Your brain compresses (normalises?) repetition in memories
No... I think this phenomenon is related to the fact that our brain is indeed a prediction machine. When a prediction is wrong (i.e. we encounter something surprising/unexpected), our brain updates its prediction based on reality (exactly like training a "predict next word" language/sequence model like GPT-3).
As we get older, with more and more life experience, we don't get surprised so often (been there, done that - know how it's going to turn out), so our brain makes less and less prediction mistakes and therefore prediction-failure memory updates.
Of course we have multiple types of memory - short term vs long term, etc, so these prediction-failure updates are only part of the picture, but I think this is still the core reason why "the years fly by" when you're just doing more of the same. If you want to lay down more memories then novelty should help, but it's really surprise you'd want to maximize... no good being novel if you can already anticipate/predict how it's going to go.
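A toy sketch of the surprise-gated memory idea (the threshold, "events", and predictor are invented for illustration):

```python
# Lay down a "memory" only when prediction error (surprise) crosses a
# threshold. A well-trained predictor stores fewer memories for the
# same stretch of wallclock time.
def memories_laid_down(events, predict, threshold=0.5):
    stored = []
    expectation = predict()
    for event in events:
        if abs(event - expectation) > threshold:  # surprising -> remember it
            stored.append(event)
    return stored

def been_there_done_that():
    return 1.0  # an experienced brain expects the usual

routine_day = [1.0, 1.1, 0.9, 1.0, 1.05]   # everything as expected
novel_day = [1.0, 3.0, -2.0, 5.0, 1.0]     # full of surprises
print(len(memories_laid_down(routine_day, been_there_done_that)))
print(len(memories_laid_down(novel_day, been_there_done_that)))
```

Looking back over the "routine" stretch you'd find almost nothing stored, so in retrospect it seems like barely any time passed - even though both stretches took the same wallclock time.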
> "The final non-predictive variable was semantic congruency or integration difficulty. This speaks to the debate whether effects of predictability reflect prediction or rather post-hoc effects arising when integrating a word into the semantic context. This can be illustrated by considering a constraining context (‘coffee with milk and … ‘). When we contrast a highly expected word (‘sugar’) and an unexpected word (e.g. ‘dog’), the unexpected word is not just less likely, but also semantically incongruous in the prior context."
Seems like reading a string of random words vs. reading a text the listener was very familiar with, or had even memorized previously, would have been the obvious test to see if there was a clear difference. Perhaps taking the familiar text and swapping some words, too.
Otherwise, the conclusion is that people anticipate the familiar and get surprised when the familiar behaves abnormally?
> PNAS... kind of an odd little club, that journal.
Ironically, you're kind of illustrating the point made in the text you quote. You expect things to be written in a style and jargon familiar to you. That was written in a style and jargon from a different specialty, thus it's unfamiliar. Often these seemingly-odd phrasings are the result of authors trying to be very precise about what they are or are not claiming - especially they are not making the more general, ambiguous, and seemingly obvious claim. They use terms that have very specific meanings in their own field for that purpose. But instead of considering that, you just cast aspersions on the entire organization or specialty. Congratulations, you've done yeoman's work for the Anti Science Brigade.
The National Academy of Sciences is a rather odd club (notably Feynman refused membership and there's a general suspicion they spend most of their time deciding who else to elect to the membership). That's not 'anti-science', although it might be 'anti-august-institution'. In particular don't members get to skip the normal peer review process with their publications?
See:
Dear Prof. Handler:
My request for resignation from the National Academy of Sciences is based entirely on personal psychological quirks. It represents in no way any implied or explicit criticism of the Academy, other than those characteristics that flow from the fact that most of the membership consider their installation as a significant honor.
Sincerely yours,
Richard P. Feynman
If you want a deeper dive on this concept, read Ray Kurzweil's How to Create a Mind. Ray pioneered a lot of the concepts behind speech recognition, and his book is a fun read that greatly expands on these ideas, as well as Ray's thoughts on how to build AGI.
Not to discount the research or its usefulness, but I feel uncomfortable with sterile language that, to me, implies that humans are nothing more than accidental automatons that can eventually be replicated and replaced by AI.
> Not to discount the research or its usefulness, but I feel uncomfortable with sterile language that, to me, implies that humans are nothing more than accidental automatons that can eventually be replicated and replaced by AI.
>
> Aren't we more than that?
Are we more than that? What evidence do you have, aside from "God loves us"?
You are carrying around an amazing machine in your noggin, but I haven't seen any convincing evidence that it can't be duplicated. As far into it as we have seen, it is all laws of physics, chemistry on top of that, and biology on top of that.
We definitely haven't cracked the "self learning algorithm" yet. You can show a person a couple pictures of a dog, and they can construct a 3-D mental model of a dog, and what it looks like from all angles and in all poses. Yet you have to give thousands of examples to do the same thing for a ML/DL system.
How much of the ability of those models is baked into the genetic code of humans? How much is learned as a toddler playing with blocks to figure out basic physical properties like object permanence, gravity, friction, and such? By just watching and interacting?
> You can show a person a couple pictures of a dog, and they can construct a 3-D mental model of a dog, and what it looks like from all angles and in all poses. Yet you have to give thousands of examples to do the same thing for a ML/DL system.
By the time a human brain can do this, aren't they also exposed to thousands of examples? How do we learn to speak for example? It's a long painful process of trial and error. How is this significantly different from a supervised learning algorithm? Teaching a child to catch is very similar. At first, they can't recognize the trajectories of thrown objects very well and they aren't coordinated enough to move their hands to where the projectile is. After time and many repetitions, they get to the point where they can catch a ball without consciously following the ball or can even catch without looking based on initial trajectory.
What makes you so sure an "artifical" system has no qualia?
Do you find that hard to believe? Why?
Why could having qualia not be some kind of inherent, perhaps emergent property of all systems complex enough to be called intelligent? And why is this an issue at all? This could be some unknowable mysterious meta-physical property that has no practical bearing on anything.
What is the angle here? Are you worried it is important? There is literally zero indication it is important or am I missing something here?
Personally, the more I learn about the intelligence of myself and others, the less I believe that we are more than a bunch of pattern recognition layers.
It used to haunt me. I have come to acceptance. Of course I could be wrong, but that’s my intuition at this point.
Do you experience subjective existence or are you a non-conscious entity? If you are a set of pattern recognition layers responding to stimuli, then I would not be so shocked you are unconcerned about the hard problem of consciousness.
I have basically fallen to the side of the Copernican principle on this.
Consciousness feels like it may at least partially be our fancy word for “soul” in the 21st century.
Measures of basic intelligence appear to us in crazy places in the animal kingdom. We will find it is not special at all.
I believe we are nothing but a bunch of chemical states, the patterns and flow of which could be replicated in another system, natural or artificial.
On this planet, we just have the fastest, most complex and efficient processor going at this time.
Edit: I should add there may be more to human intelligence than "just" pattern recognition, but that seems to do the bulk of the work.
> On this planet, we just have the fastest, most complex and efficient processor going at this time.
Not necessarily. We haven't really figured out how to benchmark other species' brains properly, and I wouldn't be surprised if individuals of some species, like bottlenose dolphins or elephants, could rival at least some individual humans.
> If you are a set of pattern recognition layers responding to stimuli, then I would not be so shocked you are unconcerned about the hard problem of consciousness.
I am as unconcerned about the hard problem as I am about elan vital, because there is just as much "compelling" evidence for either.
Perceptions taken at face value are not evidence. Evidence is an observation interpreted within a consistent body of knowledge. Our consistent body of knowledge is inconsistent with the naive perceptions of qualia.
You can take the position that qualia are direct evidence and so our body of knowledge is mistaken, or you can take the position that qualia are likely deceptive and so cannot be taken as direct evidence, just like all of our other perceptions.
I'm constantly amazed how many people think the former position is more plausible.
I walk around on legs. I can’t be sure if I am moving because of the legs or through some other action. Some people who dislike the idea of legs are telling me I don’t really have legs.
If you also claim that your legs let you walk on water and sometimes even walk off a cliff without falling, then I'd agree your analogy is in the right ballpark.
So then we agree, you are saying your legs exist and are magical and therefore our understanding of the world must be wrong, and I am saying that our understanding of the world implies that magic doesn't exist and so you must be mistaken about the magic of your legs.
No, I am saying the analogy is that my legs exist and may or may not be magical, and therefore our understanding of the world must be incomplete because it has no explanation for legs in the first place, magical or otherwise; you are saying that our understanding of the world implies that magic doesn't exist, and so I must be mistaken about the magic of my legs, and probably their existence as well.
Saying "our understanding of the world is incomplete because it has no explanation for qualia" is a classic, fallacious god of the gaps argument.
In fact we do have the beginnings of viable neuroscientific theories of consciousness, including qualia [1], thus showing that the claims of a hard problem were just the same old smoke and mirrors we've seen time and time again in claims to human specialness, just like with vitalism.
> and you are saying that our understanding of the world implies that magic doesn’t exist and so I must be mistaken about the magic of my legs and probably their existence as well.
Nah, our only disagreement is over the magic. I don't deny the perceptions of qualia, simply their special, "ineffable", "experiential" character that you implicitly conclude they must have. That's the magic that doesn't fit into our body of knowledge and that's the only "hard problem" of consciousness, a problem that doesn't really exist because it's based on erroneous assumptions, as described in the paper above.
Just like placing a pencil in a glass of water gives the illusion that the water has broken the pencil, so your introspective perceptions trick you into erroneous conclusions about your awareness.
But what exactly is the erroneous conclusion here? It clearly exists widely and many people report having it. We don’t know how it works or its mechanism. We have no idea how to create more of it outside of having babies. We have no idea if animals have it. A recently dismissed Google employee was unable to prove to himself that a chatbot didn’t have it. A physical explanation would require completely new science as science has almost no existing concept of the phenomenon.
> But what exactly is the erroneous conclusion here? [...] A physical explanation would require completely new science as science has almost no existing concept of the phenomenon.
That is the erroneous conclusion. "Qualia", whatever they are, fits squarely into physicalism and there exists literally no evidence that anything more is required. The paper I linked is proof that qualia can be accounted for within science as it exists. Again, you're asserting a god of the gaps.
> It clearly exists widely and many people report having it.
That doesn't mean that water actually breaks pencils. The meaning of any observation is always interpreted within a consistent body of knowledge. The problem is that people really want to take the perception of their experience as a direct observation of reality that doesn't need to go through that interpretation step, and that's simply incorrect.
What AI doesn't have is access to this world. Instead it has a fixed training set. The world is a much richer, dynamic training set. So AI would need an android body and to become part of human society to experience the real world as we do.
In my view human intelligence is being cracked open like a nut as we speak. It's a matter of decades - honestly, I think years - before all kinds of shit will hit all kinds of fans. However, irrespective of the societal impact, intelligence is very interesting and there is a lot to be gained from decoupling it from the human form.
I've handled this by looking at the (mis)usage of language. "Automatons", "pattern recognition", "statistics". These are words. And they all carry with them a palpable sense of dismissal, of a certain arrogance that I think is misplaced. Note: If you use these words fully egoless and objectively, great, that is excellent, but I don't think most do.
If you'd start to regard intelligence in its abstract, non-material form as a "divine" gift to all "living" beings, the picture starts to change. There is a very real, very deep sense of wonder to what is happening that gets lost whenever you call intelligence - biological or digital - "just pattern recognition" or something to that effect. You reduce the wonder that is intelligence to something you hold in your head as simple and reductionist. It is in that very moment that you diminish it and make it uninteresting. It's like calling human embryos just a bunch of cells. It's not wrong per se, but it's said in a way that reduces the complexity of the system to ridiculous levels. Even without invoking metaphysical/religious beliefs, you can imagine an embryo being a highly complex physical system with astronomical levels of emergent properties that are effectively incalculable - for us at the moment, if not at a fundamental mathematical level.
If you look at machine intelligence like perhaps a small child would, full of awe and wonder, I bet your outlook on things can change. Humans are carriers of this "intelligence", they are not optimal and they are not alone. Who cares? Do we need to be the sole proprietors of intelligence in all its forms and shapes in order to be happy? I say enjoy the ride.
I don't think so. I think that we're a brew made of very simple ingredients that results in something that strives to live and reproduce. That's the main missing ingredient in AI for me; desire, or preferences. We just show them a bunch of things in turn, and tell them if they're right or wrong with some reduction of those things. They don't even desire our approval, we've built systems that process, they don't desire.
I think it's because we've nearly figured out cognition (which is ultimately simple), but not the desire to live or to reproduce (which as people we half the time think of as sinful desires), which we probably have deeply hardcoded within ourselves. The desire to emulate others derives from those more fundamental desires (without which we each wouldn't have existed in an unbroken lineage from the first cell), because we can look at others as experiments to model ourselves after, or when we see them fail, make guesses as to how they may have been mistaken in their choices.
We're physical things. The really mysterious part of our situation imo is how our perspective (for lack of a better word) attaches to our bodies (and minds.) Why am I me and not you? Why am I physically located in this mind? Would a perspective attach to a machine with cognition and desires? Would there be any physical way to prove that one hadn't, so would we morally just have to accept a machine's experience as similar to ours? We can't prove other people aren't machines, so we (generally, with extreme difficulty with people very unlike us) just take them at their word.
That's actually a reason I'm against full AI, because it involves creating a bunch of machines that we wouldn't have the moral authority to shut down. Especially if we haven't refined the technology and it's just as power-inefficient as what we have now. Compare spiking neural networks, where electricity isn't used for the signal, just as a potential across the cell membrane, and the spike is a little cough of neurotransmitter that is immediately sucked back in. That's just unbelievably low power (even though IIRC it's still half or more of the calories you eat). And brains are so space-efficient. I could cram 100 people in a room with one AI, and the 100 people would consume fewer calories.
We're accidental automatons that are very cool and very difficult to reproduce, and still very mysterious in a fundamental way.
I'm "getting" to watch a couple close relatives go through severe cognitive decline.
From what I've seen: no, we're not.
They frequently drop the last few minutes of memories on the floor and you can watch them reset, ask the same questions, react the same way, all in precisely the same tone as the first time. Over, and over, and over. We're little more than spinning tops—they've a condition that makes that plain, but I don't see why the rest of us would be any different. We just retain and re-apply more of our input, than they do, so it's less obvious that "the lights are on, but nobody's home".
There is no behavior in our universe that exists outside the laws of physics. So whatever we are, it is a product of our physical form. If that's true, then we can build something else that also exhibits our same behavior but is a purely non-biological organism.
You've made the assumption that the laws of physics allow for something that also exhibits our same behavior but in a purely non-biological organism. I see no reason why this must be so. The laws of physics are not entirely understood and neither is the brain.
Even granting your premise that the laws of physics allow for such a thing, you've also made the assumption that humans are intelligent enough to "build something else that also exhibits our same behavior but is a purely non-biological organism". You presumably don't think a dog could invent such a thing, but are assuming evolution has given humanity the ability to do so.
Agreed. In a sense, that is kind of my point: Can physicists explain in extreme detail how the mind physically implements omniscience (instances of which are on display in this thread)?
Your “laws of physics” are just a verbal representation of what has been observable to you. It’s not a fundamental property in the deepest sense and there’s no guarantee that they may not change.
> It's not a fundamental property in the deepest sense
What do you mean by "in the deepest sense"? If the actual laws of physics were to change, then our world would instantly change - perhaps in such a way that we no longer can exist. If our understanding of the laws of physics were to change, that doesn't undo what we already know. For example, apples won't start falling up from trees. Greater understanding of the laws of physics refines our knowledge, for example what, exactly, happens inside a black hole? As far as the physics governing our daily lives and even our biology, it appears to be rather mundane and well-known. It's not apparent that resolving QFT with GR is going to answer the question for how life arises.
I think it therefore stands that we could, in principle, build something equivalent to a human being - I just see no reason why we would choose to do so. After all, we're already pretty good at building new human beings!
Can you solve a system of coupled Schrödinger equations and know what my wife will want for dinner? I’m only partially joking, of course. What I mean is that physics conveniently formalizes what is most conveniently formalized, thus giving an impression of all-encompassing knowledge and predictive ability, when in fact it covers only a small fraction of what is. And I’m saying this as a PhD in chemistry.
Do you believe consciousness doesn't fall under the laws of physics or are you saying the laws of physics as currently known don't yet go far enough to explain the working of consciousness? You probably know Roger Penrose and Douglas Hofstadter have been going round and round on this issue, in a good-natured manner, for decades now. Penrose is convinced QM is required to explain consciousness, whereas Hofstadter maintains consciousness can be modeled with mathematics. It really boils down to can you create an AI that's conscious (which Hofstadter believes) or do you need a physical structure like the brain (which Penrose believes). Neither one is arguing though that consciousness isn't subject to the laws of physics, one just believes the laws of physics as currently known are insufficient to explain consciousness. That appears to be the camp you're in?
Flip it around. Are you saying that you'd be comfortable with the idea that human behavior is completely and utterly random and unpredictable? I'd like to believe that every person has the ability to steer their own future (within the extent that their circumstances allow) and those of their families, and that requires a certain level of pattern recognition and predictability for that to succeed.
And anything that can be calculated by a human can possibility be calculated by an AI as well.
If someone asks you if you want vanilla or chocolate ice cream, once you have made a choice, even dualists would agree it is just a dance of chemistry that sequences a series of neural firings which propagate, excite muscle contractions, resulting in the verbalization of "chocolate". That is, the soul isn't acting at the level of muscles or plucking the strings of individual neurons to make them fire.
Instead, dualists have a vague assertion that the soul influences (perhaps determines) the choice itself. Getting to my point: if you believe that the soul drives the choice, and that the decision wasn't just the result of billions of (purely physical) neural antecedents, where in your brain are the rules of physics violated in order to steer that neural outcome we call "choice"?
One hand wave is to say: quantum! Yes, quantum effects are not individually predictable, so one could imagine that a tricky soul which is trying to not be discovered somehow causes just the right statistical fluctuation in the right neurons at just the right time to nudge the outcome to what it desires. But consider that the activation energy of a synapse is about 30 orders of magnitude greater than quantum fluctuations at the atomic level. Or put another way, not every butterfly flapping its wings causes a hurricane. Such things happen only if the interconnected decision points are on a knife's edge. It seems wildly overconstrained to think that our brain configuration must be so delicately balanced for every decision we make, just to allow a soul the opportunity to effect every nudge that is required of it.
Taking a step away from humans, spiders have pinhead "brains". They can navigate the world, choose a place to build a web, build a web, catch prey, avoid harm, repair their web, mate, reproduce. I think few dualists would credit spiders with having a soul. Why is it surprising that a brain a billion times bigger is capable of making music?
Nope - the material universe seems to be able to achieve unimaginably spectacular feats (e.g. us) within the confines of physics, chemistry, and biology. I won't be surprised if such creativity will find a way to port "consciousness" across substrates (from meat to metal).
But I don't see this fact as dismissive of humanity's grandeur - the idea that we were placed here and made conscious by something outside of creation (quite literally a deus ex machina) is clunky. How much more elegant is it that we're instead of a kind with the rest of the universe around us and a link in that spectacle?
It's easier to start tackling the question when you make the distinction between mind and consciousness as other commenters pointed out in other threads. Let's just assume some very basic definitions:
- mind = memories, personality, aspirations, knowledge, pattern recognition, etc...
- consciousness = this unmistakable, unshakable certainty that "I exist, I am" that we all share (unless solipsism is true and you are a philosophical zombie :) )
To me it is then evident that the answer to your question is: we (and any sentient being, for that matter) are more than any AI will ever be.
I wonder what the venn diagram looks like between “people who are responding to this in the negative” vs “people who believe human rights are real and serious.”
I’m not trying to be flippant here, I just really don’t believe that people think through the consequences of their beliefs very well.
In case you’re interested: in the 20th century, James, in his pragmatism, essentially did this for all of philosophy. He divided philosophy into two main types, roughly the pessimist and the optimist, or the tender-minded method and the ____ method (forgot the second), and said all of philosophy is due to these personalities. Dewey made this Venn diagram.
I don't think humans are particularly special, and I don't believe in human rights. At least in the sense that there is some fundamental guarantee to liberty, property, or whatever.
Nature is cruel, and if given half a chance, will for example snatch the life from a 2-month-old baby. That baby did nothing wrong, but did not have the "human right" to life and liberty, just a hope for it.
We can still institute rules we agree to abide by, in hopes that our lives will be better overall. But that's just self-interested agreement, not some immutable "right" inherently bestowed upon us.
> I don't think humans are particularly special, and I don't believe in human rights.
I don't believe humans are particularly special, but "human rights" are nothing more than what societies deem as requirements for leading fulfilling lives, and the degree to which we have those rights is determined solely by the willingness of society to protect them. There are no "natural rights", only rights that we recognize and implement for ourselves at a societal level. There is nothing stopping a society from making access to something arbitrary a "human right" - like the "right to be delivered a new blue hoodie every winter solstice" - and then protecting and enforcing that right. If a society decides that humans have no rights, then you'd be correct that there are no human rights, which is what you may experience in some parts of the world.
When you sum all the accumulated knowledge about the world and the universe, it seems to me that us being nothing more than accidental automatons is the most likely scenario.
All animal brains (including ours) are first and foremost great signal processors, tuned to surface the particular reality of nature that helps life.
Life is the evolution of (increasingly) complex adaptive systems in a non-equilibrium state, driven by external energy and enabling a new regime of faster entropy increase, thus a faster descent towards the "heat death". This is contrary to the popular belief that life somehow goes against the 2nd law of thermodynamics. In fact, it is the 2nd law of thermodynamics, the universal drive towards higher-entropy states (we are essentially specks of dust in a slow-motion explosion, since the Big Bang), that enables life-like systems to evolve under the right conditions to accelerate the slide towards a higher-entropy state.
Specifically in case of carbon-based biological life, one succinct tongue-in-cheek way to put it is "life's purpose is to hydrogenate carbon dioxide". See the "Nick Lane" discussion thread from today: https://news.ycombinator.com/item?id=32392937
Unless one believes in supernatural creators, observers and judges, there is no point in anything, and existence, life and consciousness are all unlucky coincidences. We are an absolutely pointless long-term consequence of the natural process of evolution, which can start any time the environment for life is just right.
They are unprovable but irrefutable delusions, or parts of frameworks people made up to make peace with the truth.
Example frameworks would be pitiful reasons to exist, like "keep the species alive", "keep the family line going", "I need to behave this and that way because my deity will judge me when I die", etc.
By all means, don't follow "our scriptures", whatever those may be, but accept that we consider anything anyone would call "faith" and belief in the super natural somewhere on the scale between magical thinking and mental illness.
If only people were so forgiving when it's time to walk the talk.
Regardless, this (particularly the conceptualization of the supernatural) is somewhat of a variation of the kind of thinking that I'm referring to. I agree that it isn't formally documented in scripture, but nonetheless it seems to have somehow ended up broadly installed in minds (both theist and atheist, to be fair), not unlike how various behaviors can be observed in theists despite not being part of formal scripture.
Atheists don't have scriptures. Religious people love thinking about atheism as just another religion, and will bring up examples like Stalin and Mao, claiming they committed atrocities in the name of that religion.
In reality atheists don't have anything in common with one another. There is no shared belief system or a set of scriptures for atheists. Stalin and Mao did however have a shared belief system and it wasn't atheism.
I see the brain as a vast hashmap with excellent lookup capabilities.
Somehow, the prediction thing is not that surprising to me. We call that judging, in mortal terms! Judging is generally perceived as a negative thing, but I'd think judging is what the brain does implicitly. Our personality is a culmination of the memories experienced so far. If you ran these memories through a brand-new brain, you'd basically create your own version.
I have no idea when it happened, but recently I have been explaining to my girlfriend that we are all Bayesian machines: we are constantly working with priors and posteriors. While our posteriors update rather slowly at the subconscious level, I often ask her to try to consciously update our posteriors, so we can (re)evaluate our beliefs sooner based on our observations.
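The conscious updating described above is just Bayes' rule applied to a belief. A minimal sketch, assuming a single binary hypothesis (the function name and the numbers are illustrative, not from any particular source):

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return the posterior P(H | evidence) given the prior P(H) and
    the probability of the observed evidence under H and under not-H."""
    numerator = prior * likelihood_if_true
    denominator = numerator + (1 - prior) * likelihood_if_false
    return numerator / denominator

# Start with a weak prior belief of 0.3; observe evidence that is
# four times more likely if the belief is true than if it is false.
posterior = bayes_update(0.3, 0.8, 0.2)  # ≈ 0.63
```

Running the update repeatedly on fresh observations is the slow subconscious drift; doing it deliberately after a single strong observation is the conscious re-evaluation being suggested.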
A Thousand Brains by Jeff Hawkins captures some of these observations, and whether the theory is correct or not, there was definitely some resonance with how I feel like my brain might work. Things like anticipating how much something weighs before picking it up, or thinking you're about to drink water but its actually milk, etc...
Yes, they're very similar. Brain is trying to decipher meaning from a noisy and incomplete world.
I've actually wondered if intelligence could be defined through information theory, i.e. intelligence would be the inverse of error rate. Super-intelligence could operate at the Shannon limit.
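One toy way to make "intelligence as the inverse of error rate" concrete is the capacity of a binary symmetric channel, which is exactly the Shannon limit for that channel and drops to zero as the error rate approaches coin-flipping. This is only an illustrative sketch of the idea, not a claim about brains:

```python
import math

def binary_entropy(p):
    """Entropy in bits of a Bernoulli(p) variable."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(error_rate):
    """Shannon capacity of a binary symmetric channel: C = 1 - H(p).
    A perfectly reliable channel (error_rate 0) carries 1 bit per use;
    a channel that errs half the time carries nothing."""
    return 1.0 - binary_entropy(error_rate)
```

Under this metaphor, a "super-intelligence operating at the Shannon limit" would be a system whose inferences waste none of the information its noisy inputs actually carry.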
I'm glad this just a metaphor about discrimination of signal from noise.
When the idea of consciousness comes up, too often I've heard the exact claim that consciousness cannot be from the brain. They assert the brain is just the receiver of the information coming from the soul. When asking why damage to a particular part of the brain usually results in a predictable deficit, they confidently reply that is also true of a radio receiver -- the non-physical soul is intact but the brain just garbles the message.
The large language models are very good at prediction. We are making good progress getting them to reason. We need to figure out how to get them to use an internal "blackboard".
The Chinese room thought experiment, maybe. It says a human does not learn language purely formally (the man inside the room doesn’t come to understand Chinese the way a native speaker does by shuffling formal syntax around), whereas programs are purely formal.
So there must be something extra non-formal for consciousness, and the meager electronics of modern computers plainly don’t seem it.
>So there must be something extra non-formal for consciousness, and the meager electronics of modern computers plainly don’t seem it.
But the neurons of a brain do? Don't get me wrong, I like the theory that microtubules harness quantum weirdness to do something hard-problem of consciousness adjacent. But I would bet that if we were silicon-based lifeforms experimenting with carbon adding machines we'd say they plainly don't have the stuff for consciousness either. There's just no consensus theory[1] explaining how consciousness arises.
[1] though for my part I'd bet Tononi's model of information integration combined with quantum bayesianism and long relaxation times is on the right track.
How does the sense of the ridiculous or sarcasm fit into the theory of the brain as a conscious prediction machine? Is a sense of humor necessary with AI?
I think a lot of humour works by establishing a recognisable pattern and then providing a punchline that is not the expected continuation of that pattern.
TLDR: the fMRI brain image lights up a bit more when the person encounters something unexpected. Yet another stunner from "neuroscience" using up thousands of dollars of funding to tell us something we either already knew, or that is entirely irrelevant.
The reason why it can be valuable to research common sense is that every now and then, we discover that common sense is in fact wrong. Those are the discoveries that lead to paradigm shifts.
When the common sense is just a truism, it can't lead to paradigm shifts, by definition, because it's a truism. "Something different happens in the brain when unexpected things happen" is common sense in this case. Seeing a picture of the difference in an fMRI machine (for the 100th time over several decades) doesn't add anything to our understanding, wastes money that could be better spent elsewhere, and wastes valuable time.
++ Both great links. I've been aware of and impressed by Hawkins' book(s) for years, but now that I am reading Surfing Uncertainty I am blown away by how developed research supporting a predictive brain model appears. Perhaps I missed something, but in Hawkins' more recent Thousand Brains I do not remember him giving prominent notice to Surfing Uncertainty and related research, which would be a great disservice.
I noticed that your only comments are 4 that are almost identical to the above. Is this because you are truly single-minded, or perhaps this is an AI/ML bot account?
> Dr. Edelman distinguished between primary consciousness, which came first in evolution, and that humans share with other conscious animals, and higher order consciousness, which came to only humans with the acquisition of language.
Interestingly, this is (roughly) the Qur'anic explanation for the genesis of human consciousness.
It makes a statement that can be interpreted that way with some help, sure. Sort of like how the ancient Greeks came up with atoms, evolution, etc. But it was just a vague idea, really, nothing very concrete.
I think language is required for communicative sentience, but not for consciousness - I mean, bees are obviously conscious. Sense of self, doubtful - but certainly present in other higher order vertebrates.
Unconscious intelligence is vastly underrated. I learned to use mine early on. Consciously, I’m kinda average cognition-wise - but I figured out many years ago that if I just go with the first answer that tumbles into my head from seemingly nowhere, it’s usually right, and the moment I start consciously thinking, it goes wrong. It’s like trying to catch yourself looking elsewhere in the mirror - you can’t ask directly, but the computer already has the answer long before your language-driven abstraction layer can grind there.
We don’t judge physicists, biologists, etc by their engineering ability. The theory of the mind will be won by that hypothesis which makes the most accurate predictions. I would also wager that engineering a mind will depend on that theory, not the reverse.