One thing that isn't central at all, but that stood out to me.
"The amygdala appears to do something similar for emotional learning. For example, infants are born with simple versions of a fear response, which is later refined through reinforcement learning."
Positive and negative emotions can be seen as a reward/punishment mechanism - the goal of a reinforcement learning policy. Our brain is able to change this policy (what defines a positive or negative emotion) over time as our emotional intelligence matures. For example, when we are babies, we cry at anything that scares us. As we get older, we mature and the emotional reaction changes automatically: we learn that not everything should scare us. I never realized that the brain (or ULM) can modify everything, including its own policies, in response to external stimulus.
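As a toy illustration (my own sketch, not from the article), the "innate response refined by reinforcement" idea can be written as a value that starts from an innate prior and is pulled toward observed outcomes. All names and numbers here are hypothetical:

```python
# Toy sketch: an innate "fear" value per stimulus, refined by a simple
# reinforcement-style update. Not the article's model - just an illustration.

def refine_fear(fear, experiences, lr=0.2):
    """Move each stimulus's fear value toward the harm actually observed."""
    for stimulus, harm in experiences:
        fear[stimulus] += lr * (harm - fear[stimulus])
    return fear

# Infant starts out afraid of everything (the innate prior).
fear = {"loud_noise": 1.0, "hot_stove": 1.0}

# Repeated experience: loud noises turn out harmless, the stove does not.
history = [("loud_noise", 0.0), ("hot_stove", 1.0)] * 20
refine_fear(fear, history)

print(fear)  # fear of loud noises decays toward 0; fear of the stove persists
```

The point is just that the same update rule both keeps useful innate responses and extinguishes useless ones, depending only on experience.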
> I never realized that the brain (or ULM) can modify everything, including its own policies, in response to external stimulus.
This statement does not make sense. For the brain, learning is the process of modifying policies. It is possible nothing else happens when the brain is learning.
> Additional indirect support comes from the rapid unexpected success of Deep Learning[7], which is entirely based on building AI systems using simple universal learning algorithms... scaled up on fast parallel hardware (GPUs). Deep Learning techniques have quickly come to dominate most of the key AI benchmarks including vision[12], speech recognition[13][14], various natural language tasks, and now even ATARI [15] - proving that simple architectures (priors) combined with universal learning is a path (and perhaps the only viable path) to AGI.
This article presents an emerging architectural hypothesis of the brain as a biological implementation of a Universal Learning Machine.
I looked in the section titled "Universal Learning Machine", I looked at the footnotes (easy, there are none), and I googled and used Google Scholar. I found no coherent definition of Universal Learning Machine.
I mean, the section I mentioned says: "An initial untrained seed ULM can be defined by 1.) a prior over the space of models (or equivalently, programs), 2.) an initial utility function, and 3.) the universal learning machinery/algorithm. The machine is a real-time system that processes an input sensory/observation stream and produces an output motor/action stream to control the external world using a learned internal program that is the result of continuous self-optimization." But it's using other vaguely defined concepts in a fairly vague fashion.
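To make the vagueness concrete, here is the bare skeleton that definition seems to describe, as I read it. Every name below is a placeholder for something the text leaves unspecified:

```python
# One reading of the quoted definition (mine, not the author's): a ULM is a
# loop mapping an observation stream to an action stream while continuously
# re-optimizing its internal program. All components here are placeholders.

class ULM:
    def __init__(self, model_prior, utility, learn):
        self.program = model_prior()   # 1) seed program drawn from the prior over programs
        self.utility = utility         # 2) initial utility function
        self.learn = learn             # 3) the "universal learning algorithm"

    def step(self, observation):
        action = self.program(observation)
        reward = self.utility(observation, action)
        # "continuous self-optimization": the learner rewrites the program
        self.program = self.learn(self.program, observation, action, reward)
        return action
```

Even this skeleton shows where the vagueness lives: everything interesting is hidden inside `learn`, which is exactly the part left undefined.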
What the author is defining is kind of like a Godel Machine [1] or Symbolic Regression[2], to give two more concrete references than I've found in the text (well, I'm only skimming).
The key defining characteristic of a ULM is that it uses its universal learning algorithm for continuous recursive self-improvement with regards to the utility function (reward system).
And there the author gets much more specific, and the claim is much more debatable. Of course, if you leave "continuous" vague, then you have something vague again. If you're loose enough, the brain, by your loose definition, has a utility function. But that can easily be true without being useful. Every macroscopic physical system, at least, can be predicted by solving its Lagrangian, but the existence of many, many intractable macroscopic physical systems just implies many, many unsolvable, unknown, or unknowable Lagrangians.
I think the problem with outlines like this, which I think are somewhat typical of broad-thinkers/amateurs, is not that this is a priori a bad place to start looking at intelligence. It might be useful. But without a lot of concrete research, you wind up with seemingly simple steps like "we just maximize function R" when any known method for such maximization would take longer than the age of the universe (the problem of a Godel Machine). Which, again, isn't necessarily terrible - maybe you have an idea of how to approximately maximize the function much more simply, in much less time. But you should know what you're up against.
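For a sense of scale on the "longer than the age of the universe" point (my own back-of-the-envelope, not a figure from the text): exhaustive search over even short programs blows up immediately.

```python
# Back-of-the-envelope: exhaustively searching all programs of n bits
# at a (generous) 1e9 programs/second. Numbers are rough illustrations.

SECONDS_PER_YEAR = 3.15e7
AGE_OF_UNIVERSE_YEARS = 1.38e10

def brute_force_years(n_bits, rate=1e9):
    """Years to enumerate every n-bit program at `rate` programs/sec."""
    return 2 ** n_bits / rate / SECONDS_PER_YEAR

print(brute_force_years(100))  # ~4e13 years for 100-bit programs
```

So already at 100 bits - far too small for anything brain-like - brute force exceeds the age of the universe by three orders of magnitude, which is why "just maximize R" hides all the difficulty.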
I present a rough but complete architectural view of how the brain works under the universal learning hypothesis.
Keep in mind that to claim a rough outline of how the brain operates is to claim more than the illustrious neuroscientists of today would claim.
Great, I'd like to ask you some questions, as most talk I've heard along these lines is beyond vague. It'd be great if you could clarify some questions I have about the idea. My questions might be so off-base from your mental model of how things work that they may seem ridiculous, but that would stem from me never having heard more than vague hand-waves about "radio receiver" brains and such.
#1: What is the division of labor between the physical mind (PM) and the non-physical mind (NPM)? Eg, is the NPM doing all the thinking, and the PM is just carrying out the instructions? Or does the PM do some share of the work and the NPM just nudges it when need be, like making free will decisions?
#2: What is the NPM doing while the PM is sleeping? There is some metabolic reason for the mind to sleep 1/3 of the time, but presumably the NPM has no such need. Is it still thinking all that time, or does it sleep too?
#3: When the PM is damaged in specific ways, perhaps catastrophically, what do you think the NPM is doing? Does it get frustrated that the PM can no longer receive the full message? For example, in the case of an Alzheimers patient.
#4: By what mechanism does the NPM communicate its thoughts/wishes to the PM? Does it incur a violation of the physical laws in the PM?
#5: Likewise to #4, how does the PM communicate to the NPM so the NPM knows what is going on?
Because written communication is ambiguous, I'll explicitly state these are sincere questions.
If you’re interested in learning about other metaphysical possibilities, George Berkeley’s works could be worth a read. One of his big statements is that “to be is to be perceived” — that is, anything that cannot be perceived doesn’t actually exist. Berkeley makes a pretty decent argument for this and his work is fairly influential in the realm of metaphysics (UC Berkeley is named after him).
I interpreted the original comment to mean that the mind is necessary for reality (i.e. there is no meaningful reality outside the mind), which is very close to what Berkeley is gesturing at.
Either way, the questions you’re asking (which all themselves presuppose a certain metaphysical interpretation of the world) have interesting implications (it seems like what you’re driving at is similar to the mind-body problem).
> One of his big statements is that “to be is to be perceived” — that is, anything that cannot be perceived doesn’t actually exist. Berkeley makes a pretty decent argument for this...
Do you know of where a person could read a summary of his argument?
The original text is the best, I think (maybe look at Stanford's Encyclopedia of Philosophy's treatment of the subject [1]).
The way I like to think of Berkeley's position is like an equivalence argument. Suppose one is arguing that mind-independent objects exist (that is, there are things out there that cannot be perceived, but one claims to exist). Berkeley's conception of the world (in which such undetectable objects do not exist) is equivalent, at least from the perspective of any observer. Any mind/observer in the world, by construction, cannot perceive or detect (even indirectly) mind-independent objects. If they could, then those objects wouldn't be mind-independent. So a world in which mind-independent objects exist is indistinguishable from Berkeley's world.
Does it really make sense when materialists argue that unobservable, undetectable, totally unperceivable and uninferrable things in this world actually exist?
So, Berkeley argues that reality is actually contingent on our minds (and he tries to show that this isn't as big of a deal as it sounds).
Berkeley's real argument goes a bit differently (as I understand it), but I think a claim of functional equivalence may be more convincing for people with a math/CS background.
> The way I like to think of Berkeley's position is like an equivalence argument. Suppose one is arguing that mind-independent objects exist (that is, there are things out there that cannot be perceived, but one claims to exist). Berkeley's conception of the world (in which such undetectable objects do not exist) is equivalent, at least from the perspective of any observer. Any mind/observer in the world, by construction, cannot perceive or detect (even indirectly) mind-independent objects. If they could, then those objects wouldn't be mind-independent. So a world in which mind-independent objects exist is indistinguishable from Berkeley's world.
This does not seem like a sound argument to me... is it not tautological, or simply an observation that perception of reality does not necessarily match actual reality?
If one were to imprison a child in a room from day one, they would have no way to perceive what is outside the room, yet multiple outside observers would agree in very high detail that certain specific things outside the room do exist. Does this scenario not constitute some sort of reasonable disproof of this theory?
And if the counter-argument is that the perception of the outsiders is what causes those objects to exist, if we then killed all of those outside observers (say, just a few researchers who are aware of what is in the room surrounding the child's room), would the objects in that room then cease to exist (and if so, by what mechanism that we reasonably know exists)?
From your link:
>> Berkeley presents here the following argument (see Winkler 1989, 138):
>> (1) We perceive ordinary objects (houses, mountains, etc.).
>> (2) We perceive only ideas.
>> Therefore,
>> (3) Ordinary objects are ideas.
To me, the obvious flaw here is that there seems to be an implicit "only" perceived within the conclusion: "Ordinary objects are [only] ideas [and nothing else]." This is an extremely common error that the human mind makes, but you'd think that a philosopher would catch it in review of an idea (or the reviewers, who say: "The argument is valid, and premise (1) looks hard to deny."), so I feel like I must be missing something in the argument.
> Does it really make sense when materialists argue that unobservable, undetectable, totally unperceivable and uninferrable things in this world actually exist?
It makes complete sense to me (and I typically disagree with materialists)!
> So, Berkeley argues that reality is actually contingent on our minds (and he tries to show that this isn't as big of a deal as it sounds).
I 100% believe that reality is contingent on our minds, but I disagree extremely with the idea that this isn't a big deal - I think it might be the biggest (unrecognized) deal out there.
> This seems like not a sound argument to me....is it not tautological, or, simply an observation that perception of reality does not necessarily match actual reality?
The argument is trying to say that the distinction between "actual reality" (whatever that means) and perception of reality doesn't make much sense. One cannot break free of one's perceived reality in order to see/compare/reason about "actual reality".
> And if the counter-argument is that the perception of the outsiders is what causes those objects to exist ...
As I understand it, this is Berkeley's position.
> ... if we then killed all of those outside observers (say, just a few researchers who are aware of what is in the room surrounding the child's room), would the objects in that room then cease to exist (and if so, by what mechanism that we reasonably know exists)?
Yeah, basically -- assuming you, the experimenter/person observing this thought experiment, are also killed.
I think the issue with some of these thought experiments is that they assume there exists some omniscient perspective that can see every part of the experiment (i.e. the person running the experiment).
> To me, the obvious flaw here is that there seems to be an implicit "only" perceived within the conclusion: "Ordinary objects are [only] ideas [and nothing else]." This is an extremely common error that the human mind makes, but you'd think that a philosopher would catch it in review of an idea (or the reviewers, who say: "The argument is valid, and premise (1) looks hard to deny."), so I feel like I must be missing something in the argument.
Yeah, I think the link I sent doesn't sum up Berkeley's point well. Perhaps it'd be better to take a look at the original argument. [1]
> > Does it really make sense when materialists argue that unobservable, undetectable, totally unperceivable and uninferrable things in this world actually exist?
> It makes complete sense to me (and I typically disagree with materialists)!
Interesting! How do you reconcile this with the belief that "reality is contingent on our minds"?
> The argument is trying to say that the distinction from "actual reality" (whatever that means) and perception of reality doesn't make much sense.
Oh I disagree, and would offer belief (perception) in covid as an example of the consequences of not making a distinction between the two.
> One cannot break free of their perceived reality in order to see/compare/reason about "actual reality".
Only as a binary (100% breaking free and seeing True Reality as it is); as a spectrum, it is certainly possible to improve upon one's perceptions. Education is a good example of that, and for the already educated, things like meditation and psychedelics can teach you substantial new things.
>> ... if we then killed all of those outside observers (say, just a few researchers who are aware of what is in the room surrounding the child's room), would the objects in that room then cease to exist (and if so, by what mechanism that we reasonably know exists)?
> Yeah, basically -- assuming you, the experimenter/person who is observing this thought experiment, is also killed.
Is this not simply a proof by reassertion, easily countered by simply asserting the opposite?
> I think the issue with some of these thought experiments is that they assume there exists some omniscient perspective who can see every part of the experiment (i.e. the person running the experiment).
I mean that would certainly help, but asserting True/Accurate knowledge of the true state/nature of reality with no concern for what is actually true seems weird to me. Maybe this has something to do with the mind's common inability to not "know" certain things?
>>> Does it really make sense when materialists argue that unobservable, undetectable, totally unperceivable and uninferrable things in this world actually exist?
>> It makes complete sense to me (and I typically disagree with materialists)!
> Interesting! How do you reconcile this with the belief that "reality is contingent on our minds"?
Oh, I was not explicit in my belief: I am thinking of ~the unfolding of reality... the future state of reality is a function of human perception (accurate or not). But this isn't really relevant to the question of whether there are unobservable/undetectable things in the world - simply imagine a position in the deep dark corner of the ocean, in a spot where man and his devices cannot reach: what is located there? Nothing? Null? A black hole? I simply presume that there will be your typical ocean stuff, but I do not know this to be true. And that's just a simple example; a more complex one would be: how does this world that we live in actually work? Why do things happen the way they do, and not some other way? Now, it is often very difficult for people to realize that they do not actually know the answer to questions like this (including, or maybe even especially, materialists), but that is very different from them actually knowing the answer. (Apologies if that sentence is hard to understand!)
Great question! Yes, this line of reasoning is vague. New lines of inquiry can be quite vague until more is known. Here are my opinions to your questions:
1) The division of labor is straightforward. The brain collects the inputs and sends the information to the NPM, possibly instantaneously.
2) I think the need for sleep is by design, for a multitude of reasons. I do not believe the NPM sleeps; sleep may possibly be the most active PM <-> NPM state.
3) I believe the NPM carries on with the limited information that it is receiving. I don't think that it gets frustrated, per se, because perhaps its existence is eternal in some way.
4) Some sort of "entanglement". I'm pretty confident that this mechanism violates the physical laws as we know them.
This is interesting. I hadn't heard this idea before.
But it also describes another, simpler explanation (if you ignore the wording and look at the ideas): online and offline training for the brain, plus garbage collection.
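That online/offline reading can be sketched like this (my own made-up illustration of the analogy, not a claim about the brain): learn from the stream while "awake", then consolidate from a stored buffer and clear it while "asleep".

```python
# Hypothetical sketch of the wake/sleep = online/offline training analogy.
# "wake": update on each experience as it arrives, and buffer it.
# "sleep": replay the buffered experiences offline, then discard them
# (the "garbage collection" part). All details are invented for illustration.

class Learner:
    def __init__(self):
        self.value = 0.0    # stand-in for whatever is being learned
        self.buffer = []    # experiences stored for offline replay

    def wake_step(self, x, lr=0.1):
        self.value += lr * (x - self.value)   # online update from the stream
        self.buffer.append(x)                 # remember for later replay

    def sleep(self, lr=0.1, passes=5):
        for _ in range(passes):               # offline consolidation
            for x in self.buffer:
                self.value += lr * (x - self.value)
        self.buffer.clear()                   # garbage-collect the buffer
```

The offline passes squeeze extra learning out of already-collected experience, which is roughly the appeal of this explanation of sleep.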
Yes, they do. All dualists have opened themselves up to questions like this, though I suspect few have thought too hard about it. How many people believe in a soul that exists beyond the corporeal body? Billions, probably.
When asked, why is it when specific parts of the physical brain are damaged, specific (and often predictable) types of deficits occur, the answer given is something like this: if you damage a radio receiver, it is normal that it can't receive certain signals.
That is why I asked about what smallmouth's model was for situations like this: if the physical mind is damaged, does the non-physical mind remain intact but simply gets frustrated that the physical mind isn't carrying out its wishes?
I've also heard this kind of talk from some in the psychedelic community. They've had experiences that feel so real yet are incoherent with their understanding of the apparent "real world." Rather than concluding that these small molecules have simply interfered with our brain's ability to model reality and their drug-induced experience was an illusion, they instead think their mind lives in a larger reality and communicates with their earth-bound physical brain, and that a lot gets lost in the translation.
> Rather than concluding that these small molecules have simply interfered with our brain's ability to model reality and their drug-induced experience was an illusion, they instead think their mind lives in a larger reality and communicates with their earth-bound physical brain, and that a lot gets lost in the translation.
Oh boy, have I been there.
Psychedelic experiences really test the foundations of your world view. I am definitely not a dualist, but I can still vividly remember experiencing existence outside of time. When the dust settled I arrived at a panpsychist explanation, in that I expect consciousness to be a property of the computation that occurs in the universe, whereas brains are a particularly sophisticated nested computer. It just doesn't seem right any more that consciousness would occur in the brain but not outside of it. It's all space dust anyway - to believe that brain space dust is somehow special for some reason seems as unreasonable as the beliefs of a dualist. Not sure how crazy I am to be thinking this (doesn't everyone think their own worldview is the sane one?), but I can very easily imagine people coming up with all sorts of even crazier interpretations of their experience.
I really believe that in this communicational context - in these pages - statements 'A is B' should always be accompanied by relevant, sufficient (for the context) justification. There is legitimately no 'A is B'; there is only 'One can state that A is B owing to C'.
Otherwise, anyone here could state 'Neurons function through lightbeams (full-stop)', 'Neutrinos are Leibniz's monads (full-stop)', 'Filippa's Republic is better than Western Democracy (full-stop)', 'Smith is wrong (full-stop)', 'The ratio of circumference and diameter is clever (full-stop)'...
If anybody stated 'A is B (full-stop)', another could come up with "No it isn't". We would be at Monty Python's Argument Sketch - a parody of the "drily strictly professional", soulless¹ spoilt cheap service associated (in some cultures) with brothels.
How do you reconcile this view with the findings that various mental operations correspond directly to processes occurring in the brain? Doesn't it seem an odd coincidence that a simple "gateway" also contains everything it would need to do the work itself, without a gateway?
Not the parent and not a “dualist”, but I sometimes like to engage in thought experiments comparing us to intelligent entities on a webpage that have been given access to a REPL on their current context:
We discovered document.body.innerHTML (dna?), and perhaps have found a way to establish a debugger connection too (eeg/ekg/etc?). We can see that various sequences of token inputted to the REPL correspond to reproducible outputs (gene engineering), but we have no real understanding how it all works under the covers. That is, we don’t know anything of the miles of renderer/OS/hardware/physics stack that makes it all possible.
Importantly to this discussion, we don’t know anything about a funny little sequence called XMLHttpRequest. We see it all over the place and can easily see how particular behaviors correspond to the sequence being invoked, but as far as we can tell it doesn’t act all that differently to any of the other token sequences we test, being perhaps most similar to Math.random, with the exception that it takes a seed as input.
I don't really have an opinion on the subject, but I do think this argument you are making isn't all too convincing.
Hypothetical scenario: imagine we hadn't discovered the medium of air, yet we had discovered that putting a sail on a boat sometimes pushes the boat forward. In that case, we could explain the mechanics of the sail with simple spring models, we would observe how the sail is perturbed as it accelerates, we could see that it's made out of atoms and see how those atoms build molecules which build threads which make a fabric, we could even convincingly model the fabric in computers - yet it would tell us very little about why the boat is animated by the sail.