
Simulation, Consciousness, Existence (1998) - Artoemius
http://www.frc.ri.cmu.edu/~hpm/project.archive/general.articles/1998/SimConEx.98.html
======
araes
An analogous way of putting this argument: this world is a dream. It is a
reflective dream that builds content based on the observations of the
observers within that content. If I look over here, it tells me about here; if
I look there, info about there. As I build understanding and knowledge about
the things I'm looking at, and about the act of looking itself, I gain the
ability to ask deeper questions against what is effectively a lookup table of
information about existing truths of the dream that I have already observed.
As I observe those events more closely, and build a better reference or lookup
of the information related to them, they in a sense become more stable and
more solid.

In the current state, all of the aspects of what we might consider a standard
"dream" are present; they just exist within the wrapper of technology. The
standard "sufficiently advanced technology is indistinguishable from magic"
argument. Truly, all technology is magic, just accomplishing magical goals
with a symbolic wrapper of tech.

Another way to look at it, is we all live within a light hologram. All of
existence is a light hologram. It is like a bright point of light that we are
continually looking closer at, and as we do, we discover the complexities of
color and form within what was originally just a wash of light. All/No Colors
-> White/Black -> RGB, etc. Researchers have already demonstrated the
capacity to create subatomic particles using nothing but the confluence of
light. It isn't a huge leap to then surmise that all "matter" is actually
condensed or bound light.

We are effectively consciousness objects sharing the delusion that we all
possess "physical" bodies and interact with a material world, but it's a
reflective argument, because the material objects we interact with are only
material to things within the simulation of materialism. It's like we're all on
the Star Trek holodeck, but the joke is that we're all actually the Doctor,
and all the meat things which think they exist within the "real" world, are
fundamentally just as immaterial as the Doctor - sharing only the one thing
they all possess, which is consciousness, and the ability to observe, interact
with, and build knowledge about their external world.

------
monadai
My alma mater ^_^. The basic loop is : [http://monad.ai/wp-
content/uploads/2016/09/dev_framework.png](http://monad.ai/wp-
content/uploads/2016/09/dev_framework.png) You can alter the environmental
context easily by re-directing the wire in/out : [http://monad.ai/wp-
content/uploads/2016/09/dev_context.png](http://monad.ai/wp-
content/uploads/2016/09/dev_context.png). An individual would be none the
wiser.

Sure we could be in a simulation. It's interesting to ponder on it. You'll
begin forming answers when you dissect the different components,
relationships, interactions, and try to create your own version of it : Say
AGI.

;)

~~~
erikpukinskis
It's amazing how many people speak about AGI as if it's even a theoretically
plausible thing.

It's about as plausible and well substantiated as the Christian God. Which
isn't to say it's impossible, but just that it's almost entirely an article of
faith. There are a few anecdotes which maybe hint at its possibility. But
nothing even approaching a rational explanation of how it might be built.

Also, the model of cognition you are describing (information processing or
I/O) is very old, from the 1960s. It was inspired by the discovery of the
computer. There are other models, like Ecological Psychology, Embodied
Cognition, Distributed Cognition.

It is tempting to draw a box around the brain and posit that it's the most
important interface and that all of the important information passes through
it. But whenever you break down so-called "cognitive" phenomena, you find that
often very little information passes through that barrier. The lion's share of
encodings remain outside the brain, and for any given animal task, quite a lot
of information processing happens outside the brain.

In a weird way, the information processing model is a vestige of the notion of
a soul. If you really accept "physical fundamentalism" as OP describes it,
then the interface between the brain and the rest of the world is nothing at
all. Just a ribbon of atoms with a name. No more interesting than the
interface between your stomach lining and the bacteria inside, or between the
vibrations in front of your mouth and all nearby ears.

The only reason to center the brain/environment interface is to try to
separate what you consider to be the essential identity of a person from their
physical grounding. I.e. to maintain a model which includes a soul.

~~~
monadai
At this point, it seems there is no manner of detailing that will convince
people that it's plausible. I'm honestly unaware as to why people can't fathom
it. It's a systems issue. If you want to resolve a systems issue, you need
people who have worked on complex systems.

Having done the research myself, having the computational models in front of
me, and currently developing iterative capability levels, I can assure you it's
plausible. I've developed some pretty complex infrastructures and systems
during my time in industry.

I encountered the same issue in industry when I was developing the network
infrastructure equipment that ensures your packets traverse the net. "You
can't do that". "That isn't possible". "You can't shave off 300ms from that
process. No one has touched that code in 10 years"

Why yes you can, and I personally already have a track record of doing what is
said can't be done. Everything is implausible until it's made plausible. So
you'll never truly know until you try.

The biggest hurdle blocking people's way is that they're choosing to run down
the same path. Why should you expect different results when everyone is
approaching the problem the same way? That, and instead of the most
knowledgeable people getting in the trenches and attempting to write code,
they remain in the philosophical camp and their works are tossed across the
wall to the applied engineering camp. Rarely do you find someone who wears
multiple hats
or straddles the fence. I straddled the fence, saw what I saw, and now I'm
developing it.

There are few who want to start from scratch and build up models. Many are
ripping models from work done in the 60's, 70's, and 80's without a second
thought as to what the thinking behind them was. I chose an alternative path.
It's paying off.

The model of consciousness that is being used is actually not detailed in any
way (purposely). So, it is not very old. I have a stack of annotated white
papers on the pioneers from the 60's/70's/80's centered on this inquiry and
present day papers on : Global workspace theory (GWT), integrated information
theory (IIT), etc. I fail to see any deep connection between my approach and
their approach.

I intentionally haven't given any detail about the computational model of
consciousness that is at the center of the architecture nor even the slightest
detail on how to implement it. Given the climate in this space, I hope you can
appreciate why.

You see a box. I see a relationship. There are no broad boxes over anything I
am developing. The diagram was made in simple reduced form to help one
conceive of the ties and flows to and from the world. People sing high praise
of OpenAI and OpenAI gym. I experimented with similar open source packages
when attempting to create a virtual environment for testing my code. I
resolved on different packages and developed my own gym. I needed more access
to the core/gut functions. From what I can tell, there are several other
groups/companies/individuals that have done the same. No mention of them ever.
No praise. Which is fine, but it just goes to show you how there are likely
numerous groups making headway in this area that no one has ever heard of.

>In a weird way, the information processing model is a vestige of the notion
of a soul. If you really accept "physical fundamentalism" as OP describes it,
then the interface between the brain and the rest of the world is nothing at
all. Just some atoms of many. No more interesting than the interface between
your stomach and your brainstem, or between the vibrations in front of your
mouth and all nearby ears.

Interface/Relationship. There are no 'boxes' until you create one.

> The only reason to center the brain/environment interface is to try to
> separate what you consider to be the essential identity of a person from
> their physical grounding. I.e. to maintain a model which includes a soul.

Objective reality (governed by strict laws like physics)... Subjective
experience. Pay close attention to the wording I use, as I don't give many
details.

Seeing will eventually be believing. Once made manifest, you won't be able to
deny its plausibility. Seems one can save a lot of time skipping attempts to
try to convince people and just get to the development.

But yeah, consciousness isn't that serious. You just have to think outside the
box to begin making progress on it. Whether or not we're in a simulation is
immaterial. The word 'simulation' really loses its meaning once you peer deep
into the constructs that underlie the universe. What does that even mean, and
how, even if you discovered it was a simulation, would you alter it in any
meaningful way? Don't you think the person who created it, given how amazing
it is, had the wherewithal to implement safe-guards/alerts? Or even made
universal laws that forever restrict you from certain things? It's better
that you focus on how it works than trying to define it. It makes for good
storytelling, but I'd rather just dig in, understand it, and make use of that
understanding instead. Again, do you want to sit around philosophizing and
dreaming about it all day long, or do you want to start converting that
understanding into something groundbreaking?

P.S - A component of the research that was conducted centered heavily on
physics/quantum physics. It is quite important to understand the 'environment'
and its laws when working on AGI.

~~~
erikpukinskis
I just want to say: I wholeheartedly support you and any and all research in
this direction. I think AI is and will be hugely important.

You think I will be surprised how "general" the AIs are. I think you will be
surprised how similar to human bodies you have to make them for them to
approach a human definition of "general".

This is the only sense in which I think AGIs are impossible... not that they
couldn't be fabricated, but that they would be functionally indistinguishable
from a human with access to some good subroutines.

> Seeing will eventually be believing. Once made manifest, you won't be able
> to deny its plausibility.

I look forward to it. I say the same thing about my own work all the time, of
which many people are incredulous. But still, you must admit that doomsday
prophets say the same kind of thing. Again, please don't take that as
disbelief, just not-yet-belief.

Regarding the specifics of the models, I would humbly submit this (as old as I
am) paper on Ecological Psychology as a good, usable alternative to the
information processing model of cognition:
[http://www.trincoll.edu/~wmace/publications/realism.pdf](http://www.trincoll.edu/~wmace/publications/realism.pdf).
If you don't want to wade through the high philosophy stuff, pages 194-209 get
into more concrete specifics of the model they are proposing.

To me it seems fundamentally different than the I/O based picture my OP
proposed. I would be interested to hear your perspective. And hit me up if you
ever feel like coming out to Oakland to chat with a fellow crazy person with a
passing knowledge of cognitive science and quantum mechanics. I would love to
buy you a coffee.

~~~
psyc
I read 194-209. I think their approach is fantastic, and it's rare to see such
careful thought on this subject. However... nothing struck me as being
incompatible with an information IO model?

~~~
erikpukinskis
Their core hypothesis is that you can't separate any kind of agent from their
physical capabilities in the environment and still have cognition. That
perception and action are not two phases but are an atomic unit of analysis.
Perception is action. And what is perceived is not signals but relationships
between an organism's capabilities and its environment.

------
empath75
His book "Robots: Mere Machine to Transcendent Mind" is really excellent, even
if he takes quite a few gigantic logical leaps that aren't really justified,
imo. It's just a great piece of futurology.

~~~
ThomPete
Agree, one of the better books on the subject in the sense of framing a
potential path towards the robotic revolution.

------
have_faith
Morpheus from The Matrix: "What is real? How do you define 'real'? If you're
talking about what you can feel, what you can smell, what you can taste and
see, then 'real' is simply electrical signals interpreted by your brain."

Doesn't feel like there's ultimately any way out of this line of reasoning.
What would it take to prove to you that you are indeed not in a simulation of
some kind? as the only methods of providing proof are also parts of the
simulation.

~~~
naasking
> Doesn't feel like there's ultimately any way out of this line of reasoning.
> What would it take to prove to you that you are indeed not in a simulation
> of some kind? as the only methods of providing proof are also parts of the
> simulation.

Simulations are just as real. Why wouldn't they be? What makes "natural laws
enforced by reality" really any more "real" than "natural laws enforced by a
simulation"?

~~~
goatlover
Are simulations just as real? Or are they only culturally real? A simulation
of the weather only makes sense when you have humans around to interpret the
output from displays. Inside the machine, it's just a lot of 1s and 0s. It's
not even really that. It's a lot of electrons moving about. A bunch of
electrons aren't a simulation of the weather. It's only because human
culture has computing devices that simulations make any sort of sense.

Physical systems aren't about anything, and don't represent anything on their
own. It's the entire problem of intentionality.

~~~
unfunco
> It's a lot of electrons moving about

The actual weather is also a lot of electrons moving about.

~~~
goatlover
Sure, in part, but we don't consider weather systems to be running
simulations.

Although Jaron Lanier wrote a paper on treating a meteor shower as a simulated
computer running simulated minds to make a point about consciousness.

------
simonh
My main problem with the simulated world argument is complexity. Take the
Billiard Ball example[0]: it implies that to accurately simulate the universe
you can't really get away with approximations. Under close enough scrutiny
discrepancies in the simulation are discernible, and we can scrutinize it at
the subatomic level. But to simulate the observable universe, how big would
your computer need to be? How slowly would the simulation run relative to the
simulator's real-time? It just doesn't stack up.
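The scale problem can be made concrete with a rough back-of-envelope count
(the figures below are common order-of-magnitude estimates, not from the
essay or this thread):

```python
# Rough back-of-envelope: storage needed to track every elementary
# particle in the observable universe. All figures are assumed
# order-of-magnitude estimates, not measured values.

PARTICLES = 10**80          # commonly cited estimate for the observable universe
BITS_PER_PARTICLE = 100     # assumed: enough for position, momentum, spin, etc.

bits_needed = PARTICLES * BITS_PER_PARTICLE  # ~10^82 bits

# Even an ideal memory storing one bit per atom would need more
# atoms than the simulated universe itself contains.
atoms_available = 10**80
print(bits_needed > atoms_available)  # True
```

Under these (generous) assumptions the simulator's memory alone outweighs the
universe it simulates, which is the sense in which it must be "bigger".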

The only way to do it would be to fake it by generating the appearance of a
thorough simulation rather than the reality of one. In which case the
arguments put forward for wanting to perform a real simulation - to simulate
history and so forth - break down, because you'd only be emulating the
appearance of it, not simulating it.

The only way out of this I can see is if the universe containing the simulator
were vastly more complex than ours, such that in comparison our universe would
be trivial to simulate. But then why would they do it? Our universe would be
nothing like theirs. In principle this is possible, but it massively reduces
the chances that our world is a simulation because only a subset, and quite
possibly a vanishingly small subset, of possible universes would be capable of
hosting the simulation. Possibly fewer such universes than there are universes
like ours. At which point the odds of ours being a simulation collapse.

[0] [http://www.anecdote.com/2007/10/the-billiard-ball-
example/](http://www.anecdote.com/2007/10/the-billiard-ball-example/)

~~~
XorNot
We already run simulators of our universe with vastly lower resolution than
reality, and they _do_ tell us useful things about it.

~~~
AnimalMuppet
Sure, but that isn't simonh's point. Those simulations aren't very good.
There's nobody living in those simulations that thinks that they're alive, let
alone that is capable of creating a working computer out of the material in
that simulation.

~~~
simonh
Right, and the Billiard Ball Example shows what you need to do to even just
simulate a game of billiards. To do so accurately you have to simulate every
elementary particle in the observable universe. That's how complex and
interconnected the universe is.
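The billiard-ball point is about chaotic sensitivity: tiny approximation
errors compound exponentially, so coarse models drift away from reality. A
toy illustration using the logistic map (a standard chaotic system chosen for
brevity, not an actual billiards simulation):

```python
# Two trajectories of the chaotic logistic map x -> 4x(1-x),
# initially differing by one part in a trillion.
def logistic(x):
    return 4.0 * x * (1.0 - x)

a, b = 0.3, 0.3 + 1e-12
separations = []
for _ in range(60):
    a, b = logistic(a), logistic(b)
    separations.append(abs(a - b))

# The gap grows roughly exponentially until the two trajectories
# are completely decorrelated.
print(max(separations) > 1e-3)  # True
```

The same effect in billiards is what forces a faithful simulation to track
ever finer influences rather than approximate them away.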

~~~
XorNot
Arguing that present technology is insufficient doesn't prove that all future
technology would be insufficient. The trajectory of improvement rather
suggests the opposite.

~~~
simonh
It's not a matter of technology, it's a matter of complexity. By definition a
computer capable of simulating every elementary particle in the universe would
have to be many orders of magnitude more complex (in crude terms 'bigger')
than the universe itself, even assuming ideal technology with mathematically
perfect efficiency.

------
eternalban
The elephant in the room of the assembled Gaian biomorphs is the Human
capacity for discerning _meaning_. _Meaning_ and consciousness are very much
related.

A simulation is only dealing with _form_.

~~~
hackinthebochs
>A simulation is only dealing with form.

It seems to me a sufficiently precise simulation would necessarily capture
meaning. If meaning is critical to decision making and that decision making is
precisely simulated, then the simulation must also capture meaning.

~~~
eternalban
> It seems to me a sufficiently precise simulation would necessarily capture
> meaning.

That is the very bone of contention here.

> If meaning is critical to decision making.

I don't believe it is in the general case. Our current crop of Go winning
machines seem to indicate otherwise.

~~~
hackinthebochs
Within the context of Go, what meaning does the Go playing machine lack?

~~~
goatlover
All of it. It has no idea that it's playing a board game, or even what a board
game is.

That it's even playing Go is a human interpretation of what the machine is
doing. Granted, we gave it that interpretation in the form of software
instructions. But to the machine, it makes no difference.

~~~
hackinthebochs
_Within_ the context of Go there is no board game, it's just sets of possible
states and transitions between valid states. Any feature of the game Go is
encoded within this state space. The "meaning" of pieces, moves, captures,
win, loss, etc are all encoded here. The go playing machine may not capture
these concepts explicitly at a high level, but I'm not sure that's an
important distinction.

Concepts like board games, people, ancient Chinese culture, etc are all
external to Go.
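The "states and transitions" picture can be sketched in a few lines. This is
a deliberately stripped-down toy (a 2x2 board, ignoring Go's capture and ko
rules) just to show that a "move" can be defined entirely as a relation
between states:

```python
# A 2x2 "Go-like" board as pure state: '.' empty, 'B'/'W' stones.
# A move's entire "meaning" here is which state-to-state transition
# it names; nothing outside the state space is required.
EMPTY = ('.',) * 4

def legal_moves(state, player):
    """All states reachable by placing one stone on an empty point."""
    return [
        state[:i] + (player,) + state[i + 1:]
        for i, point in enumerate(state)
        if point == '.'
    ]

succ = legal_moves(EMPTY, 'B')
print(len(succ))  # 4 possible opening placements
```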

~~~
goatlover
Yeah, but the encoding of the possible states, etc. are all based on the actual
board game. We made an abstract version of the game and fed it to a learning
algorithm.

~~~
goatlover
hackinthebochs:

Right, but the parts of the abstraction only have meaning because they are
given meaning by us from the reality we abstract from.

~~~
hackinthebochs
You seem to be defining meaning just as what conscious entities endow
something with, and so our abstract notion of Go necessarily receives its
meaning from us. I don't agree with this.

At its most basic, meaning just is the set of concepts and behaviors that
allow correct manipulation as judged by _some_ standard. So in this case the
rules of the game have meaning within the context of a game of Go as they
allow for correct manipulation of the game state. That what constitutes valid
board states was derived from conscious entities isn't relevant here. The
rules of Go have meaning (allow for proper manipulation) within the context of
the system of valid board states and transitions between them.

(you can reply to comments by clicking on the timestamp)

~~~
goatlover
Concepts are something minds create to make sense of the world. Correct
manipulation as judged by standard is also a mental judgement. The rules of Go
are rules because human beings defined them. That there is a context for valid
board states is because we created a game that had a context.

~~~
hackinthebochs
>The rules of Go are rules because human beings defined them.

But this says nothing about meaning within this framework.
"Concepts"/entities/units within the framework have meaning precisely because
of the relations inherent between the entities and states within the
framework. The ultimate source of the framework is not relevant.

The entities in the system do not "get" meaning because of a conscious
observer, they get meaning because of the relational properties between the
entities. If every person in the universe suddenly died, those meaningful
relationships would still be valid. After all, the relationships entailed by
math are true regardless of whether anyone is there to recognize them.

------
dschiptsov
The evil demon which Descartes described is very real - it is one's own mind
conditioned by dogmas, traditions, and so-called collective consciousness, and
the illusions that such a mind produces are as good as real.

Simulations can tell nothing new about the true nature of reality, because any
simulator would reflect the current assumptions up to date and will be based
on an oversimplified model. Weather simulation is not a weather. Map is not a
territory.

------
fiatjaf
> The prescientific suggestion that humans derive their experience of
> existence from spiritual mechanisms outside the physical world has had
> notable social consequences, but no success as a scientific hypothesis.

Why?

~~~
bbctol
Depending on your definition of "outside the physical world," it's at best
untestable. You can explain consciousness by saying it comes from a massless,
noninteracting, odorless, tasteless... etc. but that explanation doesn't
really help. So far, lots of other things that were previously explained by
non-physical whims of God have had at least partially reductive explanations
in science; a lot of people hope that consciousness will yield the same. It
still may be possible that consciousness is a spiritual phenomenon, but with
no way to test, prove, or expand on that, scientists would rather not close
the book there.

~~~
goatlover
The modern philosophical argument against a scientific explanation of
consciousness is that science is an objective, third person pursuit,
abstracting away from the subjective, first person. If so, you can't hope to
explain the subjective in terms of the objective.

There is no need to invoke the spiritual or supernatural to see that
consciousness is a problem for science.

~~~
bbctol
It honestly depends on who you ask. A lot of people think consciousness is
inherently unsolvable by normal science (Chalmers' "hard problem of
consciousness"), but there are plenty (Dennett etc.) who will deny that such a
hard problem exists, and make reasonable arguments that the scientific method
can make headway in reductively explaining consciousness.

------
tim333
I've thought about this essay a fair bit over the years. It's one of the things
that persuaded me that reality is mathematical in nature.

------
sa_su_ke_75
In the dream, for Eastern philosophy, we create the knower, the knowledge,
and the known object; or in other terms, the object experienced, the
experiencer, and the experience of the object known.

------
drostie
I am very happy to see this at least bring up one of the most strange parts of
Everett's many-worlds (and others' "many-minds") theories: that in them, you
are immortal; for there always exists a possible world which you didn't lose
consciousness in, and your final consciousness will only propagate into those
worlds.

Searle's objection still seems to hold some water, though. Consciousness does
not seem to be a computation because whether something is a computation is
observer-relative; for some observers this set of electrical flickering makes
sense as a computation to produce a sunflower-like pattern of points based on
emitting branches in directions of (pi * the golden ratio) radians... but for
the vast majority of observers probably it doesn't seem like anything until I
print out a picture of the result; and even then it might not mean anything to
those observers (they might be blind, or they might not associate it with
sunflowers, or they might have alien brains so differently wired from mine
that they simply cannot appreciate art the way that humans can). We actually
have formally defined computation to be observer-relative in precisely the way
that the status of what words a book contains and what those words together
mean is observer-relative (think that in some other parallel universes the
English language was exactly the same but that the words for 'cat' and 'dog'
were transposed, and so this same book tells a somewhat different story in
those worlds).
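The sunflower computation described above can be written out directly. This
sketch uses Vogel's classic phyllotaxis model with the comment's own branching
angle of pi times the golden ratio (the textbook "golden angle" is the related
value 2*pi*(1 - 1/phi), about 137.5 degrees):

```python
import math

PHI = (1 + math.sqrt(5)) / 2     # golden ratio
ANGLE = math.pi * PHI            # the comment's branching angle, radians

def sunflower(n):
    """Vogel-style spiral: point k at angle k*ANGLE, radius sqrt(k)."""
    return [
        (math.sqrt(k) * math.cos(k * ANGLE),
         math.sqrt(k) * math.sin(k * ANGLE))
        for k in range(n)
    ]

points = sunflower(200)
print(len(points))  # 200
```

Whether these coordinates "are" a sunflower pattern, or just numbers, is
exactly the observer-relative question at issue.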

The problem is that my two bunnies seem to be quite conscious, to say nothing
of myself or my girlfriend. It's not just that they're conscious-relative-to-
me-but-it-depends-who's-looking... if that's true then it's a very different
perspective which almost nobody takes seriously and practices. My bunnies just
seem to be conscious, full stop. They appear to have both interests and the
capacity to feel pain (observer-relative consciousness), but it appears to be
more than just an appearance! In some sense they are objective observers
relative to whom their own consciousness is defined; therefore they are
objectively conscious in a way that computations just don't seem to be
objectively anything.

The hope of the functionalist approach to consciousness, with its common-
sensical "anything which could replace this airy-fairy consciousness stuff in
all of its functional roles would be equally justified to be called
conscious," is therefore that as processes with no-intrinsic-meaning become
more complex and more involved, there is some way to say "no, the parts of
that don't have much intrinsic meaning by themselves, but you put them
together and then this thing is objectively computing X or Y, there is just no
other way for an observer to view it, it has passed a complexity threshold
beyond which there is only one interpretation of it." Our books, with the cat
<-> dog substitution looming in our minds, clearly don't pass this threshold
by-and-large, but perhaps things more complicated processes than those books'
narratives can?

~~~
psyc
The premise about computation being observer-relative overestimates how much
meaning we read into a system, and underestimates how much we read out of it.
When two 1kg bags of sand land on a lever and counterbalance the one 2kg bag
at the other end, that has an objective, logical, correspondence-based meaning
that transcends observers. When a domino computer "executes" - same. When a
silicon computer computes - same. As the computer gets more complex, it
becomes harder to intuit how the physical working is objectively a
computation, but this proceeds by straightforward extension from the bags of
sand.

We can certainly attach semantics to the numbers (e.g. saying this bit pattern
represents dollars, or a spaceship's shield %), and _that_ is observer-
relative. But that is completely different from categorizing or understanding
a process as a computation in general, in terms of logic.

I'll stop short of implying that it's the same for the human brain, because
nobody should be pretending to understand the brain at this point in history.
However, this does provide a way to see how it _could_ be true for the brain,
if it is ever determined that the brain is precisely equivalent to a computer.

~~~
drostie
Sorry, what is the "objective, logical, correspondence-based meaning" that
you're referring to? Moreover, you've already loaded a lot of interpretation in
there: "1kg" vs "2kg", "counterbalance".

Let me put it to you a different way: suppose that I put two bags of some
substance on one side of a beam attached to a fulcrum, and one bigger bag
containing a substance that looks similar, on the other side of the beam.
Suppose that the beam continues to tilt such that the bigger bag is resting on
the ground. There are definitely some observer-relative ways to read this
situation; but is there an observer-independent way to read it, which goes
beyond what I have already said about it in this paragraph?

