
Is Matter Conscious? - dnetesn
http://nautil.us/issue/47/consciousness/is-matter-conscious
======
Animats
This seems to be the old homunculus argument in another form.[1]

As knowledge of how brains work increases, this will probably become a
non-issue. A big question for millennia was "what is life?" This question continued
through the discovery of bacteria, and later, DNA. As the lower level
mechanisms of biology were understood, it stopped being a question. Bacteria
are alive, and are reasonably well understood. The components of bacteria are
not quite alive, but are also understood. (There's still some argument over
whether viruses are alive, but that's now just an argument over the
definition.) "What is life" is now a "go look it up" question, not a mystery.

"What is consciousness" will probably go the same way in time.

[1]
[https://en.wikipedia.org/wiki/Homunculus_argument](https://en.wikipedia.org/wiki/Homunculus_argument)

~~~
Razengan
Suppose we build some robots with AI and send them to space, and they land on
some planet.

Those robots can mine raw materials and build other robots like themselves.

They can learn, and teach, and improve their designs and invent new things.

Fast forward a few hundred years. The planet now hosts a civilization of
robots, with their own culture, planning their own excursions into space.

To someone searching for Life™ and Intelligence™ in the cosmos, what would
make our robot civilization fail to qualify as such?

~~~
visarga
An external observer should realize that robots are self-replicators (like
people) that fend for themselves in the environment (like people, again), by a
complex process of adaptation to external situations, moment by moment (yep,
like us).

~~~
Razengan
And yet, when we make them here, everyone will consider them to be only
"artificially" intelligent, and they certainly won't be "alive" because, well,
no cells and no DNA and stuff.

So the GP's comment about life being a "go look it up" question is debatable
at best. Expect the definition to change over the centuries.

------
ppod
I don't see any advance in this debate since things that Dennett and
Hofstadter wrote 20+ years ago (both independently and in their co-authored
book "The Mind's I[^]"). It's surprising to see academic philosophers still
using arguments like the "redness problem" described in this article. How can
anyone with any knowledge of visual neuroscience or artificial neural networks
be confused by a generative system producing a novel state?

And perhaps more controversially, I'm always a bit taken aback by the constant
blasé assertion that consciousness is a mystery. Is it really surprising that
we have a first person subjective experience? We know that we are incredibly
complex things, constantly integrating and acting on very complicated external
stimuli. Such a system should have references to its own body and its own
neural states, its train of reasoning should frequently include itself, its
focus will drift forward and backwards in time... this is just how a system
like this would work. If the system communicates about its state then its
language should have referents to these internal states, referents like
"experience", and "feels like", and "I understand". Is that surprising?
Wouldn't it be surprising if it _wasn't_ like that?

I think that Tononi's approach is a good approximation, but it can't be a full
solution because the word 'consciousness' is too anthropocentric. One
criticism of IIT showed that a seemingly uninteresting complex artificial
system could have a very high IIT complexity quotient. The problem is that the
things we use to define the term 'consciousness' are things that can be
approximated to varying degrees by chimps or dolphins or generative
adversarial networks or antfarms or thermometers. But behind our use of the
word 'consciousness' there is still almost always a very slightly disguised
dualism that uses it as a substitute for the word 'soul'.

[^]edited

~~~
rexpop
It's "The Mind's I: Fantasies and Reflections on Self and Soul" by Douglas R.
Hofstadter and Daniel C. Dennett

"The Mind's Eye" is by Oliver Sacks (who has almost definitely read
Hofstadter/Dennett)

[https://www.goodreads.com/book/show/2081.The_Mind_s_I](https://www.goodreads.com/book/show/2081.The_Mind_s_I)

------
KennyCason
I think the title may mislead some and leave them dissatisfied. This article
(and the comments) are really addressing the unknown nature of consciousness
and matter; specifically, how they seem to be circularly related: our
consciousness seems to have originated from our physical configuration, yet
the only way we perceive/understand physical reality is through our
consciousness/mind.

It also concludes with the fun statement about "the possibility that
consciousness is the real concrete stuff of reality", which is quite a
statement to truly unpack and understand, and one I will not attempt now. :)

~~~
visarga
The article is referring to "idealism", the belief that the world is primarily
ideal (ideal = "of the same kind as ideas" = mental), not physical. It says
that the world we perceive is just an internal state inside the fundamental
substratum, which is consciousness itself. It was in vogue in India a thousand
years ago in the time of Abhinavagupta, and appeared in many other places,
even in Europe. Look up idealistic monism on Wikipedia.

------
markbnj
This reminds me of Ursula K. LeGuin's "A Wizard of Earthsea," in which all
things had a name, and knowing a thing's name granted power over it. At the
same time it also seems weirdly suggestive of The Matrix.

~~~
jat850
Holy wow. I just finished reading the first of the Kingkiller Chronicle books,
The Name of the Wind. I did not realize this central theme was so similar
(and/or derivative). My exposure to sci-fi and fantasy is more deep than
broad: a lot from the same authors instead of a little from many. I think I
must be somewhat blind to how common this probably is.

~~~
bitexploder
I have read a lot of sci-fi and fantasy fiction. A lot. Very few stories
surprise me regarding the broad strokes or the philosophical system/theme as
it relates to magic and world-building. I always enjoy it when an author
teases a new perspective on a known theme or system. Sanderson is great at
taking a "known" system and bounding it tightly, thus making it a great
storytelling device.

I always enjoy it when an author with a deep understanding of a subject like
game theory weaves it into the magic and the story, giving a view into
something new. So "everything is derivative". And nothing is :)

~~~
jat850
I also just finished the first Mistborn book. I REALLY enjoyed that, much more
than the Kingkiller Chronicle (of which I've read two books). I'm moving on to
the next one and am excited that there are a number of books in this same
series/world I can consume.

I thought the magic system in Kingkiller was pretty neat, and I think the same
of Mistborn. They further exposed my narrow scope, because I am very used to
conventional, plain magic; both systems were different, unique, and
well-crafted.

~~~
bitexploder
This may be a little like cheat codes for fantasy fiction, but look up
Sanderson's creative-writing course lectures. They are meant for aspiring
fantasy authors, but they have also made me a much more critical sci-fi and
fantasy reader.

I pulled it up: 2013 Lecture 1:
[http://www.youtube.com/playlist?list=PL8YydnShI45jSbRdMeyQ-S...](http://www.youtube.com/playlist?list=PL8YydnShI45jSbRdMeyQ-SiKJDKYaTeMe).
Start here. This YouTube account has all of the lectures. As an avid fanfic
reader, I found these lectures immensely enjoyable, especially as it is
essentially one of the preeminent authors of this era sharing his secret
sauce freely.

~~~
jat850
Thanks for the dialogue and information! I've bookmarked this and will make my
way through it as well.

------
dgreensp
This is actually one of the better presentations of this idea, and it gave me
a better handle on it.

Here's one way to look at it. If you accept the claim, "My conscious
experiences exist, non-physically," which seems true, you can reason to some
interesting places from there. For example, it would be pretty weird if you
were the only "conscious" person in the world, surrounded by zombies who only
appear to have an inner life, so we can say that all brains have
consciousness. Then, you can keep delicately applying Occam's razor to say
that what we call consciousness could be just one manifestation of a more
basic phenomenon, which fills a metaphysical explanatory void.

Even if we don't accept the claim, and we say consciousness doesn't really
exist and we are merely projecting its existence, we could still discover new
phenomena that seem just as real to us as the illusion of consciousness,
despite not being clearly connected to physical laws. Maybe pretending that
it's possible to have a "direct experience" of something can reveal that
everyday consciousness is just one kind of direct experiencing.

------
visarga
> No matter how precisely we could specify the mechanisms underlying, for
> example, the perception and recognition of tomatoes, we could still ask: Why
> is this process accompanied by the subjective experience of red, or any
> experience at all? Why couldn’t we have just the physical process, but no
> consciousness?

The difference comes from context. It is one thing to have a feedforward
network that detects tomatoes, another to have a reinforcement learning agent
that optimizes for survival in nature.

The RL agent has a value function that assigns a predicted future reward to
the current state and actions available. The value network, together with the
sense data, is "what it feels like" for the agent to see the tomatoes. Maybe
it knows that red tomatoes are good, or that red fruit are ripe, and they can
give it a reward signal by reducing hunger. That makes perceiving red an
experience colored by emotion. Maybe agents that didn't see red tomatoes as
good had less chance of survival and died off.

Also, the RL agent is not a simple feedforward net, but a loop:
perception -> judgement -> action -> effect in the environment, plus rewards.
It is a dynamic process, evolving from moment to moment. In that continuous
perception-action-effect loop, there is space for the inner world and all its
complexities.
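The loop described above can be sketched in a few lines. This is a toy
illustration only: the two percepts, the reward numbers, and the tabular
value function are invented for the example, not a claim about how real
brains or any particular RL system work.

```python
import random

# Toy sketch of the perception -> judgement -> action -> reward loop.
ACTIONS = ["eat", "ignore"]
REWARDS = {
    ("red_tomato", "eat"): 1.0,     # ripe: reduces hunger
    ("green_tomato", "eat"): -0.5,  # unripe: mildly bad
    ("red_tomato", "ignore"): 0.0,
    ("green_tomato", "ignore"): 0.0,
}

def run_agent(steps=2000, lr=0.1, explore=0.1, seed=0):
    rng = random.Random(seed)
    # Value function: predicted reward for each (perception, action) pair.
    value = {key: 0.0 for key in REWARDS}
    for _ in range(steps):
        percept = rng.choice(["red_tomato", "green_tomato"])  # perception
        if rng.random() < explore:                            # judgement
            action = rng.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: value[(percept, a)])
        reward = REWARDS[(percept, action)]                   # effect
        # Learning: nudge the prediction toward the observed reward.
        value[(percept, action)] += lr * (reward - value[(percept, action)])
    return value

values = run_agent()
# The trained value function now "colors" the percept: red tomatoes end up
# rated as worth eating, green ones as worth ignoring.
assert values[("red_tomato", "eat")] > values[("red_tomato", "ignore")]
```

In these terms, the `value` table plus the current percept is the candidate
for "what it feels like"; the code only shows the mechanics, of course, not
the experience.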

~~~
SomeStupidPoint
Why?

All of those could be feedback/computation systems without any experience.

Your answer only argues for why the system would have certain computational
features, not a subjective aspect.

~~~
visarga
My answer explains why there would be an emotion attached to a perception. The
subjective aspect is a result of the network being an agent that has to fend
for itself in the greater environment. The stake of the RL game is survival
itself. Each new state it finds itself in opens some possible future paths and
closes others. Some actions lead to rewards, others to losses. I think
subjectivity lies in this space of values, options, and actions. That place is
where "it feels like something" to see red.

And if you're still unconvinced, remember what we are: a collection of cells,
a protein-based dynamical system that has computing and learning abilities.
Why would it feel like something to be a protein-based computer?

~~~
SomeStupidPoint
You've missed the question: why is there a perception at all for red to seem
like that in?

~~~
ppod
What do you mean by 'perception'?

~~~
SomeStupidPoint
That there's any subjective experience, that there are things I perceive or
experience.

The question is how do I explain having a subjective experience _at all_, and
do toasters have one?

~~~
visarga
Toasters are not self-replicating systems like living things, so they don't
share the same problems. An organism has to fend for itself and reproduce in
order to exist. Perception appeared as a mechanism that allows organisms to
adapt to changing external conditions. With experience, they learned complex
behaviors through reinforcement learning. Every moment, an organism perceives
the world, judges the value potential of possible actions (instinctively),
then selects an action and executes it. This loop requires the agent to have
an evolving internal state. This is 'subjective experience': a mechanism by
which organisms behave in a more adaptive way.

------
lend000
There's no reason to believe so.

Advancements in CS, AI, biology, and general models of intelligence are slowly
bringing us to the point where consciousness is no longer a mystery to the
enlightened. One of my favorite researchers in general AI is Professor
Schmidhuber, who has pioneered some of the (currently) more practical areas of
machine learning, too:
[http://people.idsia.ch/~juergen/](http://people.idsia.ch/~juergen/)

Even when we create self-aware machines that can demonstrate generalized
intelligence, I think most people will struggle to come to terms with what
consciousness is (or more so, what it isn't). My spin on it: consciousness
isn't real; it's a necessary perceived byproduct of 'everything else.'

I believe Schmidhuber said it more elegantly, describing consciousness as a
result of data compression.
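For a rough feel of the compression idea, here is my own toy sketch (not
Schmidhuber's actual formalism): score a new observation by how many extra
compressed bytes it costs given everything seen so far. Observations that fit
an already-learned pattern cost almost nothing; surprising ones cost nearly
their full length.

```python
import zlib

def compressed_size(data: bytes) -> int:
    return len(zlib.compress(data, 9))

def compression_cost(history: bytes, new_obs: bytes) -> int:
    # Extra compressed bytes the new observation adds, in context.
    return compressed_size(history + new_obs) - compressed_size(history)

history = b"abc" * 20        # a world with a simple regularity
predictable = b"abc" * 20    # more of the same pattern
surprising = bytes(range(60))  # matches nothing seen so far

# The predictable continuation is nearly free; the surprise is not.
assert compression_cost(history, predictable) < compression_cost(history, surprising)
```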

~~~
hacker_9
It's not that this viewpoint isn't known; it's just not accepted, because it
makes no sense. Consciousness really is unexplainable by what we know of the
physical world; its constancy is just odd. When we're talking about the most
advanced organ in the history of the universe, is it really a leap to consider
that new physics would be involved?

~~~
visarga
> Consciousness really is unexplainable by what we know of the physical world;
> its constancy is just odd.

Consciousness is just you adapting to the world. You need consciousness in
order to live in the world, with all its complexity and dangers. The brain has
nothing magical about it, it is made of the same kind of atoms as anything
else. What is happening is just perception, judgement and action in a loop,
each supported by neural networks. Every moment, a full perception-judgement-
action loop plays itself. It generates a stream of experience and emotion
(which is just the value or reward intuited in certain situations and
actions). This internal stream of perceptions and judgements is consciousness.

~~~
hacker_9
Amazing that every sentence you wrote is objectively wrong. I'm bored, so
let's go through them:

1. Consciousness is just you adapting to the world

Since when did adaptation require consciousness? We're not even sure most
animals are conscious, but they adapt to their conditions just fine.

2. You need consciousness in order to live in the world, with all its
complexity and dangers.

Where does this need come from? A self driving car could navigate the world
without being conscious at all.

3. The brain has nothing magical about it, it is made of the same kind of
atoms as anything else.

Didn't say it was magical, but saying it's made of atoms moves this
conversation along by zero percent. It's like saying computers are just a
bunch of transistors, disregarding the millions of lines of code required for
it to even just take my key inputs as I type out this pointless comment, and
turn them into glyphs on the screen.

4. What is happening is just perception, judgement and action in a loop, each
supported by neural networks.

Where has this come from? We don't even know what goes on inside one neuron,
let alone the billions of networks.

5. Every moment, a full perception-judgement-action loop plays itself.

So what happens if I close my eyes and cover my ears? Now I am unable to
perceive the world around me, and break the 'loop'. It's not like I shut down.
I don't think you've thought this through.

6. It generates a stream of experience and emotion (which is just the value
or reward intuited in certain situations and actions).

Yes certain chemicals are released in the brain depending on your actions, but
of course it's all chemicals; it's a biological system.

7. This internal stream of perceptions and judgements is consciousness.

No, this is just a function of part of the brain, which we can be aware of
through consciousness.

All in all, there is no depth to your explanation at all. The fact that we
haven't got a definite blueprint for the brain really does mean it is
currently beyond our understanding. Neuroscientists have died of old age
trying to figure out the secrets of the brain, and it's because they were
looking a bit more in depth than "it's just a bunch of atoms".

~~~
lend000
> Amazing that every sentence you wrote is objectively wrong.

As I mentioned in my original post, most people, like yourself, will reject
these very non-intuitive conclusions, probably indefinitely.

Regardless, I'll point out that many of your counter-arguments to the parent
comment are wrong (not saying I agree with all of his description, either).

> A self driving car could navigate the world without being conscious at all.

How do you define conscious? 'Feeling' just like you do? I think many modern
AIs could be said to experience a different, simpler form of consciousness as
they process inputs. Granted, it's not useful unless you have an AI that is
also more generally self-aware and communicates with us. But regardless, your
hyperbolic example equated driving a car on streets with "living in the
world," which is what you were replying to (an objectively ridiculous
comparison).

> We don't even know what goes on inside one neuron, let alone the billions of
> networks.

Considering what NN-based deep learning has accomplished with relatively small
numbers of simulated neurons with relatively simple models, I think it's safe
to say that a single neuron is not as complex and mysterious as you may think.

> No, this is just a function of part of the brain, which we can be aware of
> through consciousness.

This is your idea of an objective truth? This is your opinion, which, someday,
we may be able to prove is objectively wrong.

~~~
hacker_9
> How do you define conscious? 'Feeling' just like you do? I think many modern
> AI's could be said to experience a different, simple form of consciousness
> as they process inputs.

What an odd comment. So because we performed some clever math to make some
basic neural nets, our AI is now experiencing a form of consciousness? This is
just better algorithms that are more capable of fooling you, still running on
the same microprocessor as before. Consciousness is described as a state of
constant awareness, not of performing the act of perception itself.

> I think it's safe to say that a single neuron is not as complex and
> mysterious as you may think.

Not sure if trolling or not? You should look up protein folding and protein
machines at some point; basically, there is a LOT going on at the nano scale
that we still don't fully understand. Additionally, tell any neuroscientist
that the brain's neural networks are the same as our computer version of NNs
and you'll be laughed out of the room.

And perception is a function of the brain yes, how is that an opinion? Your
arguments really make no sense.

~~~
lend000
> What an odd comment. So because we performed some clever math to make some
> basic neural nets, our AI is now experiencing a form of consciousness?

Yes, you keep rehashing this point, which makes it clear that you still (and
probably always will) believe that consciousness is something 'more' --
something mystical and unexplainable -- because it's so difficult and
non-intuitive to wrap your head around (as it is for most people).

> Not sure if trolling or not? You should look up protein folding and protein
> machines at some point, basically there is a LOT going on at the nano scale
> that we still don't fully understand. Additionally tell any neuroscientist
> that the brains neural networks are the same as our computer version of NNs
> and you'll be laughed out the room.

Who said our models are the same? You're missing the point. Re-read the
paragraph you're replying to, and look what we've accomplished with just
_dozens_ of simplified neurons. The point is that the details beneath our high
level understanding of neurons (basically just spiking and activation) are
likely not important to consciousness. Just like a high school physics student
can usefully understand the mechanics of rubber balls without fully grasping
the chemistry at the atomic level.

> And perception is a function of the brain yes, how is that an opinion? Your
> arguments really make no sense.

Here's the context, to remind you:

> Amazing that every sentence you wrote is objectively wrong.

>> 7. This internal stream of perceptions and judgements is consciousness.

> No, this is just a function of part of the brain, which we can be aware of
> through consciousness.

Just want to feel like you won an argument today? No need to turn this into a
troll battle -- I'm actually trying to explain my viewpoint to you, if you're
interested.

~~~
hacker_9
You are arguing that by performing the algorithms for perception, one is
conscious. I am arguing that algorithms alone aren't enough, and that, yes,
consciousness is still some unexplained physics that takes the results of
those perception algorithms in our brains and generates our awareness and
experience from them.

I am not trying to win any argument, but it does annoy me when people argue
with incomplete information and try to trivialize consciousness, because of
what you know about some basic neural nets in computer science. Look at this
video released last month, where scientists show a model for just 3 neurons
[1], which they think is super important to consciousness. Only 3 neurons out
of billions. There is just so much we still don't know.

[1]
[https://www.youtube.com/watch?v=5s5I-rUyDTA](https://www.youtube.com/watch?v=5s5I-rUyDTA)

~~~
lend000
> You are arguing that by performing the algorithms for perception, one is
> conscious.

Pretty close -- my argument is that for an entity to be intelligent, self-
aware, and perceptive, it must also believe itself to be conscious, in the
sense that it believes it has "experiences." Furthermore, and most
importantly, if an entity _believes_ it is conscious, it _is_ conscious.

> I am arguing that algorithms alone aren't enough, and that, yes,
> consciousness is still some unexplained physics that takes the results of
> those perception algorithms in our brains and generates our awareness and
> experience from them.

There's so much undiscovered truth about physics that it's always possible
there are important, still completely unknown properties of physics that play
specific roles in consciousness. However, that's mostly speculation (like the
article) and there's no real evidence for it, whereas it's becoming
increasingly intuitive to some AI researchers, myself included, that no such
yet-to-be-discovered 'Holy Grail of Physics' property is necessary.

Many AI experts presume that by scaling and adding complexity to models that
are pretty similar to today's AI models, we can achieve general intelligence.
I think that this logically requires the intelligent agent to believe itself
to be conscious, and therefore be just as conscious as any human, albeit in a
different way. The combination of intelligence and self-awareness is logically
incompatible with a lack of consciousness.

------
kneel
This subject has been beaten to death by a lot of smart people.

I think it's well understood how consciousness works (on a higher-order
level); the answer is just boring, and somewhat demeaning to the many who
believe that they're something bigger than a sack of nerves.

~~~
catshirt
"I think it's well understood how consciousness works"

care to expand on this at all? it completely contradicts most everything I've
heard.

saying consciousness is just a neural reaction could be right, but it's by no
means an understanding.

~~~
bitL
We don't even have a proper explanation of how neurons themselves work; there
are some processes we have no idea how they are possible (the vast number of
ions passing through membranes), as well as some local protein computations
that go against the perceptron simplification of operating on simple analog
electricity.

~~~
kneel
>We don't even have a proper explanation of how neurons themselves work; there
are some processes we have no idea how they are possible (the vast number of
ions passing through membranes), as well as some local protein computations
that go against the perceptron simplification of operating on simple analog
electricity.

I think neuron function is pretty well characterized. What about ions passing
through membranes do we not understand?

The exact details of all neuron types and protein interactions are definitely
not known, but the basic function is understood.

~~~
catshirt
just because these things can conceive consciousness does not make them
consciousness.

unless you can tell me exactly why and when in these processes consciousness
arises, we're not having the same conversation.

~~~
visarga
> just because these things can conceive consciousness does not make them
> consciousness.

I like the car example. If I have a pile of car parts, they don't make a car.
They have to be put together in a special way in order to function. What is
"car-ness" and where does it lie? In the parts, or in the configuration, or in
the whole action of driving it?

~~~
catshirt
great analogy. also does a good job pointing out the type of vagueness and
subjectivity in meaning that makes it hard to pin down consciousness.

the vagueness: which parts can you remove from a car before you wouldn't call
it a car anymore?

the subjectivity: which parts can you remove from a car before _I_ wouldn't
call it a car anymore?

bald is another funny example of emergence (though not my own). how many hairs
must one grow before they are no longer bald?

------
s_gourichon
Anyone interested in consciousness (even out of personal curiosity) is likely
to love all the work of Kevin O'Regan:
[http://nivea.psycho.univ-paris5.fr/](http://nivea.psycho.univ-paris5.fr/)
Interesting, sound, scientific.

For example, his book "Why Red Doesn't Sound Like a Bell: Explaining the Feel
of Consciousness" (2011). The author makes it available for download at
[https://dl.dropboxusercontent.com/u/869531/OReganWhyRedDraft...](https://dl.dropboxusercontent.com/u/869531/OReganWhyRedDraft.pdf)

------
oldmancoyote
For a while it was common for folks to speculate that consciousness was the
self-organizing behavior of a chaotic assemblage of electro-chemical relations
within the brain. The problem with that has been that self-organizing behavior
adds nothing new; it merely reorganizes existing behavior/properties. This
article proposes the existence of the fundamental "thing" that is organized in
consciousness: the primitive element possessing the property of awareness.

Seems good to me. In fact, it seems necessarily true.

~~~
visarga
> self-organizing behavior of a chaotic assemblage of electro-chemical
> relations within the brain

That seems about right to me. Add that the organism itself is part of a
complex environment, with all its sensations and reward signals, and that the
organism has to adapt in order to survive/exist in the first place. With these
constraints, the self-organizing behavior leads to consciousness.

------
powera
Another article that basically says "quantum and consciousness are both poorly
understood; maybe they're related" in thousands and thousands of words.

------
cool_shit
I can't stand articles like this; they bait you into reading poetry with the
promise of science. So many words, so few ideas that are actually substantive.
Consciousness-speculation articles are a dime a dozen today, and this is part
of the noise. Even worse when they ramble on for what seems like pages of
periodic content.

~~~
catshirt
I'm reminded of people talking about tripping. we don't really have the
vocabulary or knowledge to talk about certain things, and they end up sounding
wishy-washy.

but I don't think it means we can't try, or hypothesize...

~~~
M_Grey
It reminds me of Hamilton Morris talking to this guy who's been taking huge
doses of PCP for years. He gets absolutely lit, and then does "art", which is
highly meaningful to him, but... pretty much just to him. Some people never
really learn the difference between the _feeling_ of profound understanding
that can occur while dreaming or high and actual profound understanding.

~~~
catshirt
good point! I just don't want the false positives to discourage us from trying
to share our feelings just because it's difficult.

~~~
M_Grey
Sure, and I'm for limitless sharing of feelings, but in the context of, "Here
are my feelings on 'X'."

~~~
catshirt
we're on the same page. and you shine some light on OP's point, which I now
see more truth to: poetry parading as science.

------
arc_of_descent
I've been hooked on UG for more than a month now. Articles like this would
have piqued my interest earlier. Now they just bore me. I'm going back to
learning CL.

------
buhrmi
no

~~~
lutusp
a. You're conscious.

b. You're composed of matter.

c. Therefore _some_ matter is conscious.

d. What's true for some matter might be true for all matter --- or not. That's
the "hard problem" of consciousness.

~~~
bitL
For all we know, our brain can be just an interface to something else we don't
know yet. Like when nobody knew what radioactivity was: miners thought
monsters resided deep inside the Earth, illuminating caves and mines in green,
and anybody who got nearby died soon afterwards. Quantum theory spawned a lot
of scientism at the beginning of the 20th century, attributing consciousness
properties down to the atomic level, and this seems to be the case with each
new generation, falling into the same, though somewhat more refined,
scientistic trap.

Just for fun:

If we play with the idea of the Platonic Theory of Forms as a super-set of
what we can perceive with our sens-es(-ors), under the assumption that
philosophy/math is above reality, we could simply be plugged into some kind of
virtual machine with an API of sorts (undocumented, of course) that is called
by our brains when we think. Those who figured out some rare API calls on
their own, or through a secret tradition, are whom we call magicians,
conjurers, gurus, Neo, etc. Maybe words/thoughts could invoke some of those
API functions, hence magic Harry-Potter-esque words and, generally, prayers in
many religions. Similarly, what we call angels could simply be API services
with highly intelligent behavior, preferably operating outside time, providing
verifiable and expected outcomes. Demons, then, are intelligent services gone
completely wrong, messing up the rest of the intelligent services that depend
upon them, playing eternal asynchronous Byzantine generals (by e.g. trying to
upgrade all services to Windows 10 or to flat interfaces with telemetry /s).

I think I should write a dystopian cyberpunk sci-fi novel, "Services and
Daemons", about this ;-)

~~~
Eerie
>For all we know, our brain can be just an interface to something else we
don't know yet.

This is a very old but unproven hypothesis. You might have heard of it; it's
called the "soul". :-)

~~~
qb45
No, it's a novel idea called SaaS: soul as a service. The OP is probably
already making money on it while you sit here posting dismissive comments.

~~~
bitL
Yes, indeed. We need to assume the "soul" resides in the cloud. What we don't
know is the soul ID; a very secret API exists that only a few soul-hackers
were able to find, but demons causing madness were soon unleashed on them
after each intrusion was detected. So only a few chosen services know the IDs,
allowing them direct access to souls in the Great Hash Map of Souls, hence why
knowing a name means having power over a soul. Of course, the soul ID is
innately incorporated in each individual brain itself, but the encryption is
too strong even for bad services to break, unless the soul voluntarily allows
itself to be possessed by calling secret black-magic APIs. The rest of the
services need to perform long and tiring searches in the Great Array of Souls
(GAS) and match against secondary characteristics of the soul, like virtues
and vices, instead of the ID.

Of course, we found a novel meta-parallel way to access souls in the GAS,
reducing the search complexity to O(sqrt(n)), hence we make a lot of money
exposing this API externally for a fee. Do you want to make somebody lucky?
Arrange a life situation that looks like magic? Any enemies that should taste
the fear of damnation? Please visit www.soul-as-a-service.com to learn more!

