
Ask HN: Do you think machine consciousness is possible? - zoba
(Read the last bit to skip the story)

I'm planning on going to grad school to study AI because I think it is very
interesting. For a very long time it has just seemed natural to me that
computer scientists would eventually discover a way to make computers appear
as intelligent as humans. Since it hasn't been done yet, I wanted to work on
this problem. I had no thoughts of solving it, but perhaps of helping it along.

However, I recently had the scary idea that machine consciousness may not be
possible. I've thought this before, but this time it really hit me and scared
me some. Considering I'd like to devote much of my resources to the problem,
I'm now a little concerned that it may all be a waste. I'd prefer not to waste
my life on something that turns out like the phlogiston theory.

Therefore, because it may bring good discussion and for my own benefit, I'm
asking:

Do you think machine consciousness (or at least something that looks like it)
is possible? If not on current computer architecture, which "new lead" in
computation do you think will allow it?

For extra credit:
Do you think the Church-Turing thesis (anything that is computable is computable by a Turing machine) indicates that machine consciousness is possible?
======
kevinpet
If we assume that humans are conscious, then yes, it is possible for a machine
to be conscious. The only arguments categorically differentiating humans from
electronic machines are religious in nature.

I define consciousness as having an internal model of the world that includes
yourself, as well as your own thought processes (at a lower degree of
fidelity). This says how you compute things, not what you are computing, so it
is orthogonal to Church-Turing.
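
A toy sketch of that definition as a data structure (Python; the names and
contents are invented purely for illustration):

    # An agent whose world model contains a lower-fidelity model of itself.
    class Agent:
        def __init__(self):
            self.world_model = {
                "objects": ["tree", "rock", "other people"],
                # the self-model: a coarser description of the agent's own
                # state and of how it reasons, nested inside its world model
                "self": {
                    "location": "here",
                    "current_goal": "explore",
                    "how_i_decide": "pick the option with the highest estimated value",
                },
            }

        def introspect(self):
            # access to internal states: the agent can inspect its self-model
            return self.world_model["self"]

    print(Agent().introspect()["how_i_decide"])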

Whether it is achieved will depend on economic forces. I don't see much
economic value in making conscious computers, or things which seem to be along
that line. So I expect consciousness will come out of pure research (perhaps
within a corporation, like IBM), well after computers have exceeded the raw
processing power needed.

Because it will be so different from humans, a conscious machine will have to
demonstrate a significantly higher degree of consciousness than a human needs
to in order for most people to be comfortable with the term.

~~~
codexon
You are assuming that there is no innate quality of human organic compounds
and processes that differentiates us from electrical components.

This could very well turn out to be either true or false. We simply don't have
enough evidence.

And given the fact that we are discovering new properties of matter and
organic reactions all the time, there is a bias towards this being false.

~~~
Mentat_Enki
Bah....I call bullshit. This is pure anthropomorphism. Humans think they are
the shit, but in fact they are only story-telling animals (which does give us
an evolutionary advantage, incidentally. We are not limited in our information
transfer inter-generationally by genes alone.) We are limited by the same
physics as the chips we make. This innate quality you speak of is pure vapor.
Even if humans were somehow able to become mentats, we'd still be limited by
the tenets of information theory and what is computable. The fact that human
intelligence is emergent leads me to believe that machine intelligence will be
the same, albeit very different from a simian mammal's intelligence. Fish are
smarter than we are at swimming. Think ants.

~~~
nwatson
Humans are unique in the universe among all life forms and inanimate objects,
are more than the sum of their physical parts, have a connection with
something larger than the universe, and though in an insignificant corner of an
insignificant galaxy have an eternal significance.

~~~
rjurney
My peeps, it is not necessary to give negative karma for someone expressing an
honest and inoffensive opinion you don't agree with.

~~~
jodrellblank
It's akin to having a group of people sketching on a large piece of paper,
trying to build on each other's marks to create an accurate representation of
a scene, and someone comes in and scribbles all over it saying "but I see
scribbles! All pencil marks are valid! Don't be so limited!".

He/She's allowed to have such an opinion, but this discussion is trying for a
particular feel and that isn't contributing helpfully to it.

------
matthew-wegner
I look at consciousness fairly oddly:

I believe consciousness is an emergent property of a complex system; it is
simply the nature of the universe that complex systems exhibit consciousness.
I'm defining "complex system" as any system whose outputs/results affect the
inputs/possible states. If I think something, the possibility space for my
next thought is dependent upon my previous thought, and so on.
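
A minimal sketch of that feedback idea (Python; the states and transition
table are made up for illustration):

    import random

    # a system whose current output constrains which states are possible next
    transitions = {
        "idle":    ["observe", "idle"],
        "observe": ["reflect", "observe"],
        "reflect": ["idle", "observe", "reflect"],
    }

    state = "idle"
    for _ in range(6):
        # the previous output feeds back in and defines the possibility space
        state = random.choice(transitions[state])
        print(state)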

I think artificial consciousness research will progress, through computer
simulation, to the point where a "real" consciousness emerges from a
sufficiently complex simulation. The hard part will be mapping its
inputs/outputs into human-compatible form; success will probably occur by
accident at first. But when we can do this we'll be able to talk to a totally
simulated consciousness through the prism of it being another "person".

At this point more research/thought will be put into the nature of
consciousness itself, and how to connect with other-than-human
consciousnesses. We'll use the experience of bridging communication with
artificial consciousness to successfully communicate with naturally-occurring
consciousnesses associated with other complex systems (the earth, a tree, the
galaxy, etc). It sounds a little insane, but I totally think this is within
the realm of the possible in our lifetimes.

Of course, that's all based on the notion that consciousness is an emergent
property of a complex system, and not something entirely unique or bestowed by
higher powers or whatever.

~~~
netconnect
I agree that simulation is the key to a conscious AI system. If or when we ever
succeed in simulating a human mind to a close enough degree of fidelity,
it is almost a given that the system will be self-conscious.

There are some problems simulating the human brain that would also have to be
addressed even once we can create a working system, such as the AI being a bit
of a blank slate, like an infant or a coma patient.

I see the whole process as having to follow a path similar to this:

1. A breakthrough in computing power, something capable of simulating very
accurately small areas of space, this means perfectly representing
ridiculously complicated chemical reactions and some natural laws.

2. Succeeding in creating a software environment to execute these simulations
within.

3. A breakthrough in mapping an existing person, some sort of scan that
creates a mathematically provably perfect (or close enough) representation of
an area of space. Like a human's mind or possibly their entire body until
the subject of the scan can be simplified on the computer. Sort of like taking
a photo and then cropping off the body. The above simulation environment may
be what is used to provide the simulated inputs and outputs to the head, like
the CNS and cardiovascular system. Not to mention the inputs to the eyes and
other senses. Sort of like a virtual head in a jar.

4. So far we would have a conscious system, but it would be a copy of a pre-
existing being. The next step would be to somehow, ethically, re-write this
being. This would provide a learning challenge with the goal of simplifying
and modularising the human brain. Such as hacking language areas, input
nerves, the reliance on virtual blood and sustenance and most importantly the
memory.

The final product of this important stage is the most simple and easily tweak-
able simulation of the human brain that could be used by all researchers and
eventually commercial applications. If all these virtual brains are the same
or comparable, this isolates the memory as a way to load in or edit what is
essentially... people. The creepiest analogy may be the best, they will be
like swappable save game files, executing in virtual machines (the hacked
brains) that operate within another virtual machine (dare I say it, a super-
simple matrix of sorts)

5. We may never reach anywhere near this far along the process due mainly to
ethical reasons that cannot be overcome with mere ingenuity. But if we do, the
next step is compressing all this down further and further until we have the
most simple possible (perhaps provable somehow) implementation of a mind that
does not require all the layers of virtualization.

God it's easy to get caught up in this stuff. I hold this prediction on my
fingertips in hopes that any developments may blow it away so I can re-
evaluate and make a new one.

------
lacker
I wouldn't bet on machine consciousness happening in your lifetime. But
consciousness is a cool enough thing that solving 0.001% of the problem is
useful too. Machine vision, collaborative filtering, machine learning, all of
these are attacking a tiny subpart of the consciousness problem, but they're
still useful.

Don't worry that AI will turn out like phlogiston. The journey will yield its
own rewards, and plenty of partial success will also be extremely valuable.

------
Diakronik
Grad student in Cognitive Science, here. My focus is language, but
consciousness is an ongoing side interest.

The answer: Yes and no, depending on what you mean by "consciousness".

If you mean something like "access to internal states" (and maybe
reportability thereof), then yes. Arguably there are extant, albeit crude,
versions of this form of machine consciousness.

If, on the other hand, you mean something that starts to look like qualia
(i.e. "raw feels"/"what it's like"/"the hard problem"/etc., cf. the Chalmers
references already made), then no.

Of course, my "no" essentially echoes Dan Dennett's, in that I don't think
people are conscious in this way, either. I suspect a lot of our "feelings"
are internal post-hoc stories (made possible by command of a private
language) that rationalize/create causal attributions for the physiological
correlates of stress ("four Fs" situations).

That being said, I could be wrong, and finding a way to get at these
hypotheses empirically would be a genuine advance, whether they were supported
or refuted. So by all means pursue this...as someone else pointed out, it's
likely the "final" answers won't be known in your lifetime, and if it brings
you fulfillment in your lifetime, then giv'er.

As for where to apply, there are loads...some have already pointed out several
researchers (Hofstadter, Koch, etc.), so you could always apply where they
are. There's also UCSD, UArizona, Carleton University (in Canada), etc...

------
Bleys
Sounds like you should be going to the Singularity Summit to talk to other
people who devote their lives to this issue.

Nick Bostrom's Simulation Argument details the most obvious probabilistic
implications of substrate independence in consciousness:
<http://www.simulation-argument.com/>

The most blatantly obvious indicator that consciousness is substrate-
independent: We are DNA-based life forms. DNA stores information. It's program
code stored in molecules. You are the product of the code of your parents. If
for some bizarre reason we find out that we HAVE to use DNA to create other
conscious systems, we will still have the ability to do exactly that. Not
"machine" in the sense of being composed of metal, but certainly "machine" in
the sense of not being the immediate product of natural selection.

David Chalmers' work should be particularly relevant to you, and you will find
him at the Singularity Summit this year.
<http://en.wikipedia.org/wiki/David_Chalmers>

Even if you don't live to see machine consciousness as a reality, the only
other pursuits that might compare are anti-aging and intelligence enhancement
research. If you're not going to create something that can figure out how to
give you an indefinite self-contiguous narrative, you have to support its
creation or face certain death.

I'm guessing you already have your CS undergrad or will have it soon, and
you're interested in AI, so that seems the natural choice. I'd say you're
overdecided if that's what you want to study.

~~~
zoba
Thanks for pointing me to this. I had heard of the Singularity University, but
not the summit.

Hopefully they have videos posted online of the event...right now two months
rent is not available to spend on a conference, very unfortunately.

------
rabidsnail
We have no definition of consciousness, so it's impossible to say whether
machine consciousness is possible, or whether we have it already.

~~~
chrischen
I think the definition of consciousness is implied as whatever we humans are
experiencing right now. So for a computer to truly realize its own
existence, it would have to be modeled after ourselves.

~~~
joeyo
The problem is you have to believe what the computer tells you when it says it
has consciousness.

(In the same way that I have to believe you when you say you have
consciousness).

------
codexon
No one knows what human consciousness really is or if it even applies to other
animals.

Having said that, consciousness is not a requirement for intelligence. It
would be interesting, and plausible enough, for self-improving AI as smart as a
human to be developed without addressing the question of consciousness.

For extra credit: No, there is not enough evidence.

Here is my opinion on this matter: It is possible to simulate the universe,
but it is not possible to be the universe.

Simulation is different from being, just as predicting the weather is
different from manipulating the weather. And simulating consciousness on a
computer is different from being conscious.

~~~
coderdude
Consciousness isn't a unique body in our universe; it's a state (so far as we
can guess).

------
mlLK
I'm certainly not a grad student, but I wouldn't drop my pretty dime on graduate
school studying AI. If anything, please not only research the current state of
the field of AI but also research its history and the scientists working in
the field right now.

I wrote a paper on AI a couple of summers ago and, as crummy, arrogant, short-
sighted, and inconclusive as it reads, if you happen to skim it, I did come
away learning this... Turing was one of the few minds that was actually on to
something; his vision of the machine and his idealistic tone read more like
those of a SciFi writer. Here is a most insightful re-paraphrasing of a Turing
abstract (methinks it was his first major publication):

 _I propose to consider the question, 'Can machines think?'[8] As Turing
highlighted, the traditional approach to such a question is to start with
definitions, defining both the terms machine and intelligence. Nevertheless,
Turing chose not to do so. Instead he replaced the question with a new
question, "which is closely related to it and is expressed in relatively
unambiguous words".[8] In essence, Turing proposed to change the question from
"Do machines think?" into "Can machines do what we (as thinking entities) can
do?"[9] The advantage of the new question, Turing argued, was that it "drew a
fairly sharp line between the physical and intellectual capacities of a
man."[10]_

[my crummy paper: <http://www.scribd.com/doc/19590360/AI>]

~~~
Locke1689
I must protest your lack of LaTeX/TeX use in your crummy paper. Knuth's line-
breaking algorithm has made me forever hate Word's aesthetics.

~~~
mlLK
I'll join your protest in my paper's _lack of_ LaTeX; although I must admit
that I've never felt or even considered anything I've ever _published_ (for
someone of higher academic seniority) a serious scientific contribution.

Nevertheless, shame on you for introducing the two following ideas in a
complete sentence: _Knuth's line-breaking algorithm_ and _Word's
aesthetics_. o_O As soon as my words are worth it, I swear to you that I will
start investing my efforts into formatting my thoughts in a language as
serious and beautiful as LaTeX. Until then Word is a wonderful canvas for
finger-painting.

~~~
Locke1689
Can't argue with that. Ever since freshman year of college I've just sworn off
writing a paper in anything but LaTeX. Rather simple after you get used to it
and a PDF is nice and portable too. Of course, I also tend to use a lot of
math, especially in my theoretical computer science pieces, at which Word is
just absolutely lousy. Anyway, just passing along my thoughts. :)

------
7iv3
What? Conscious machines already exist. Now it's only a matter of copying them
into different material. And maybe learning about consciousness in the process.

I mean, we know almost everything about the low level - neurons. They are
relatively easy. Signal goes in, signal goes out. So now, even if we don't
understand this whole high-level emergent process, we can still copy it. It's
like having the assembler code of some big and highly complicated algorithm -
even if we don't understand it, we can still rewrite it for a different machine
and it will work.

There are about 100 billion neurons and about 500 trillion connections in
the brain of an adult, so you can take Moore's law and estimate, not if, but
when it will be possible to brute-force the brain. I say 2025.
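
A back-of-envelope version of that estimate (Python; every number is an
assumption, and changing them moves the answer by decades - with these
particular guesses it lands closer to the 2040s than 2025):

    SYNAPSES = 500e12            # ~500 trillion connections (from above)
    FIRING_RATE_HZ = 100         # assumed average update rate per connection
    OPS_PER_UPDATE = 10          # assumed ops to model one synaptic event

    ops_needed = SYNAPSES * FIRING_RATE_HZ * OPS_PER_UPDATE   # ~5e17 ops/s

    ops_available = 1e12         # assumed: ~1 TFLOPS for a 2009-era GPU
    doubling_period_years = 2.0  # Moore's-law-style doubling assumption

    year = 2009
    while ops_available < ops_needed:
        ops_available *= 2
        year += doubling_period_years

    print(f"Brute-force brain simulation around {year:.0f}")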

~~~
coffeeaddicted
Although I basically agree with you, I don't share your certainty. I can think
of one scenario which would prevent computers from gaining consciousness even
if we are just machines. The basic premise for computers becoming conscious is
that consciousness is either created within our brains, or that it has no
intention of its own outside our brains and will therefore not differentiate
between human brains and computer brains.

But that is not necessarily a given. Our brains are very good at reflecting
our environment. So maybe consciousness is not something our brain came up
with but just something it reflects from the inputs it gets. Just as your
brain doesn't have to be blue to see blue, but only creates a representation
of the color which is outside. So there could be a bigger consciousness in our
environment and all we have is an internal reflection of that created by the
inputs we receive. But now that larger consciousness might not be without
intent - it might simply refuse to show itself to computers, so they wouldn't
reflect it even if they had the basic ability to do so. I know this
sounds rather esoteric and I don't really believe it myself, but something that
seems to work like that is supported by so many reports of personal
experiences that I wouldn't yet completely disregard the possibility.

So computer consciousness will probably be possible, but could fail if
consciousness itself turns out to be something with an agenda of its own.

~~~
7iv3
Of course, I was making an assumption that humans are conscious. Even if
consciousness is not what we believe, even if it's completely deterministic.

I mean, if something exists, we can copy it. Even if humans only mirror
consciousness, we can copy the mirror.

Otherwise we're talking about theology, and, if so, I won't be part of the
debate. Not my field.

~~~
coffeeaddicted
AI isn't about copying the thing but about copying the information processes.
So a 1:1 copy isn't AI. Also, if you frame your assumptions so that AI must
obviously work, then, well - it certainly must work and there is no way that it
can't. Doh.

So far we haven't nailed down consciousness, and until we have I try to
keep some alternative theories in mind. Especially if the alternative
theories correspond rather well with many user reports. That's not because I
believe in magic or something like that, but rather is influenced by working
long enough with virtual worlds to be occasionally irritated by how much easier
it is sometimes to put the intelligence in the world instead of putting it in
the bot and making the bots just reactive. The user watching them won't see the
difference; to him it looks like intelligent bots. And unless a bot registers
with the world it doesn't even matter if it is an identical copy - it won't do
much (just to mention that identical copies are no guarantee of the same
behaviour as long as there are external dependencies which must be met).

And there was a recent article on ycombinator about anaesthesia which I also
found interesting. Basically it seems that unlike sleep this is a way to
completely disable consciousness. Like switching it off. And (the wished-for)
side-effect is that the body's nerves no longer trigger pain. But the brain
certainly still works. So yeah, crazy - but the only known way of completely
disabling all inputs without disabling internal processing at the same time
disables consciousness completely. And yes - I'm aware that we'll probably find
a better explanation for that any day now.

It's a fringe theory and I realize that I went even one step further in my post
above. But still, I don't think I'm in theology territory with that yet.
The fact that something so basic that everyone experiences it has evaded a good
explanation for so long gives me enough reason to keep some fringe theories in
mind. That's why I agree with your post - but don't share your certainty. The
last few times we humans got it really wrong in science (that sun-earth
rotation thing and that evolution stuff) we got it wrong because we put
ourselves so much in the centre that we ignored alternatives.

~~~
7iv3
Well, if you put it that way, maybe I am too certain. I'm aware that there are
still a lot of things that we don't know or even have a single clue about,
that we can gain new data and that the equation can change.

But on the other hand, agnosticism is kinda a lack of balls ^^ What you stand
for determines what you do. So clearly it's better to stand for something, even
if you sometimes get it wrong.

------
jacquesm
There have been many threads recently about this exact theme,
[http://www.google.com/search?q=site%3Anews.ycombinator.com+a...](http://www.google.com/search?q=site%3Anews.ycombinator.com+artificial+intelligence)
should turn up some results.

My personal take on this is that I'm really not sure.

There are some pretty clever people here that are sure it is possible, 20
years or less.

There are others (of which I'm one) that think it may be possible but either
devilishly hard compared to what has been achieved to date or beyond our
abilities, figure at least 20 years, probably much more, if ever.

And then there are those that think that it is impossible.

I'm an absolute nobody when it comes to stuff like this but it interests me
greatly. When I was 15 or so I envisioned a world about 2 decades away where
computers could be taught. We're 30 years down the line from that point and
we're still programming computers more or less the same as back then.

But that does not mean that things can't change overnight, and who knows,
maybe you're just the guy for the ticket and you will be the one to crack this
nut.

------
lux
No one's mentioned On Intelligence by Jeff Hawkins yet... I'm surprised. Check
it out here:

<http://www.onintelligence.org/>

Great read on exactly this topic, and one of the more plausible paths to
actually achieving machine intelligence. But it's not through the typical path
followed by most AI research up until recently. He argues you need to look at
how the neocortex actually works, how it communicates with the senses, stores
data and learns patterns, in order to create anything artificial that displays
intelligence. I'm inclined to agree.

Our brains may seem to map to computer-like functions reasonably well (short &
long-term memory, processing, input/output, etc), but there are key
differences. Hawkins argues that it's all based on input and learning to
recognize patterns within that input. A few interesting points I remember:

- More feedback goes back to the senses and/or higher levels of the cortex
than data is sent in. This seems to imply that the lower levels are
recognizing larger patterns and sending feedback to help correct or verify
against higher levels.

- There's no difference between input from the ear, eye or hand. It's all
just patterns at the cortex level. In fact, the output is the same as the
input. The feedback process actually helps learn how to utilize our body, and
as an extension of that, tools.

- A key element to the patterns in the real world is time. Everything occurs
over time, so the patterns almost come into the cortex as a sort of melody it
interprets.

- Memory is imperfect, because we don't need it to be in order to learn. We
remember things and draw our conscious attention to them when they don't fit
the pattern we're expecting (see the sketch below). At that point, we're
attuned to learn a new pattern or determine how to react to a missed pattern.

These are very different from the components we use in computers, and how we
use them. The cortex is almost like a universal biological learning machine.
Differences in intelligence between mammals can be attributed to the size of
the cortex, i.e., the amount of processing available. Interesting stuff!
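
A minimal sketch of the predict-and-notice-surprises idea (Python; the toy
sequence and the first-order predictor are illustrative assumptions, not
Hawkins' actual algorithm):

    from collections import defaultdict

    # learn simple transitions between symbols arriving over time
    memory = defaultdict(lambda: defaultdict(int))

    def learn(prev, curr):
        memory[prev][curr] += 1

    def predict(prev):
        options = memory[prev]
        return max(options, key=options.get) if options else None

    stream = list("abcabcabcabxabc")   # the 'x' breaks the learned pattern
    prev = None
    for symbol in stream:
        if prev is not None:
            expected = predict(prev)
            if expected is not None and expected != symbol:
                # attention is drawn only when the prediction fails
                print(f"surprise: expected {expected!r}, got {symbol!r}")
            learn(prev, symbol)
        prev = symbol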

------
dbul
Where did you go to school? It sounds very much like you have a symbolic
background. There are many approaches and people are attacking them from
different angles. Here are just a couple of resources worth looking at off the
top of my head:

Christof Koch at CalTech (hi, virgil) <http://www.klab.caltech.edu/~koch/>

Larry Yaeger & John Beggs
<http://www.indiana.edu/~rcapub/v30n2/mindmade.shtml>

Of course, Douglas Hofstadter <http://www.cogsci.indiana.edu/>

Koch, Churchland, etc. speak on consciousness:
<http://mitworld.mit.edu/video/342>

~~~
zoba
I am at NC State (graduating in May), and my research adviser does have a
symbolic background (I'm pretty sure)...very good read on your part!

Thanks very much for the links. I am stressing over where to apply, hopefully
these will help some.

------
drcode
I am dead certain that it would NOT be possible to achieve consciousness in a
computer program.

On the other hand, I know if I ever had to debate this question with Daniel
Dennett or Eliezer Yudkowsky or any other capable person who believes there's
nothing special about "consciousness" I would lose the debate.

Attributing a special meaning to consciousness, on the surface, is just as
illogical as believing in religion - neither seems to have a valid defense, as
far as I can tell. Therefore, I find it disquieting that I DON'T believe in
religion but DO feel so certain that computers can't be conscious.

Once you figure out the reason for this apparent inconsistency in my belief
system, please let me know :)

~~~
dejb
OK here's an argument for why you could be right.

Whatever it is that distinguishes consciousness from what computers can
possibly do is also the thing that makes you realise that computers can't be
conscious. If you could logically write the reason down to 'prove it', then it
would also be something that you could implement in a computer. So the reason,
whatever it is, has to lie somewhat outside of specifiable logic.

Just in case this is somehow original, I'm designating this 'thing' as a
'dejb' and calling the whole thing "dejb's theory/proposition/whatever",
although it's actually probably just a restatement of Gödel's incompleteness
theorem. Also, I actually believe that computers can be conscious.

~~~
drcode
I like dejb's theory! I'll have to give it some thought.

------
petesalty
Of course it's possible; everything you can imagine is possible, given enough
time and resources. But is it probable?

But I think the question you should be asking yourself, if you're wondering
how to spend your life, isn't whether machine consciousness is probable,
possible, whatever. You should be asking: what do I want to do with MY
consciousness? If
you want to work on these problems, then do it. Doesn't matter if it happens
in your lifetime or not. If you enjoy it then give it a try. Worst case is you
spend your time doing what you enjoy. Best case is... well, machine
consciousness I suppose.

Besides, you should always be thinking big. The person I admire the most has a
small metal plaque that sits on her desk. It says "what would you attempt if
you knew you couldn't fail". I happen to think this is the most powerful
sentiment I've ever encountered.

------
warewolfe
Yes, it is possible. But a more important question would be "What are the
benefits from creating an artificially conscious(A.C.) system?" There are a
myriad of uses for smarter computer systems that can problem solve with
minimal guidance from humans. But what practical use is there for A.C?

    
    
How about re-phrasing the original question into "Do you think it is possible
to genetically enhance and selectively breed apes into a race of human-level+
super-apes?" Sure, it's possible, but why do it?
    

If you are trying to create a controllable human-like intelligence, then
ethically it is the same as creating a slave race, and practically it is just
overkill for any real-world use. And if you are trying to create a human-like
intelligence with free-will, then you are creating a competitor/replacement.

------
siol
Hey guys, maybe human consciousness is nothing but an "illusion" provided
by the matching of information between what we perceive through our senses and
what we have stored in our brains? Of course, that "matching" process is the
big problem to crack. Could we hypothesize that when babies are born they
aren't aware because they don't have a 'minimum threshold' of information in
their brains that is required to enable the 'match' being produced by their
input senses? Or, at the other end of the spectrum, say patients with advanced
Alzheimer's disease lose their self-awareness because their memories are
destroyed and the 'match' needed to trigger the 'illusion' of consciousness is
disrupted?

------
cool-RR
As a meta-comment, notice that threads like these usually result in many
people posting lengthy first-level comments, and not doing much replying and
discussing. (Compared to other HN threads, of course.)

------
run_zeno_run
More helpful would be to look up the Church-Turing- _Deutsch_ principle, aka
the physical Turing thesis or strong Turing thesis. It goes further, proposing
that everything in the universe, including the universe itself, is
computable. If you hold to this principle, then machine consciousness is
obviously feasible.

Another way of asking your question is: why _wouldn't_ machine consciousness
be possible? Why would there be some special sauce in the brain that is off
limits to being understood and/or modeled?

And lastly, to address your worry of wasting your life: there are
many, many applications of AI/Cog.Sci. that would improve the human condition
immensely without needing fully 'conscious' artificial machines. Also, the
main reason why I in particular am dedicated to AI/Cog.Sci (and really the
best reason for devoting yourself to anything) is because it is the most
damned interesting thing I've ever been exposed to, IMHO.

------
extension
Imagine a scientifically minded human being from 500 years ago encountering a
present day computer, complete with software, internet access, the whole
bundle. If asked to speculate on how the computer worked, being a person of
science, they would have to admit that they didn't know. But pressed to form
some sort of model of its inner workings, they would likely come up with some
fairly simple explanation centered around one all-important aspect or another:
a single magical power of some sort that is the basis for everything
mysterious about the computer. They would be very unlikely to imagine that
countless layers and dimensions of technology needed to come together to make
the computer possible.

This tendency towards oversimplifying the unknown seems to be fundamental to
the way we think and it runs through the history of human speculation.
Consistently, our beautifully unified theories about gods and ether and magic
are replaced by the messy, complicated realities of physics, math and biology.

One of these beautiful unified theories which we are having a particularly
hard time letting go of is the idea of "consciousness", alleged to be a single
remarkable quality of some sort which is both _essential_ and _unique_ to the
mind. That vague description is as close as you will come to a consensus on
the definition of consciousness, and it still won't please everybody. Despite
there being no agreement on what the word actually means, nearly everyone is
nonetheless quite sure that whatever it is _does_ exist and needs explaining.

In the time when the brain was utterly baffling, before we knew about neurons
or brain anatomy, the model of the mind based around consciousness was at
least reasonable. But now we know essentially how neurons work, how they can
grow into complex useful systems, and the _staggering_ quantity of them that
make up a single mind. We have managed to isolate many aspects of human
thought to particular areas of the brain. We've observed people functioning
without certain fundamental faculties when parts of their brain are damaged,
_faculties which we would have intuitively considered indivisible from the
mind as a whole_ , but which we are now forced to consider handy peripherals
which we could do without, if need be.

As the gap between our intuitive understanding of the mind and our scientific
understanding grows ever smaller, as we explain one faculty after another,
cleaving them from the plausible kernel of the mind, and as the operative
definition of "consciousness" continues to vary wildly from one navel gazing
philosophy major to another, sharing nothing in common but the spelling, its
career as a thought provoking topic of conversation nears its end, to be
followed by its retirement to a quaint artifact of our ignorant past.

------
Liron
First of all, we've seen that low-level physical models of our universe don't
gain explanatory power by trying to take "consciousness" as an ontological
primitive. It's clear that consciousness is a high-level property of patterns
of tinier things.

So the algorithm you use to decide that a clump of neurons satisfies the
"consciousness" predicate will almost certainly work by observing high-level
properties of the neuron configuration. Since the consciousness predicate
abstracts away low-level details, it's hard to see why neurons should be
better than other computational substrates at forming predicate-satisfying
configurations.

Reductionists can't be neuro-chauvinists.

------
dkersten
Ignoring your question: AI isn't as much about machine consciousness and
human-like intelligence as people like to think. There's a lot of very good AI
used every day: classifiers, pattern recognition, Google's PageRank,
recommendation systems, optimization algorithms and so on.

On to your question: Do I think something that looks like consciousness is
possible? Yes. How soon depends on how conscious it looks and how much of it is
faked versus real.

I think we're still a bit of a way away, but it will happen. Do I think the
machine will be actually conscious like humans? No, I don't. Appearance and
actually being are different things. A machine can be programmed to appear
conscious by following decision making patterns similar to humans, by
incorporating emotion (perhaps as a weighting system to deal with certain
scenarios, or maybe not include emotion at all and go for pure practical
efficiency..), self preservation, priorities and so on. But, this won't make
them _conscious like a human_. I think there's more than a biological computer
in us humans. Religious people like to call it a _soul_ ; I prefer to use that
term to refer to some overall control system which is outside of the
biological control systems: if the brain and nervous system is our hardware
and our thought processes our software, then the soul is our firmware. I don't
think we will ever completely understand what makes us conscious, for this
reason, and therefore cannot ever make truly conscious machines, BUT I believe
we will, eventually, get very close to it.

------
nev
Try this thought experiment. Imagine there was a replicator that could take an
exact copy of your current atom states and replicate them somewhere else while
keeping you perfectly intact. Doesn't have to be physical - could be
replicated by a computer program.

Now here's the thing - you would still feel like you, looking out of your
eyes. The other versions of you would be distinctly separate from you - not you.

That bit that makes you feel you are you - that's what some would call
consciousness and others would call a soul.

~~~
coffeeaddicted
Imagine your replica is put in a world without oxygen. It would very quickly
feel rather different from you despite being identical. You assume
consciousness is an inside state which can be copied. But it could as well be
a process in constant need of an external influence which might refuse to
connect to your replica. Like the difference between standing and flowing
water - watched only in one instant they might seem identical. And while the
solution would certainly be to copy the world as well, as long as we can't
define consciousness we can't really say for sure how much we would have to
replicate.

------
byoung2
_which "new lead" in computation do you think will allow it?_

I don't know if hardware is holding back progress in AI, especially with
distributed computing; it is more likely a question of software and
programming. I think in order for a computer to exhibit a reasonable facsimile
of human intelligence, it will have to do more than just run programs written
by humans. It would have to have the ability to write and rewrite its own
code.
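
A toy sketch of what "rewriting its own code" could look like at the smallest
scale (Python; the rule table and feedback signal are invented for
illustration, not a claim about how real self-modifying AI would work):

    # a program that revises the rules it runs on, based on feedback
    rules = {"greet": "hello", "part": "goodbye"}

    def respond(intent):
        return rules.get(intent, "I don't know that one")

    def feedback(intent, better_response):
        # an external signal rewrites the program's own rule
        rules[intent] = better_response

    print(respond("greet"))          # -> hello
    feedback("greet", "hi there")
    print(respond("greet"))          # -> hi there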

~~~
reynolds
Are humans at a place where we can write and rewrite our own DNA?

~~~
byoung2
The computer equivalent of human DNA would have more to do with hardware than
code. When I talk about computers rewriting code, I'm talking more about
rewriting patterns of thought processing. While humans can't rewrite DNA on
the fly, we can definitely teach ourselves new ways of approaching problems.

~~~
dkersten
An FPGA of sorts?

------
DanielBMarkham
AI will be achieved -- over a period of decades and by brute force. At the
end, I can't say whether you'll have machine consciousness or not, but you'll
have something that is indistinguishable from it. And that's good enough.

What we call consciousness is probably very closely tied with having a
physical body perceiving things in the outside world. So I think for a long
time there will be differences between machine and man, but machines will
eventually win out and become vastly superior to people. I just wouldn't count
on it in your lifetime.

The really interesting question is: if we can quantify consciousness, what
happens when we create something that's more aware and conscious than we are?
Would we be considered sentient by a being that thinks a thousand times faster
and in hundreds of thought-trains, lives for a million years, and can converse
millions of ways simultaneously at bandwidths millions of times greater than
speech?

We would be like insects to something like that, and it's not such a far-
fetched idea or that far off.

~~~
frig
Perhaps, but (to the best of our apparent knowledge) the difference between
"doggy brain" and "human brain" isn't really that the "human brain" is orders
of magnitude faster at thinking "doggy thoughts"; it appears to be at least as
much a qualitative difference in _what_ it does as in the amount it can do in
a given unit time...and the more so if you start comparing, say, "gecko brain"
to "human brain".

Agree with your general thrust but it's the qualitative change that's the more
interesting, as it's perhaps unknowable (in the way that your dog Fido will
never understand most of your thoughts, no matter how patiently you explain
them).

------
reg4c
Seeing how consciousness is defined as being aware of oneself, machines will
never be conscious. Since we program machines and give them everything they
know, every last algorithm, I don't think that we will ever be able to code a
consciousness algorithm.

Read more about the weak and strong AI theories to understand what I mean.

------
gruseom
Let's address the easy part:

 _Do you think the Church-Turing thesis (anything that is computable is
computable by a Turing machine) indicates that machine consciousness is
possible?_

Only if you assume that consciousness is a computation, which is assuming
everything.

I normally try to resist this topic, but what you're saying here tugs at my
heart-strings:

 _I'm now a little concerned that it may all be a waste. I'd prefer not to
waste my life on something that turns out like the phlogiston theory._

Consider the total failure of algorithmic approaches to even begin, even
pathetically, to replicate anything recognizable as consciousness. Were the
people working on it dumber than you?

Try to find some way to control for the geek fantasy factor, in yourself and
others, before deciding what to do.

(By the way, now I'm curious: what was it that led you to "the scary idea that
machine consciousness may not be possible"?)

~~~
zoba
I've thought of it several times, but it wasn't until recently that it
actually kind of scared me. It was just a surprise thought that arose as I was
once again thinking about the topic.

It probably scared me this time and not others because I'm very stressed over
the GRE, and grad school applications (namely: where the heck should I apply),
and how me telling grad schools "Oh, I'd like to study machine consciousness"
will go over. I could be wrong, but I'm concerned they won't take me
seriously...so I've been trying to think of something that sounds more
acceptable.

~~~
joeyo
My advice, if you are considering a PhD, is to pick your school based on your
potential _research advisor_ more than the department or the school in
general. Contact him or her before you apply and tell them your plans of study
and research interests and then go from there.

------
yason
It is inevitably a question of belief.

If we want to stick to what we seem to know for sure physically and
scientifically, then I suppose we can consider humans equal to computers
albeit much more parallel and complex. In other words, if we reduce a human to
mere electric signals in the nervous system and accept that ultimately the
whole of human life derives from that alone, then we can eventually build a
similar
machine ourselves.

If we want to think that a human is merely a physical projection of some
greater energy, be it the whole universe, gods -- or a single $GOD, if you
prefer -- then we definitely can't produce consciousness ourselves. Instead,
we would have to wait for, or somehow invite, the greater energy or
consciousness itself to find and take presence in the form of a machine that
passes electric signals around.

------
ilitirit
Depending on how you look at it, humans _are_ machines.

~~~
joubert
From what perspective?

~~~
psawaya
Materialism

~~~
joubert
Materialism doesn't imply that organisms are machines.

------
naveensundar
Consciousness is a feature of our external world/universe. We should
formulate theories and test them experimentally. We should describe it in terms
of basic components. Unfortunately, this is not how AI has been treating
consciousness. AI treats consciousness as an art/engineering problem rather
than rigorous science. But there is hope: Journal of Experimental and
Theoretical AI <http://www.tandf.co.uk/journals/tf/0952813X.html>

Current theories say consciousness is caused by a specific mode of
representation/computation. Like good scientists and rational thinkers we
should submit everything to rigorous testing/proof. Please, let us not take
things at face value.

[Or we can take the easy route and live in a bubble :) ]

------
modeless
I've never heard a good reason why it wouldn't be possible. But even supposing
that it isn't, AI research is not wasted. It is clear that products related to
AI (natural language processing, computer vision, game opponents, etc) can be
exceedingly useful even though they're not sufficient to produce a conscious
AI. So don't worry!

My personal belief is that the reason progress in AI has been so slow is that
AI optimists vastly underestimated the computing power needed to produce good
AI, and when our computers are finally fast enough (20-30 years perhaps, and
the most important metric is probably not FLOPS but memory bandwidth) AI will
actually not be that hard to achieve.
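
A rough sketch of the memory-bandwidth point (Python; all figures are
assumptions chosen only to show the shape of the problem):

    synapses = 500e12            # connection count cited elsewhere in the thread
    bytes_per_synapse = 4        # assumed: one 32-bit weight per connection
    updates_per_second = 100     # assumed average update rate

    bytes_per_simulated_second = synapses * bytes_per_synapse * updates_per_second

    bandwidth = 100e9            # assumed: ~100 GB/s of memory bandwidth
    wall_clock = bytes_per_simulated_second / bandwidth

    print(f"{wall_clock:,.0f} wall-clock seconds per simulated second")
    # with these numbers, merely streaming the weights takes ~2,000,000 seconds,
    # so moving data dominates long before the arithmetic does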

------
njharman
No, because consciousness does not exist. Although, I am sure an artefact can
be constructed that believes itself to be conscious and capable of convincing
others it is conscious. Which is, more or less, identical to having
consciousness.

~~~
teilo
Ah. A religious assertion.

------
teeja
I think it's more interesting to ask whether computer _emotions_ are possible.
(Not merely the emulation of emotions.)

A simple definition of consciousness is the difference between being asleep
(mostly oblivious to self) and awake (aware of self, environment, and
relationship). Even then, a machine can negotiate terrain. But add emotions
and you get potential for creativity (non-programmed, original output) which
is a testable measure of sentience.

Consciousness can be faked. Original work can't - certainly not original work
that 'touches people's souls'. Somewhere in there is a machine that earns my
ungrudging respect. It's not a formula: it's a feeling.

------
lleger
In short: yes.

I think it's painfully clear that sometime in the coming years humanity will
reach the pinnacle of its scientific achievement with the advent of an
artificially intelligent machine: one that is able to think and reason and is
self-aware. Technology is moving at such a rapid pace and in the right
direction that this is just the next logical step.

In order for this to occur, however, significant advances must be made in
fields outside of technology; e.g. quantum computing will probably be a huge
stepping stone, and for that to come to fruition we must first fully prove
quantum mechanics.

~~~
ericlavigne
"In order for this to occur, however, significant advances must be made in
fields outside of technology; e.g. quantum computing will probably be a huge
stepping stone, and for that to come to fruition we must first fully prove
quantum mechanics."

I recently heard about the term "yak shaving". This seems to be a good
example.

------
bpourriahi
You must clarify consciousness.

Perceiving and conceiving. That's all that we are truly capable of. Motivated
by survival, geared towards good and away from bad. Doesn't seem impossible if
that's what you consider consciousness. I think a computer will easily be able
to perceive any kind of sense and conceive any kind of idea. But
you also have to consider, why would it? Why would it speak if it didn't need
to? It doesn't require anything that humans need. It would be pure
consciousness - a zen master. You could only communicate with it if it had
some basic drive/force.

------
Ixiaus
A point of clarification should be made here... Are you speaking of machine
consciousness behaving similar to our own?

Human consciousness is but one form, in my opinion. We are a complicated
biological machine seeking the fulfillment of our existence; how is that any
different than a machine seeking the fulfillment of its existence?

I believe the money is in self-evolving circuits and programming; allowing the
machine to mold what defines its existence based on external parameters - and
over time, based on internal parameters (will to change).
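
A tiny sketch of the self-evolving idea (Python; the fitness function,
population size and mutation rate are all invented for illustration):

    import random

    def fitness(params):
        # stands in for the "external parameters" shaping the machine
        return -sum((p - 0.7) ** 2 for p in params)

    population = [[random.random() for _ in range(3)] for _ in range(20)]

    for generation in range(50):
        population.sort(key=fitness, reverse=True)
        parents = population[:5]                      # keep the best
        population = [
            [p + random.gauss(0, 0.05) for p in random.choice(parents)]
            for _ in range(20)
        ]

    best = max(population, key=fitness)
    print([round(p, 2) for p in best])   # values drift toward 0.7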

------
lee
I believe it's possible and will eventually happen.

I imagine through some randomness, or directed experiment, a piece of software
will exist which will self-replicate and mutate... eventually evolving to some
point of self-awareness.

Most likely we won't have deliberately created the AI. It'll just be through
some random chance that it'll happen, much like how some random collisions of
molecules formed "life" a few billion years ago.

Much like how life was formed on Earth, given enough time and material, the
emergence of AI is bound to happen.

------
bpourriahi
You must have a simple answer for what consciousness is. What awareness is.
What intelligence is. These questions are the most important part of the whole
problem. That's why it is much more important to understand them than to try to
learn something that doesn't exist. I wouldn't expect much in terms of trying
to work on this problem with a team unless you were at a certain college that
was serious about it.

------
bendtheblock
_The question of whether Machines Can Think... is about as relevant as the
question of whether Submarines Can Swim._ \- Edsger Dijkstra

------
robryan
I think it is possible that some areas are going about the problem the wrong
way. I don't think attempting to simulate the brain as it works is a viable
option with current hardware.

There is also the problem that we are using the thing we are trying to
simulate to work out how to simulate it. From our perspective the problem
could be almost impossible.

------
beza1e1
If consciousness is computable then it is also computable by a Turing machine
and a computer. Consciousness is computable, if there is a formal model. The
core question is thus: Can we find a formal model of consciousness (aka
intelligence)?

So far AI research has only managed to create sub-models for special tasks
(ELIZA, Deep Blue, Google, ...).

------
leif
EC: I don't think there's a connection between computation and consciousness.
Examine other species for the line of consciousness and the line of
computation. Does it count if something computes without intent, like a spider
approximating a minimum spanning tree? I'm not sure, but I don't see it yet.

------
joshu
If you could simulate a brain at the atomic level, with sufficient precision,
would it think?

I wonder if the mind arises in the brain because of quantum effects?

Anyway - modern AI is not generally about "strong AI" (attempting to make a
humanlike intelligence) but more about "weak AI" (attempting to make things
that solve problems.)

------
fuzzmeister
As soon as we fully understand the brain, and have the computing power to
model it perfectly, we will have machine consciousness. Both are
extraordinarily lofty goals, and machine consciousness may well be achieved
through other avenues, but neither seems so out of reach as to be impossible.

------
fburnaby
It would be just as much of an academic service for you to help find out
that AI _can't_ work as it would be if you found out it can. The problem is
inherently interesting and worth pursuing, regardless. Phlogiston theory
brought on oxygen theory and the rest of chemistry after a while.

------
Tichy
It's not all or nothing: even if true AI were not achieved in your
lifetime, you could still do loads of useful things with the stuff you
learn.

As for the question, I am sure that it is possible (except I take issue with
the word "consciousness" - what is it supposed to mean?).

------
naveensundar
For those people who claim conscious machines exist read

"Offer: One Billion Dollars for a Conscious Robot; If You’re Honest, You Must
Decline" <http://www.eripsa.org/files/Bringsjord%20Robot.pdf>

:)

~~~
Tichy
Waste of time, sorry. Didn't make it to the end, but does he say anything else
than "there is no definition of consciousness"?

~~~
naveensundar
Is there a definition of consciousness? The gist is that people are dishonest
when they claim they have a conscious program or robot. The notion that a
program causes consciousness is not well defended. Suppose I have a program X
which is conscious, and let it be written down. Does it become conscious if a
billion people execute it in parallel? It is not clear what is conscious in
this case. (Searle's argument.)

~~~
Tichy
It's just not very interesting. It would be the same to say "I give you 1
billion dollars if you write me a program that does
hhfdsodifiuuiuuuttueeertz", without saying what "hhfdsodifiuuiuuuttueeertz"
is. Saying "no machine can be conscious" is equivalent to saying "no machine
is hhfdsodifiuuiuuuttueeertz".

I am not even sure it would be dishonest to take the money. It is kind of
insulting to give such a task, so maybe it would serve the sponsor right.
After all, the sponsor would be unable to prove that the program is not
conscious.

Suppose I submit a program that does nothing but print "the weather is nice"
on the screen. Who is to say the machine is not conscious? It could be all
sorts of self-aware, but for personal reasons decide to communicate nothing
but "the weather is nice" to the outside world.

~~~
naveensundar
>It's just not very interesting. It would be the same to say "I give you 1
billion dollars if you write me a program that does
hhfdsodifiuuiuuuttueeertz", without saying what "hhfdsodifiuuiuuuttueeertz"
is. Saying "no machine can be conscious" is equivalent to saying "no machine
is hhfdsodifiuuiuuuttueeertz".

Then, saying your weather-printing machine is conscious is equivalent to
saying that it is hhfdsodifiuuiuuuttueeertz, which of course means nothing. But
a lay person will really think that machine is conscious.

If you say your machine is conscious, then you must show it is conscious
rather than just claim it is conscious. I can print "I travel faster than
light" and claim that is a proof of faster than light travel to a lay person
who will believe me. That is what is happening today in consciousness and AI
research.

Just because you can't directly see that the earth is round does not mean it is
not spherical. There should at least be an indirect way, otherwise it is not
science :).

~~~
Tichy
Except that "it is conscious" is not a hypothesis, because it doesn't mean
anything. That is the whole point: the whole task doesn't contain any testable
bits.

------
bpourriahi
It is not a matter of learning AI. It is a matter of understanding
consciousness. Double major in AI and Philosophy if you must. But all I mean
by that is take some drugs, keep a journal, and learn how to become an
incredible programmer.

------
paraschopra
I read this somewhere and don't necessarily agree with the argument, but I found
it interesting: if a machine becomes conscious, it will convince you that it
is conscious. You won't really have to deduce that it is conscious.

------
pjw1187
I'm going into graduate school next year and I plan on pursuing this exact
topic or something similar to it. I believe that machine consciousness can be
achieved if in fact we can analytically explain what consciousness is.

------
johnnybgoode
Even if it is possible, I'd suggest reconsidering whether going to grad school
in AI is a good way to accomplish your goals. Like mlLK says, look into the
history of AI in academia and its current state.

------
psyklic
The problem with your question is -- no one can agree on a precise definition
of 'consciousness'. If you mean: "Can we get a machine that can act just as a
human acts?", then I believe that yes we can.

------
schnalle
while i believe in machine consciousness, i don't think human-like intelligence
- as described in popular science fiction culture - is likely (though not
impossible). intelligence: yes. superhuman intelligence: yes. but human-like
intelligence, as in "thinks like a human, reasons like a human": no*. the
bodies, senses, cultures, etc. would be just too different.

so silicon-based intelligence would be very strange and incomprehensible for
us, and i doubt we'd recognize it as an intelligent being easily.

* with the possible exception of eliza

------
bufordtwain
Personally, I do not think a machine will ever be conscious. Machines will
always be as dumb as a box of rocks and will need to be told what to do. A
machine follows rules - maybe complicated rules, but rules nonetheless. I
cannot conceive of a situation where a machine that has been programmed to
listen and talk hears a person in the room pass gas and spontaneously laughs
as humans do (unless it has been programmed to do so). Sorry, I'm just not
buying it. That doesn't mean that your trying to make a computer be conscious
isn't a good use of your time. You'll learn lots and maybe you'll prove me
wrong.

~~~
CamperBob
_Machines will always be as dumb as a box of rocks and will need to be told
what to do._

Oh, right, as opposed to most people.

------
elduderino
If you define humans as machines - automated creatures based on input and
output, then yes. However there is this thing called qualia (aka subjective
experience) that I believe is very real. Machines are not capable of this. All
of the AI today is based on complex algorithms with an input output model.
There is nothing subjective going on inside.

~~~
joeyo
We don't know for sure that machines are not currently capable of experiencing
qualia. We don't know that machines cannot ever be capable of experiencing
qualia. We don't even know if animals experience qualia, and if so, which
ones. I don't even know for sure if _you_ are experiencing qualia.

~~~
ewjordan
_I don't even know for sure if you are experiencing qualia._

I'll go you one further and admit that I don't know for sure if _I_ am
experiencing qualia.

My brain certainly tells itself that I am, but how do I know it's not just
wrong?

Perhaps qualia is, in the end, nothing more than the state of asserting to
yourself that you feel qualia. In which case it's really more a question about
whether your brain is properly structured to ask that question than about
whether it really exists...

~~~
joeyo
Ah, yes, we have arrived at the problem of being fundamentally unable to be
sure of the nature of reality. We don't know if our sensors are reporting to
us the "true" nature of the world and we don't know if our brains are
reporting to us the "true" nature of our internal states.

I guess, like you, I am okay with accepting that experiencing qualia and my
brain telling me that I am experiencing qualia are functionally
indistinguishable if not equivalent states.

------
mdoar
Doubt it, but then I'm not holding my breath for a personal helicopter either.

------
frig
You're framing this question in an unhelpful way.

Better: what are the major arguments for the position that machine
consciousness is not possible? What do I think of those arguments?
Particularly taking care to distinguish:

- does the argument prove what the people advancing it think it proves?

- do I find myself agreeing with the argument?

Major lines of argument I've heard:

- the metaphysical argument: consciousness derives from having a soul somehow
linked to your brain (and thence to your body); the purported impossibility of
"machine consciousness" follows from a belief that only people have souls (of
the right type, at least)

- the limited-smarts argument: consciously building a conscious machine is
beyond the capabilities of a conscious entity (of our type of consciousness)

\- the "difference between silicon and wetware" argument: this ranges from
assumptions there's quantum magic in parts of the brain to assumptions that
the brain organization implements some other, unsimulable-by-silicon computing
architecture (perhaps super-turing)

\- the "consciousness is an illusion" argument: consciousness and intelligence
per-se have little to do with each other despite the apparent overlap one
perceives from reflection on one's own thoughts. Thus machine intelligence
seems to be possible but that says nothing about consciousness per se.

If you're serious enough about this to consider making it a life's work (or
if you really want to make a conscious AI), I would suggest taking _none_ of
the above arguments lightly, even though it's somewhat fashionable in some
circles to dismiss most of them as nebulous handwaving by anti-rational
mystics.

"Taking them seriously" doesn't mean "pack your bag and go home"; it means you
keep thinking scientifically and analytically and try to answer questions
like:

- suppose smarts are limited, but we don't yet know that for certain. What
could this intuition _mean_ (in a more formal or more precise sense)? How
could I make the intuition more precise? What would a formal proof of the
intuition look like (and what would the theorem be)? Does this inquiry seem to
be leading me toward possible theorems or nontrivial facts about the
expressive power of symbolic systems (that aren't already known, or a retread
of Gödel)? Does there still seem to be work I'd be interested in doing in this
general field if it turns out that smarts are limited?

...as even "wrong" counterarguments can do wonders for pointing you in the
direction of interesting questions.

Extra Credit: of the researchers who at least seemed to think they'd solve the
problem pretty quickly, is there a recurring pattern to be found in the
failures those researchers encountered?

Major recurring themes: people who've thought they were within reach of making
conscious machines typically assumed that the part of their own nature that
they valued most was the keystone to consciousness and assumed everything else
was either secondary or easy (and thus could be filled-in later).

Thus a Hofstadter-type -- who loves delving deep into various piles of work
and crafting new and insightful analogies -- winds up thinking that the core
capability an artificial intelligence needs is the ability to craft such
analogies; people with PhDs in mathematical fields and a more logical bent
assume that the core ability is symbolic inference, and make software that
does symbolic inference; yet others assume that rational hedonism is the core
and work on utility-maximizing decision-theoretic planners and agents; others
still love making systems with complicated interactions and seeing what pops
out and start chasing emergence.

All of that work is good work and has found applications, but the dynamic is
obvious: people who get into the field with the specific goal of making AI --
instead of, say, improving algorithms for multicamera view synthesis with
applications to industrial quality control -- tend to radically overvalue
whatever intellectual style they happen to be good at, but so far none of
those intellectual styles appears to have really scratched the surface of
consciousness in the sense you're interested in. Beware your best ideas and
favorite subjects!

------
keltecp11
What about when the brain is powered by machines? Does that count as a
computer being self-aware?

------
polos
Some things a machine will never be able to do:

- spontaneously ask itself where it comes from

- spontaneously ask itself what it will become after its own destruction

- have spontaneous thoughts

- have free will

Why do I say spontaneous? Because our thoughts aren't coming from our mind,
but from our soul (that is, from the principle of life, which is invisible by
nature).

Come on, these are all obvious things. Humans, don't believe blindly in
science; science is not a religion(!).

A machine could (in theory) more or less be similar to animals, though.

~~~
stevedekorte
But we do all those things and we are machines. Biological machines, yes, but
still machines.

~~~
polos
You are a biological machine? Really? Exclusively?

Who convinced you of that? Science?

Science is only science; it is very often wrong and has to correct itself,
sometimes decades or even centuries later.

I know that I'm not a biological machine. I know that there's a voice inside
myself that asks many many more questions than any science will ever be able
to answer.

Now, where do these questions come from? Certainly not from my brain. My brain
is not able to ask questions beyond its own capabilities.

So, let me repeat the initial question: are you a biological machine, and
nothing more?

~~~
polos
BTW, I can ask all of these questions without going into tilt, and without
having any biological malfunction.

So these are _all_ valid questions. If I were a "biological machine", someone
would already have called for a "biological" doctor...

