<shameless_plug>
"In Jülich, Katrin Amunts has begun work on a detailed atlas of the brain, which involved slicing one into 8,000 parts which were then digitized with a scanner."
The lab I work for is collaborating with them on this project. Basically those slices are taken at 10 microns, and we are working on fixing some of the problems (when you cut a brain that thin you end up with a lot of tears and deformations). Our lab is then reassembling and realigning the slices with algorithms, so you can cut through the stack in any direction and look at the resulting cross sections.
Realignment of the 8,000 slices can take weeks or more (depending on how many cores you can run it on) and is a rather complex process.
Scroll to the bottom (or better, look at the whole page; BrainBrowser is what I've been working on). You can see an image of the interface and some of the slices. I'll try to find a video of the project.
There are several orders of magnitude fewer neurons in the fly than in humans. For the fly, we are reconstructing neural processes using EM slices that range from 50 nm down to 7 nm in thickness. At that scale, we can reconstruct neural processes and actually see synapses -- just because neurons are in proximity doesn't mean they are actually connected in the biological sense. Since I'm not as familiar with human neurons: at 10 microns, can you resolve the axons, dendrites, and more importantly the synapses?
> Basically those slices are taken at 10 microns and we are
> working on fixing some of the problems (when you cut a
> brain that thin you end up with a lot of tears and
> deformations)
Can you talk about how you deal with the inherent deformations? I've spent hundreds of hours on cryostats preparing rodent brain slices for autoradiography, and getting usable ones at 15-20 microns is a painful process. Should be even worse if you need to retain morphological information...
I'm probably not the best person to explain the whole process but I can put you in contact with the person who's working on it. (My email is in my profile).
I'll try to explain what I know:
For the most major tears and breaks we've actually had to fix them ourselves (some were quite awful; for example, part of the right hemisphere on one slice broke off and was put back upside down ;p). The less major ones can be fixed by looking at the surrounding slices, which is also how the slices are realigned: it's an iterative process where you take chunks of slices and align them, then increase the chunk size, and so on. At least that's what I gather. (A software developer talking about brain imaging is a bad idea ;p)
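Roughly, the chunk-based idea could be sketched like this. This is a toy 1-D version I made up for illustration (a single rigid offset per slice, simple neighbourhood averaging), not the lab's actual pipeline:

```python
# Toy sketch: each slice has a rigid offset, and every pass pulls each
# slice toward the average of a "chunk" of its neighbours, with the
# chunk radius growing each pass. Illustrative only.

def realign(offsets, chunk_radii=(1, 2, 4)):
    """Smooth per-slice offsets toward their neighbours, pass by pass."""
    aligned = list(offsets)
    for radius in chunk_radii:
        smoothed = []
        for i in range(len(aligned)):
            lo = max(0, i - radius)
            hi = min(len(aligned), i + radius + 1)
            chunk = aligned[lo:hi]
            smoothed.append(sum(chunk) / len(chunk))
        aligned = smoothed
    return aligned

# Jittery slice offsets settle toward a consistent stack.
print(realign([0.0, 5.0, -5.0, 5.0, -5.0, 0.0]))
```

The real process works on 2-D deformations, not scalar offsets, but the iterate-and-grow-the-chunk structure is the same idea.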
EDIT: Also, as for the morphological information, I'm pretty sure some of it will be affected by the process, but I don't know how much. Our interface will provide the original and modified slices and a heat map of changes for comparison in those cases. I don't know how that can be avoided.
One thing I always think about with brain emulation/mind uploading/strong AI research is that we need a legal/ethical framework in place before we get there.
I still think it's far off, but not everyone does.
If you believe consciousness is a property of running a brain or a sufficiently advanced brain model, at what point are you allowed to stop running your model without it being the murder of a sentient being? At what point can a model claim rights?
Especially as their goal is to use it for drug tests.
Also, can you simulate a brain without a body, or are, say, hormones/blood sugar/the nervous system not easily separable?
Of more concern to me would be control of my own consciousness uploads.
I don't believe that hell exists now, but it would certainly be possible to create a technological hell if brain uploading and simulation is possible.
Torture is already horrific, but at least you can only be tortured to death once. With brain simulation, it would be possible to torture someone again and again and again.
If copying everything wholesale becomes possible before brain science fully understands every detail brain scientists want to write a thesis about, then using systematically modified copies for "knock-out" studies will seem like a very tempting methodology. It also gives researchers numerous virtual insane patients to try virtual drugs on.
Or perhaps something with some clearly virtuous intent (and nothing particularly torturous)... what about running Monte Carlo simulations using many identical instances of the same copy to find which pedagogical methods and steps best educate someone about a difficult-to-grasp subject? Even when the creation of the virtual minds is only of good intent, one can wonder about the ethics of it. What, if anything, is owed to the legion of virtual students after the classes are done? I suppose some of them could be given some other useful employment to earn their watts.
There's a light at the end of the tunnel. It's likely whatever energy source for the simulation would be disrupted by some catastrophic event. Even without sentient intervention, over tens of millions of years you've got starvation of some resource -- even if it's something like fusion or the raw resources needed to build and maintain "renewable" energy. Over hundreds of millions or billions of years you've got asteroid impacts and cosmic events, eventually ending with the heat death of the universe. Not so bad after all, just tough it out soldier.
I don't believe/think that consciousness/awareness is a property that can emerge from running a brain or a brain model.
But, even if it were: a computer simulating my brain will not be me; I will not experience what it experiences.
Suppose someone came from the future and claimed to be "me"; that is, 20 years from now I will go to the past, or something like that. Let's call the future me F and the present me P. I'm P; when P sees the color red, it's me who's seeing it. But when F sees the color red, P doesn't experience the same thing (P will experience it 20 years later, when P becomes F).
So if a clone of me experiences something; I won't experience it.
So if you upload my memories to a brain and then torture that brain, you wouldn't be torturing me; you'd be torturing a machine that happens to have my memories. In the same sense that if you clone me then torture my clone, he will experience the pain; but I won't.
If I do experience the pain of my clones, then we (me and my clones) must be telepathically communicating, or something like that, which, while philosophically not impossible, is not compatible with physicalism, which you have to accept if you think consciousness can arise from running a brain model. I say telepathy is not compatible with physicalism because telepathy implies supernatural communication between minds (where minds are implied to be supernatural entities).
The idea held by people who aren't as sure about this as you are isn't that telepathy is going on. It's that if this sort of mind-state cloning happens, you might no longer have grounds to expect, with 100% certainty, to wake up as the physically original instance of yourself after the cloning procedure.
People are made of atoms, and so far we haven't found any magical quality beyond physically stored memories that gives us reason to think a person at one point in time is the same as a person at another point in time, including ourselves. With mind-state cloning, at some point there is going to be one present person with a past person's memories being unsurprised at not having ended up as an upload, and another present person with the same past person's memories being quite surprised at ending up as an upload, despite remembering having been a physical person who expected to continue living as a physical person. I'm not quite sure what kind of definition of self would let the past person maintain a justified 100% expectation that they will end up as the non-uploaded version of themselves in the future.
It's about empathy. I can't be tortured that way, and neither can you. But a copy or a construct could be - and many of us find the thought just as distasteful as if it were a flesh-and-blood person.
If the simulation could talk to me, I'd rather it be treated decently.
This being said, I think there are ways to go around this problem. There are many ways in which my brain could be disabled so I wouldn't mind it being "tortured" or experimented upon. Consciousness is an emergent property - take away a few of its building blocks and it disappears.
Do you think it is OK for F or a brain model to experience pain? As long as it is not you who is feeling the pain, it is morally acceptable? Is that what you are saying?
Well no, of course not! (assuming a brain model does experience pain)
I'm just saying if this theoretical "virtual hell" does really happen, it will be a computer hell, not a human hell.
Although I don't actually think a computer simulating a brain model will have consciousness. However complex that simulation is, it's nothing more than electrons flowing through a circuit, and this flow has no intrinsic meaning; I have no reason to believe consciousness can arise from such a thing.
He's only saying that a copy of a brain would create a separate entity, and that separate entities won't share experiences. Like how identical twins have different experiences. There were no claims about morality of using brain models for testing.
"Of more concern to me would be control of my own consciousness uploads."
I don't bring it up very often, but this is actually one potential positive case for DRM. I want control over my brain state. If Hell does not exist, Man may very well create it for himself. I've come up with some nasty tricks you can play with brain states that I don't even want to record to electrons, though the odds of nobody else ever thinking of them are zero.
While that is often an interesting question, it isn't very interesting in this case, because for all J such that J is a Jerf, J demands control over J's brain state.
Also not really that interesting; I am not saying that J's only demands are control over his brain state, only that J has that demand.
Actually I've spent a lot of time contemplating this sort of thing. One of the things you must be very careful not to do is to give in to the temptation to continuously shift frames every time you arrive at anything resembling an answer to a question. The invitation to "ah, but what if...?" everything may feel all profound & stuff, but it is a path to persistent incomprehension, not understanding, because you can "what if...?" your way to anything, and therefore it has no actual information content.
I think you might be missing the point of whether it be ethical, or right, or humane to inflict any form of torturous experience on the conscious copy of a person.
Regardless of the hierarchy of the consciousnesses - if they can still feel pain subjectively, where does the line get drawn as to what is right?
If they can independently think/be sentient then they have value to the universe (potentially) and it would be wrong to murder them.
"I've come up with some nasty tricks you can play with brain states that I don't even want to record to electrons, though the odds of nobody else ever thinking of them are zero."
Even before we get to minds on chips, there are nasty mind tricks coming ever closer to our reach. Imagine how much more successful an "investigator" at Guantanamo Bay could be during "interviews" if they (unbeknownst to the person being interviewed) had a "pleasure center" or "warm friendly feelings" remote control in their pocket. That just requires advanced brain prosthetics and some surreptitious surgery. An advanced form of putting a euphoric drug in someone's iv when they see you, so they come to like you a lot over time.
(in fact, I'm guessing that oxytocin is what this guy is spraying around so that the lions he's playing with don't decide to eat him: http://www.youtube.com/watch?v=-kjWBgA81LM see minute 0:57)
That technique doesn't even require drugs, and has been used for millennia. In fact, drugs can tip potential "interviewees" off that something is not right.
I thought I arrived at this idea independently, and then was surprised to see that it featured in the book.
OTOH, considering I've read every Iain M. Banks book, it's a bit silly to talk about arriving at a conclusion independently (rather than having followed the same path).
Edit: this is a rather splendid wee story where "weakly godlike agencies" are treated like any other strategic cold-war weapon, leading to an even scarier version of the Cold War.
You could do that to someone already with the aid of drugs. Stanislaw Lem's The Futurological Congress is an absurdist novel which explores these ideas.
Reminded me of a short novella called Deus X by Norman Spinrad:
Norman Spinrad explores the depths of what it means to be human; more accurately, he delves into the nature of the soul in our increasingly computerized technological age. Featuring a poignant new Afterword by Spinrad, this reprinting of one of Spinrad's most cherished works is more timely than ever before. Can human consciousness exist within the framework of an electronic "brain" and still maintain its humanity? In DEUS X, a dying priest's consciousness is uploaded into the most advanced computer of the day - and what ensues is a thought-provoking, entertaining and overly intriguing clash between the various characters surrounding the experiment, a female Pope and a computer guru who'd rather be sailing and smoking pot, for example.
> I don't believe that hell exists now, but it would certainly be possible to create a technological hell if brain uploading and simulation is possible.
Bostrom's simulation argument says we're almost certainly living in a computer simulation. This opens the door right now to possibilities like "hell." Life after death (with memories, etc.) becomes possible... all kinds of things. Scary, really.
This is a great question. I was put under for a surgery at one point. I was effectively "dead" from my perspective during the duration of the surgery, in that no time seemed to have passed between when I was put under and when I came to. My consciousness was "off" in a way that it is not when merely sleeping. It seems like this is a parallel for unplugging an AI, but then you have the problem of the AI losing whatever is in RAM.
The non-humanness of an AI might slow the realization that creating an AI should probably embody the same responsibilities as having a child.
IMHO, relevant reading would include The Singularity Is Near by Ray Kurzweil and Altered Carbon by Richard K. Morgan.
I bet everyone here knows the former book. Kurzweil's exploration of the ramifications of thinking machines is fascinating, but his response to the question of ethics can best be summed up as between "meh" and "we'll cross that bridge when we get to it."
The other book takes place in a future where consciousness can be digitized, and mankind has already surpassed the threshold of computing power represented by the ability to simulate/host consciousness. The fictional treatment of this technology and its impact on society -- especially on crime and law enforcement -- is fascinating.
I was about to mention Altered Carbon, glad I'm not the only one who has read it. He did an amazing job of exploring many of the far-reaching impacts of being able to transplant consciousnesses.
To your examples of crime and law enforcement I'd like to add religion and drug use as well.
(minor spoiler alert)
All that's left of Christianity is basically a small fanatic cult, and drug use is far more casual and widespread.
If your body is almost disposable and death isn't necessarily inevitable, then I guess it makes sense things like this follow.
Although not nearly as fast-paced and cinematic, the kinda-badly-written Permutation City by Greg Egan does a much more thorough job of exploring uploading and simulation, imho.
Morgan is a wee bit oversimplifying certain aspects, in particular the tech is almost too nicely packaged and very "physical". Which is certainly a possibility, but Permutation City crosses the speculation with the distributed computing trends we are currently experiencing.
Thomas Metzinger addresses this exact question in his book _The Ego Tunnel_ ([1] a great non-technical overview of his work in Philosophy of Mind/Neuroscience), and he comes down strongly on the side that any artificial generation of "consciousness" would be reckless and would likely increase the amount of suffering in the world.
Happy to share. Metzinger takes a hard line on strong materialism and against the existence of the self. For a more general overview of current strains of scientific thought on consciousness, I'd recommend _Conversations on Consciousness_ [1], which interviews leading researchers representative of the varied approaches to the field.
This is great science, and I'm sure we will learn a lot about brains, but there is no way a simulated brain will have any kind of consciousness without a simulated body as well. Without a body and an environment to interact with, the brain is like a computer program with no data to run on.
Experiments have shown that if you yoke a paralyzed cat to another cat and let it experience a complete array of visual inputs, the yoked cat will still be completely blind. Interactive embodied experience is the foundation of cognition and consciousness. A brain in a vat is as empty as a computer without i/o.
It would be interesting to have a full-scale simulation to learn about how activation propagates, how different systems interact, yada yada yada. But until you give it a body there won't be any ethical issues.
Processes in a computer can create input/output between each other. If the brain is just another program, why couldn't some other program running inside the same computer send it inputs?
But the brain exists in a certain state at any given time, and this state theoretically can be uploaded (from some individual person) to a computer simulating a brain so that we don't always have to "grow" the computer brain.
Your brain wouldn't be worth much without your body. You'd be surprised how much your body does. Think of typing for example. You couldn't do it without the exact nerve/muscle pathways in your hands. Hook up robot hands (even crudely modelled after your existing nervous/muscle/bone system in your current hands) and you'd basically have to relearn to do anything. You'd be a baby again.
I think you'd be surprised how much this applies to. Speech, even things like counting we rely on our bodies for.
That's the opposite of my understanding; Sandberg and Bostrom discuss this, and at least as far as compute resources go, simulating the brain is by far the harder problem.
That may be true of the body, although the nervous system in our body is quite large, not to mention the spinal cord. But it's certainly not true of the developmental environment. For one thing, simulating that would require simulating thousands of other human brains.
> Also, can you simulate a brain without a body, or are, say, hormones/blood sugar/the nervous system not easily separable?
From my basic understanding of the nervous system, to simulate anything approximating normal function, it would be necessary to emulate the body to some degree. There's just too much interdependence with chemical reactions taking place outside the brain for a model to be of much use in isolation.
That is important, but extremely challenging to execute for two reasons.
1. Historically, technology has outpaced the legal framework. Look at copyright law, software patents, and internet commerce for some examples.
2. It is next to impossible to predict the impact that "Strong AI" or a "Singularity" would have on society. Science fiction literature is filled with thousands of different scenarios. Do we expect Congress or another legislative body to create a framework based on each potential manifestation of strong AI, on the 0.01% chance of it occurring in the next 10 years?
Food for thought - there's a chance that the first implementations of Strong AI occur as a result of a public and government-sponsored research program. There's also a chance that they will come about by a small team of dedicated researchers who will use the technology to their (or its) own ends, legalities and ethics be damned.
Yes, I think my current thinking is: this should be considered.
But I'm also cynical that lawmakers could get it right, or that the laws would be followed.
Strong AI would presumably have military applications too, or could allow rogue nations to outcompete economically if they had 'friendly' strong AI.
Also, as long as most of the population / electable people are religious, the idea probably cannot be debated (as many religious people would say consciousness cannot be simulated).
"Strong AI would presumably have military applications too, or could allow rogue nations to outcompete economically if they had 'friendly' strong AI."
If you wish to explore this further, you may like to read the books behind the Halo video games, they're surprisingly good. For example, one of the books has an AI character, a cowboy, who controls the agricultural systems on a food-producing planet called Harvest.
Of course, there's also Cortana, one of the main characters from the Halo universe:
> Also, can you simulate a brain without a body, or are, say, hormones/blood sugar/the nervous system not easily separable?
Even if those systems are integral to brain function, I'd assume that they are easier to simulate than the brain itself. Is it important to build a "brain simulation" instead of a "full body simulation"?
I agree you could just go on to a full body simulation - I guess you have to somehow decide what level of accuracy is needed for the key features. You're not going to be able to do chemistry-level simulations of your whole body, so you'll have to make higher-level simulations of different parts (like a function that describes insulin release as a response to blood sugar).
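That kind of higher-level stand-in could look something like this toy sketch. The baseline and slope are invented for illustration, not physiological values:

```python
# Hypothetical high-level stand-in for one subsystem: insulin output as a
# simple function of blood glucose, instead of simulating the chemistry.
# The constants below are made up for illustration.

BASELINE_GLUCOSE = 5.0  # mmol/L, assumed set point
SENSITIVITY = 2.0       # assumed output units per mmol/L above baseline

def insulin_response(glucose):
    """Toy model: no release below baseline, linear release above it."""
    return max(0.0, SENSITIVITY * (glucose - BASELINE_GLUCOSE))

print(insulin_response(7.0))  # 4.0
```

The point isn't the numbers; it's that a whole organ can be collapsed into one cheap function when you only need its effect on the brain, not its internals.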
You raise a great point. Definitely agree on the fact that legal / ethical frameworks are required.
However, it might be very hard to do without knowing what the emulated "being" looks like. It's hard to imagine placing legalities and ethics on something whose powers (and possibly limitations) are absolutely unknown. Consequently, the law might be misplaced, restrictive, or downright abusive depending on what it turns out to be.
Having said that, I agree with your broad point - having that advanced technology without the social aspects to deal with it might be a nightmare!
These questions are important, but I doubt it's best to define the laws before developing the technology, if the risk is on the order of killing a person per upload.
A small room several paces across, a green floor, blue walls and ceiling. A dim bulb overhead just out of reach. Beyond the walls, nothing; void without even a large expanse and stars to ponder. I've had a couple of lucid dreams end in that room, with the walls seeming to softly vibrate, due, I suppose, to the instabilities of my brain's simulation of the geometry. Deep despair threatened when I contemplated being stuck there, but I always had the hope that I could wake up and get back to this nice stable world of "reality" that we share.
This is not about making a machine that thinks, but rather about making a useful large-scale model of neurons so we can test how they respond at a very large network scale. Robots will not rise out of it. Think of it like an expensive telescope; do we need an ethical framework for that?
Assuming you're a materialist, a fully functional running model of a brain at the neuronal level would necessarily give rise to consciousness, and should be treated with the same ethical rules as biologically generated consciousness.
It will still be a machine; it won't be able to kill you unless you give it a gun. Our ethical rules generally show some respect to animals, because we can empathize with cruelty against them, but even that is relative (we're not so sorry when we eat them). I see no reason why we should preemptively allocate ethical substance to a machine that thinks. Cruel as it may sound, if a robot is mean to me, I will unplug it.
All this talk about strong AI sounds more like science fiction fantasy. We need to realize that this simulation, even 10 years later will probably be extremely slow, taking minutes or hours to simulate 1 msec of brain activity (it would actually be interesting to have an estimate of the computing power that will be required).
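As a rough sense of scale, here is one back-of-envelope estimate; every figure in it is an assumption (neuron and synapse counts, update rate, operations per update, machine speed), so treat the result as an order of magnitude at best:

```python
# Back-of-envelope compute estimate for a naive synapse-level simulation.
neurons = 1e11                 # ~100 billion neurons (assumed)
synapses_per_neuron = 1e4      # ~10,000 synapses each (assumed)
updates_per_second = 1e3       # 1 kHz update rate (assumed)
flops_per_update = 10          # ops per synapse update (assumed)

flops_realtime = (neurons * synapses_per_neuron
                  * updates_per_second * flops_per_update)
# 1e19 FLOPS would be needed to keep up with real time.

machine_flops = 1e16           # assume a 10-petaFLOPS supercomputer
slowdown = flops_realtime / machine_flops
print(slowdown)  # 1000.0 -- i.e. ~17 minutes of wall clock per simulated second
```

Change any assumption by an order of magnitude and the slowdown moves with it, which is exactly why such estimates vary so wildly.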
> We need to realize that this simulation, even 10 years later will probably be extremely slow, taking minutes or hours to simulate 1 msec of brain activity
This is just a difference of degree.
How stupid would a real-life person have to be before you could put a bullet through their brain, effectively unplugging them?
How do you know that you are real, and not actually a simulation by some kid in another, very advanced world?
Would you like to be tortured by this kid, given the lack of legal entities in that world that protect your consciousness?
If it's a simulation, it's also the only representation of the world I can have and act upon. So I choose to ignore that possibility as inconsequential to existence and take the empirical stance of accepting the world because my senses tell me so. The alternative would be to wait forever for the simulation to end? The great thing about the brain as a machine is that it allows you to doubt everything, but that doesn't mean all doubts are well founded. Also, why would the advanced alien get any satisfaction from torturing me, since she knows I will not know I'm being tortured?
The law protects humans (our species), probably because it's an evolutionarily beneficial strategy. More specifically, it protects consciousness that arises in a central nervous system (hence the abortion debate). There is no reason to assume that abstract consciousness in itself should be protected, unless it benefits the human race. Suppose the software "Consciousness v1.0" could run on your PC: would you leave your PC on forever? Run multiple copies? Stop the people who try to update it?
Is a model of the human brain, in which the same interactions between neurons occur as in a real brain, conscious? There currently doesn't seem to be a definitive resolution to that question, and there probably never will be. However, I would argue that the difference between a "useful large scale model of neurons" (where "large scale" means an entire human brain) and "a machine that thinks" (or contains processes which think) is almost nonexistent, unless you subscribe to dualist philosophies.
I'm an eliminativist myself, so I see no duality. It's time to demystify "consciousness". It's a word used to cover a whole range of phenomena that arise in our brain.
Even a chat-bot could be designed to plead for its "life".
Perhaps a new version of the Turing test is in order. Two computer programs, each pleading for their continued existence. The human judge/executioner is able to interact with both and has two kill switches, one for each. If one of them isn't turned off in 20 minutes, both will be turned off.
Before starting to evaluate claims that this model will replicate any human behavior, I think it's important to consider the case of C. elegans. It always has exactly 302 neurons, and we know exactly how they connect to one another (we know its connectome), yet the furthest models have gotten in replicating its behavior is simplified, feedforward models of locomotion.
In light of this, it is very hard to believe that we will be able to reproduce the behavior of 100 billion neurons which are constantly changing.
This is not to say that Blue Brain is not an immensely useful tool for developing knowledge about the brain: it will be an excellent way to study the implications of the vast amounts of data we have collected about the brain. However, to claim that the model will behave like a human brain seems like a stretch.
I'm not familiar with your models, but I'd be delighted to learn more. Could you post a link?
EDIT: I was going off this quote from an article by the well-known computational neuroscientist Christof Koch:
> "Consider this sobering lesson: the roundworm Caenorhabditis elegans is a tiny creature whose brain has 302 nerve cells. Back in 1986, scientists used electron microscopy to painstakingly map its roughly 6000 chemical synapses and its complete wiring diagram. Yet more than two decades later, there is still no working model of how this minimal nervous system functions.
> Now scale that up to a human brain with its 100 billion or so neurons and a couple hundred trillion synapses. Tracing all those synapses one by one is close to impossible, and it is not even clear whether it would be particularly useful, because the brain is astoundingly plastic, and the connection strengths of synapses are in constant flux. Simulating such a gigantic neural network model in the hope of seeing consciousness emerge, with millions of parameters whose values are only vaguely known, will not happen in the foreseeable future."
I'm not an expert, so I could be wrong, but over the last 10 years I've seen many papers on various details and behaviors of the C. elegans nervous system. I agree we still have a lot to learn from it, but we certainly seem to know enough to simulate it accurately and get the same kinds of behaviors that we see in the living worm. We've even been able to simulate much larger and more complex systems, like the fruit fly eye and direction-tracking system. I really believe that all we need is the connectome and the rules it uses to organize and change itself. We don't need to know EVERYTHING about it.
The sense I get of it is that they can custom build neural network models of various sub-circuits to produce certain behaviors, but no one has been able to take the connectome and build a 302 neuron model that replicates any behavior. Would you agree?
I've searched for quite a long time for something like this and talked to a couple faculty members to no avail, so if you could turn up anything I would be very happy to read about it.
I agree with you on your last point: we can replicate behavior without going to a very low level of simulation. However, the case of C. elegans seems like very strong evidence that the connectome and current knowledge of update rules are not enough to create a model of an organism that replicates behavior.
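To make concrete what "connectome plus update rules" means in this thread, here's a toy: a made-up 3-neuron wiring matrix stepped with a tanh rate rule. Nothing here resembles a biophysically faithful C. elegans model; it only shows the shape of the approach being debated:

```python
import math

# connectome[i][j] = assumed synaptic weight from neuron j onto neuron i.
# Weights and topology are invented for illustration.
connectome = [
    [0.0, 0.8, 0.0],
    [0.5, 0.0, -0.6],
    [0.0, 0.9, 0.0],
]

def step(rates):
    """One update: each neuron's new rate is a squashed weighted input sum."""
    return [math.tanh(sum(w * r for w, r in zip(row, rates)))
            for row in connectome]

rates = [1.0, 0.0, 0.0]
for _ in range(10):
    rates = step(rates)
print(rates)
```

The disagreement above is precisely over whether any choice of weights and update rule at this level of abstraction can reproduce real behavior, or whether the missing biophysics is essential.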
There are dozens of ethical concerns involved in creating an artificial human consciousness. A completely selfish concern of mine is this: once the hard work of emulating a human brain is done, what's to stop the emulation from running at 10 or 100 (or many, many more) times as fast? If this is a human brain, replete with arrogance, in what way wouldn't it be right to feel superior to me? If you had the option of hiring normal-speed brains vs. ultra-high-speed brains, which would you hire? How long would it take ultra-high-speed brains to completely take over?
If you were running at 100x clock speed, how much interest would you have in working for someone running in real-time? You'd receive project requirements, comment on them and ask clarifying questions, and send them off... and receive a response in four months, subjective time.
Then again, I suppose that was the state of the world until sometime in the 1800's, and people managed to get things done.
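The "four months, subjective time" arithmetic is simple enough to write down (the turnaround figure is an assumed example):

```python
# Assumed numbers for illustration only.
speedup = 100                # emulated brain runs 100x real time
real_turnaround_days = 1.2   # assumed wall-clock wait for a reply

subjective_days = real_turnaround_days * speedup
print(round(subjective_days))  # 120 -- about four months of subjective waiting
```

So even a very prompt real-time collaborator feels glacial from inside the fast clock.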
I guess the goal would be to choose work where you don't rely on people at slower clock speeds (say working on deep algorithm stuff - you can get low latency connection to a library and research stuff 100x faster).
Or you could change your clock speed / put your consciousness on hold until slower-time triggers mean you can do something productive, and then ramp up so that your end of things happens very quickly.
We already do things 10x to 100x as fast, but in a parallel way. Look at reddit's "new" page after a popular post reaches the top. You'll have dozens of similar submissions riffing on the popular one within an hour (breadth of 100 people submitting in parallel, not depth of one person submitting serially).
Will this project "emulate" a human brain down to the subatomic level? Down to even the atomic level?
Both are probably far too ambitious for this project. I'd even be quite surprised if they managed to accurately simulate a brain down to a molecular level.
What this sounds like is yet another attempt at a grossly simplified simulation of a subset of what is actually going on in the brain.
It remains to be seen how useful this is, never mind whether this will bring us any closer to AI (how would you even measure the "intelligence" of a simulated brain without any sensory organs or an ability to communicate?)
Will this project "emulate" a human brain down to the subatomic level? Down to even the atomic level?
The short answer is that each part of the simulation, from ion channels and synapses to cell bodies to network topologies and cell distributions, is done at the lowest level that people in the laboratories can effectively isolate and study. From that they can derive mathematical models of each part, and integrate those models to simulate them interacting over time and space.
Is it a simplified model of the brain? Yes, but it's the most detailed model that science can produce (or at least it aims to be).
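To make "mathematical models of each part" concrete: even the crudest single-compartment neuron model reduces the cell membrane to an equivalent electrical circuit and integrates it over time. Here is a minimal leaky integrate-and-fire sketch, far simpler than the conductance-based models Blue Brain actually uses, with all parameter values chosen purely for illustration:

```python
def simulate_lif(inputs, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0):
    """Leaky integrate-and-fire: dv/dt = (v_rest - v + I) / tau.
    Units are illustrative (ms, mV); returns the membrane trace
    and the spike times."""
    v = v_rest
    trace, spikes = [], []
    for step, i_in in enumerate(inputs):
        v += dt * (v_rest - v + i_in) / tau
        if v >= v_thresh:            # threshold crossed: spike and reset
            spikes.append(step * dt)
            v = v_reset
        trace.append(v)
    return trace, spikes

# A constant 20 mV drive pushes the steady state above threshold,
# so the model neuron fires periodically over this 100 ms window.
trace, spikes = simulate_lif([20.0] * 1000)
print(len(spikes))
```

Real compartmental simulations chain thousands of such equations per neuron (one per compartment, plus ion-channel kinetics), which is where the supercomputer budgets go.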
how would you even measure the "intelligence" of a simulated brain without any sensory organs or an ability to communicate?
What makes you think that's difficult to simulate? It's been shown that primate brains are capable of learning to interpret novel inputs, such as a grid of electrodes 'painting' an image onto cortex. There was also a study where they implanted electrodes in a monkey's brain where the rate of activation of the cells surrounding the electrodes moved a cursor on a computer screen. The monkey learned to move the cursor.
This shows that simulated inputs and outputs wouldn't have to perfectly correspond with reality in order for the brain to be able to process it.
Does it matter? As far as I know, we don't know yet what level we need to simulate at to simulate consciousness. If it's at the molecular level and we achieve that, who cares about the atomic level.
It likely relies on subatomic phenomena. The stochasticity of microtubules, which certainly store state information (as do most components in the entire cell), derives from quantum mechanical phenomena. In any case, unraveling the molecular-level details of the cell is the most astoundingly complex problem humanity has yet faced.
We can't even model bacterial state (regulation, etc.) right now. I strongly feel that we won't be solving this problem within our lifetimes. Probably not even in a hundred years.
Maybe, but do we know if this stochasticity is fundamental to our consciousness? It may work just as well if we use random values from a lava lamp. (I don't know, I'm just speculating).
I think we will solve it in the next 20 years or so, certainly while I'm alive (I'm 30). Like I mentioned in another post, it's a great time to be alive.
Main problem appears to be construct validity. If they're building a computationally vast model based on impoverished data (i.e., a pre-selected subset of current morphological/functional knowledge), nobody will be able to determine whether its electrical behavior tells you anything at all about a real brain.
More to the point: if they want to get this thing running within ten years, they'll have to simplify at each step without knowing the potential repercussions of each simplification. I thus find it disingenuous to call their project a "model of the brain." It's an implementation of a very specific reduction of the neurobiological state of the art. Sure, that can be useful, but only within strictly confined limits. You won't be able to do drug tests on this thing -- that much is certain.
From what I've read about Blue Brain it appears that validation is a central theme of their work. They are actively validating the small piece that they have already built (the neocortical column).
Not exactly the same thing, but those interested in this should check out the Whole Brain Emulation Roadmap by Nick Bostrom and Anders Sandberg. Very advanced and I didn't understand most of it, but it was still fascinating:
Also - if you're interested in background on systems biology - there's a broad overview of UCL's biological modelling research group here (http://discovery.ucl.ac.uk/10844/1/10844.pdf [pdf]) - they've got a project modelling the liver / blood-sugar system.
Apparently the canonical example that has worked pretty well is a computer model of the heart but I think it's a simpler organ in that it's mostly electro-mechanical, without such a complicated chemical system (http://www.sciencemag.org/content/295/5560/1678.abstract - behind Science paywall)
There is actually quite a ways to go before we have high fidelity macro models of the heart in action. (The numerical research I do is closely related). 3D fluid simulations are expensive. Hopefully the quasi-discreteness of the brain reduces the computational requirements for a high fidelity simulation.
Ha, that's funny, that model was the one UCL referenced in their kinda 'This is possible' motivation. I find it all very interesting, but gave up after a year as I felt like a lot of the models were going to be a bit flaky for the next decade and that was a bit of a culture shock (I was a physics student before).
Is there any reason why they'll end up with a human brain instead of a lizard's? And what about teaching it? A human brain requires a human body to learn.
This is cargo-cult science. They'll be lucky to end up with anything useful, except 20 years' government funding! 8-) This project could be almost as good a gravy train for researchers as Cyc (20 years without significant results and still funded).
> Is there any reason why they'll end up with a human
> brain instead of a lizard's?
Don't accuse people of cargo-cult science without a basic understanding of brain anatomy. There are vast differences in physiology and anatomy between non-primate mammalian and primate brains, let alone the CNS of a lizard. Also, nobody's growing a brain here -- they're modeling a mature forebrain.
It would be neat, and perhaps useful as a diagnostic tool (for troubleshooting the real thing) but I'm not sure how emulating the brain has any practical application (they are fairly common and inexpensive to acquire through natural means).
I'm not saying I wouldn't love to burn billions of OPM building one, but then again practicality is rarely a requirement for me personally.
It's still extremely difficult to observe network activity in vivo. Even the most sophisticated methods can only observe tiny fractions and only a single property of neurons at a time. A large scale in silico network makes it trivial to tinker with properties of cells and neurotransmitters and observe how the activity of the network changes.
Interesting discussion here, but I just want to highlight one point: consciousness happens not because of the brain alone, but because of its interaction with the external world.
So designing a brain won't create consciousness. If constructing a brain model needs $1.61bn, then constructing the virtual world might need a dozen times that, and maybe more. After that, uploading a brain copy to the brain computer and a world copy to the world computer would create a real person like you.
No one can actually prove his own existence. We are a bunch of mixed senses. And since this could be a simulation, we can't prove that we are real and that we exist. Certainly, you can know who is behind you (and probably there is someone); but only by committing suicide.
But an interesting question pops into my mind: does our world allow more than one self-aware existence? That is, if I upload my mind to a computer which is simulating my same world, wouldn't it be me?
Does the mythical man month principle extend to money? That is to say does throwing twice as much money at a problem make it take half as long? Not being critical. Just seems like budgeting a huge hunk of money to make something happen more quickly might not be the best decision.
If it's estimated to cost $1.61 billion to achieve emulation by 2024, we should spend $160 billion and get it done by 2016. I'm skeptical, but a technological singularity is not to be underestimated.
This is good stuff. I'm surprised he didn't mention Roger Penrose and Stuart Hameroff's work on quantum consciousness. If Penrose and Hameroff are right (that a lot, if not most, of the computation in the brain takes place in the microtubules of each neuron, taking the complexity of the brain into unfathomed territory), then all these models of the brain based on the network of neuron connections will be basically irrelevant.
As far as I know they use adult brains as models. A functional brain has "meaningful synapses", i.e. neuronal assemblies that represent a given concept, shape, etc.
Getting these out of the box is impossible AFAIK, because of the sheer number of neurons and connections to relate to one another. Especially since the neural code is far from being deciphered.
You'd have to replicate the development process, probably starting with a model of fetal brain.
Is there anyone familiar with the research around? Are these concerns baseless? If not, how are they addressed?
Let's be clear on at least one thing: they're building a simulation of the physical brain hardware. At least according to the article, there is no goal to simulate human consciousness.
If we simulate a physical brain, and then it talks to us and says, "Consciousness? Yes, I understand what you mean, I feel that too," is there a distinction to be clear about?
If you simulate a physical brain, but give it no input, what is the output? Consciousness? Look, fundamentally we've had "human brain hardware" for years now. A Turing machine is a Turing machine is a Turing machine. If you know the mathematical formula for emergent consciousness, there are a bunch of people who would be excited to talk to you.
Why assume it gets no input? (Why build it if not to interact with it?)
A grown, working brain has been shaped by its input – changing the connections, the chemical balances, and other things we don't yet understand (but someday will). So when talking about a 'simulated brain', I think an embedded history of meaningful input, and continued interaction, is implied.
(Also, creating such a brain and then depriving it input might be torture, as with extended sensory deprivation in natural humans.)
You seem to be unduly assuming emergent behavior. First, we do not know how to completely mirror a human neural network. Second, even if we did we don't know how the initial input affects emergent intelligence.
I'm not saying it's impossible, but I am saying that you're taking a rather unscientific leap of faith.
I take issue with your assumption that there is some 'clear' distinction between the goal of 'building a simulation of the physical brain hardware' and the 'goal to simulate human consciousness'. A distinction, if any, between those goals is anything but clear.
Further, the people most interested in 'physical simulation' also tend to be 'materialists' who think a suitably precise simulation can reproduce everything interesting about a human mind, including 'consciousness'. Without any personal 'leap of faith' as to whether that's true or false, this is an interesting and deeply murky question.
Instincts, growing up, human experience. It's the software that makes us (in this case, some preconfigured parameters and training data); the hardware alone is going to be dumb.
What do "software" and "hardware" even mean in the context of the brain? All of what you call "software" is just the result of combinations of neuron patterns and chemicals, all of which could be included in a simulation.
> A large portion of our instincts are hard-wired.
Not accurate, or at least misleading. There's a lot of plasticity in the brain even at the very basic levels. Raising a kitten in an atypical environment (e.g., without edges or contrast), for instance, yields profound functional differences from retinal ganglion cells over V1/V2 up to the dorsal and ventral streams of the visual system.
We are what we are because our brains are hard-wired to adapt to typical environments. Full isolation would cause substantial aberrations -- that's exactly what you observe in real-world cases.
Your consciousness is your brain hardware. The software exists in the neuronal connections and weights (both of which change over the course of your lifetime), and if these are taken from a real brain (presumably the brain that's been sliced and scanned) then the resulting simulation will be conscious. Specifically the consciousness of the dead person whose brain was used as the model for the simulation.
That is unless you are not a materialist, i.e., you believe that the mind is the product of some non-physical plane that somehow interacts with the physical world through the brain...
> That is unless you are not a materialist, i.e., you believe that the mind is the product of some non-physical plane that somehow interacts with the physical world through the brain...
Also note, you don't have to be a dualist to think that consciousness is not a property of a brain model. You could be a materialist and think that consciousness can be given to a machine, but still think it's separate from the Turing-machine model of computation. That is, you could hold the view that consciousness is not a property that can emerge from a computational process, but rather some other, not-yet-known kind of process.
You seem to be implying that if a philosophical position has a name, it is not stupid. I'm sorry to be the one to break this to you but that's far from being so.
"That is unless you are not a materialist, i.e., you believe that the mind is the product of some non-physical plane that somehow interacts with the physical world through the brain..."
Are there any good arguments in favor of materialism? It's always struck me as a completely ridiculous position.
To me, the best argument would be that everything else that we have looked at in the universe appears to be completely materialistic. Do you know some reason why the brain might be different?
This subject is just too philosophical for a HN comment, but I think there are at least a few problems with this argument.
First of all, almost by definition, we can't really look at anything that's not materialistic; it's a limitation of our methods of observation.
For instance, there's no way to prove or disprove the existence of awareness in something or someone. If a robot walks like humans, looks like humans, is it a human? Does it experience anything? Do insects have consciousness? Do animals have it? Does anyone besides me have it? Who says even I have it? How can we test for it? Can we even define it? Who says that my computer mouse doesn't have it?
Second, I believe that we only know a tiny fraction of what's out there, or of what's there to know, so it's hard to say that there's nothing beyond matter. And who even says that everything we looked at is materialistic? Is time materialistic? Can it be made from something else?
Third, there's nothing materialistic about awareness. It's not as though we can say "it's like a computer program, just more complicated". We can't say that emotions are like variables that control the behavior of a program.
Do you know any reason why awareness might be materialistic?
Awareness is materialistic because it affects the physical world. How does it affect the physical world? You thought about it, those thoughts were translated into nervous system impulses which caused you to press keys on your computer, which was translated to electrical impulses, etc.
So far, we've found exactly zero processes which we can observe (i.e., which affect the physical world) that can't be explained by physics.
Yes, there are some processes which we can observe which we can't YET explain very well. But when I am ignorant about a phenomenon, that is a fact about my state of mind, not about the phenomenon.
It seems likely that consciousness / awareness is just another physical phenomenon that we haven't explained yet. Here are some other examples of "mysterious phenomena" which it took a while to explain: fire, why the sun rises and sets, the discrepancy of Mercury's orbit (explained by relativity), discrete energy levels (explained by quantum physics).
Now, what makes you think physics will never explain consciousness?
I think I answered your first and third. To answer your second question, both the theory of relativity and quantum physics dissolve time (it's just a variable), as I understand them.
You're talking about different things. Awareness, Will, machine-like-decision-making.
Decision-making can be completely robotic. Will, or free will, and whether or not it's compatible with materialism, is another controversial problem. Awareness is what you feel subjectively. I can build a robot that senses colors, and when it sees the color red, it screams "OMG! RED!!!". This robot, however, has no awareness whatsoever. What you described is exactly the kind of thing that might happen with robots, but it has no relationship to the awareness/consciousness aspect of the problem.
> Here are some other examples of "mysterious phenomena" which it took a while to explain (...)
This is a weak argument, and really it's irrelevant to the mind-body problem. All things you described are physical states to begin with.
A closer approximation of the mind-body problem might be someone saying something like this: fire is not the same sort of thing that water is. I can hold water in my hand, but can I hold fire in my hand? I can store water for later use; can I store fire for later use? Now this sort of argument could be right or wrong, but it seems like a plausible thing to say. It's not claiming fire is some kind of supernatural miracle.
I think we both agree that we don't know what awareness is; the difference is I'm saying I know what it's not: and I know it will not arise from a computer simulating a brain.
Overall I think consciousness is an interesting topic that's worth reading about. I don't mind someone being a materialist, but it doesn't seem like you appreciate the difficulty of the problem.
It's theoretically possible that one day we'll know more about awareness, and I don't know what that knowledge would be like, but I'm pretty sure it's not something that arises from running some sort of a computer program. And I can also say with about 85% certainty that it's not reducible to any of the physics we know about today.
It's also theoretically possible that we'll never know what awareness is. It might seem outrageous and somewhat a "forbidden" idea to speak, but it's a possibility. Either due to time constraints (the universe dying before we get there) or conceptual constraints (as in, maybe our minds are just not smart enough to figure it out). Unlike most other things in nature, I don't know of anyone who has even come up with a hypothesis of what awareness might be. For instance, atoms were theoretically discussed long before science proved they existed, and many different models were advanced to explain and account for planetary motions (many of them were wrong, but you get the idea). But awareness doesn't even have any model that might account for it; not even a wrong one. That's a strong indication that it might be something outside our comprehension; not unlike the nature of space and time.
No, consciousness is the sum of the brain hardware, brain software (which is malleable and not hardwired), and input. If specific input is required to produce emergent conscious behavior, we may not have that result.
Non-materialism isn't as far-fetched as you make it out to be. In fact, no materialist account has managed to give a plausible explanation of how subjectivity and qualia emerge from physical substrates. Not so far, at least.
The short version, without references or argumentation:
Science can't prove that there aren't any supernatural requirements for a sentient mind.
However, Science has proven that most of our intuitions about our own cognition are wrong.
Since intuition is the only evidence we have that subjectivity and qualia can arise from mysterious nonphysical processes but not computational, physical processes, Occam's Razor demands that we accept the explanation which accords with proven laws of physics.
That will be an interesting outcome. What if it's not?
Can you imagine how weird it would be if we backed into proving a universal consciousness existed in a different dimension and an exactly modelled/simulated brain didn't get to participate because it wasn't given a 'soul' by that consciousness?
Or the flip side where our brain model does develop sentience as an emergent property and the spiritualists have to deal with the possibility that they are just animated meat.
Or the creepy side where you take an imprint of your current brain state, impart it on the model and suddenly the model thinks it's you?
Science fiction has explored these concepts for a while. I know that personally I'd like there to be something "special" about consciousness and/or sentience but I am rational enough to know that this isn't backed up by solid reasoning, just wishful thinking.
In a highly networked, highly automated, world the idea that consciousness could simply 'emerge' from a sufficiently constructed system is deeply disturbing.
While much of what the film "Eagle Eye" speculated on with a sentient computer connected to everything seemed pretty bogus, the ability of a disgruntled AI to disrupt and render unusable a wide range of systems simultaneously is not an experience I would anticipate as being 'pleasant.'
I really have no idea why I was downvoted for this: everything I said was factual in nature. It is an open question on whether or not we know how to create software that has the emergent property of consciousness.
Indeed. There is a huge distance to go before we can simulate things like actual thinking brains. The Blue Brain project and the proposed project use compartmental models, which essentially break neurons down into equivalent electrical circuits. These are plausible models of neurons, but many pieces of the biological puzzle remain elusive or highly hypothetical in them, like plasticity, brain rhythms, etc.
It's not the first time someone has attempted to replicate a "whole brain"; somebody used Izhikevich's simplified models to simulate a cat brain a few years ago (Markram had a very public spat with them). Those results were not very revealing: they did notice some brain waves, but nothing like an indifferent cattish thought.
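The Izhikevich model referred to here trades biophysical detail for speed: two coupled equations per neuron instead of a full compartmental circuit, which is what made a cat-scale simulation feasible at all. A minimal single-neuron sketch using the published regular-spiking parameters (the constant drive current and run length are illustrative choices, not anything from that simulation):

```python
def izhikevich(inputs, dt=0.5, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Izhikevich (2003) simple spiking model, regular-spiking parameters:
        v' = 0.04*v^2 + 5*v + 140 - u + I
        u' = a*(b*v - u)
    with reset v <- c, u <- u + d when v reaches 30 mV.
    Returns the spike times (dt in ms, currents in the paper's units)."""
    v, u = -65.0, -65.0 * b
    spikes = []
    for step, i_in in enumerate(inputs):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + i_in)
        u += dt * a * (b * v - u)
        if v >= 30.0:                # spike: record time and reset
            spikes.append(step * dt)
            v, u = c, u + d
    return spikes

# A constant 10-unit drive over 1000 ms yields a regular, adapting spike train.
spikes = izhikevich([10.0] * 2000)
print(len(spikes))
```

Because each neuron is just two state variables and a reset rule, millions of them can be stepped per second on commodity hardware, which is the entire appeal relative to compartmental models.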
That said, this project needs to go forward. It's the LHC of computational neuroscience. It could be completed much faster, though; all we need is computers, lots of them.
Everyone in the comments seems to be quickly attributing consciousness directly to a sufficiently connected deterministic system. This seems to guarantee a deterministic view of life, which I find troubling.
That author, like everyone else I've ever heard argue against the basic connectionism/neurobiology first principles model, takes a bit of biology, a bit of physics, and then throws in philosophical crap that has nothing to do with science, engineering, or any reasonable way to analytically understand the brain. This mixing of science and navel-gazing is useless.
The question that pops into my mind is: how could we model a human brain while we still have no clear idea how it works?
There are many, many unknown genes and pathways involved in brain development and function. Without knowing these details, how would one go about 'emulating' a brain?
I often dream about "building" a digital human brain (not that I have the knowledge to do it... just for the sake of dreaming about it :p). That'd be fantastic... think about all the advancement in medical research, being able to "simulate" a real-world scenario.
"Good morning, Virtual Patient 2872. Sure, we can call you d0m if you'd like. Today, we'll be testing out a new suite of anti-schizophrenic medications. Please describe how you are feeling, in detail, so we may shut you down and try another variation."
If you set the Virtual Patient software to run with less than several square km of simulated environment or in low resolution mode, then you need to be sure and run the "defiant VP detector and shutdown routine" so as to avoid generating reams and reams of vitriolic rants mostly involving violations of human dignity. You can just filter those out of the data-set later if you want, but the d0m patient has a rather high defiance ratio, so better to shut the unhelpful instances down early and not waste the computation up-front.
It's sad to see a discussion that could be of interest to computer hackers (the current neurosimulation software is decades old, horrible, and could really benefit from some new ideas) ending up in the usual blah blah about self-aware robots taking over the earth.
/signoff
Here's a link for those wanting more info: https://cbrain.mcgill.ca/tools/visualization
</shameless_plug>