Biological systems are nothing like anything we would ever engineer, and to understand them we must remove our "anthropocentric engineer goggles" and look at them for what they are. Analogies between biological systems and computers, software, machines, etc. are very loose analogies meant to illustrate a point. Never take these analogies too seriously.
DNA is not a program-- it is a molecule, and one that may very well do things at the quantum level that are biologically important.
The brain is not a neural network. It is an interconnected colony of living cells of a variety of types, and it has been shown that all types of cells in the brain are involved in cognition.
We are nowhere near anything with the parallelism or information density of the brain. Getting close would take an advancement in computer technology of the magnitude of the transition from vacuum tubes in individual boxes to a 32nm Core i7. Nature has had billions of years, and it is way ahead of us. There's a nascent field called quantum biology that suggests that the brain may very well be a quantum computer, so maybe quantum computers are the vacuum tubes->ICs scale transition I am speaking of.
I think it's now possible with a big rack of multi-core machines in an MPI cluster to approach the capabilities of a fruit fly's brain.
Cells are not machines. I don't think we have the language to really describe what they are perfectly, but the closest I can come is "stochastic quantum probability field device." A bacterial flagellum is not a "motor," it is a quantum-scale chemo-electro-motive... well... our language breaks down. Like Feynman said, don't tell stories. Just speak literally and then use math.
Actually, come to think of it, there's one engineering analogy that might work for biology. Biology is quantum-scale nanotechnology. Yeah, that's pretty close.
(On a related tangent, I've found that many engineers are sympathetic to intelligent design type arguments against evolution. This is because they try to think about biology like engineers and take these machine analogies literally. It just doesn't work like that.)
...so about 35 years? ;)
Advancement is governed by economics as well as technical capability. There must be demand for new technology, or a field stagnates. Witness aviation as an example... utterly stagnant outside of military niche applications.
People seem to no longer want faster and faster computers, and the market seems to be moving toward lighter-weight lower-power portable devices like netbooks, the iPad, etc. Those have slower CPUs than current-generation desktops. I suppose the extreme gamer and server/datacenter markets are still driving performance, but for how long?
One problem is that programmers are not using the capabilities of current-generation processors, partly because the dominant OS (cough Windows cough) makes it horrifically painful to deploy desktop apps. This drives all development to the web and turns desktops into thin clients. In the end this kills demand for performance outside the datacenter market.
If you could pack the "extreme gamer" capabilities of a Playstation or an Xbox into a format as "usable" as an iPad... then you would have engineered the next iPad.
The iPad was able to come into existence because we've finally hit the point where we can cram that much computation into a small form factor (along with all the other engineering advances like wireless networking, reduced power consumption, better displays, and longer battery life).
Most of those advances are directly descended from the pushing of the bleeding edge. Companies / people are not simply going to go "oh we've got iPads now. So no need to make anything faster / better / bigger".
 By usable I'm not talking about some magical Jobsian property of the device. I'm not even talking about the software interface. I'm talking about being able to surf the web / post to your blog / whatever while on the toilet. Try doing THAT in 1995.
If 35 years (according to Henry Markram, linked below, it's only a decade) were all it took, then we could simulate the brain today at a reduced speed and get meaningful output; after all, all you'd have to do is slow down the inputs accordingly.
We're as far away from having a universally teachable computer (not programmable!) as we were in the early 70's when true AI was only about a decade away.
Some interesting reading about the 'state of the art':
That alone may already be a mistake: it's an observation, not a law, after all.
Besides, compared to 35 years ago we can now do things 1,000,000 times faster, but computers are not 1,000,000 times 'smarter'; they just give the same answers that you could compute back then, only faster and on fewer machines.
The future is parallel anyway, so it isn't Moore's law (the increase in density of transistors on a chip) per se that will drive this; more likely there will be a switch to increasing chip packing density with smaller chips (bigger yields) and better communication between the chips (think computing fabric).
We need a huge advance in programming languages before we can really contemplate building an AI that takes advantage of such a structure, though. Simply simulating the organic soup that forms a brain is going to be a much harder problem computationally, and it may simulate a dead or an insane brain much more easily than a live and thinking one.
Actually, if that last shift took 35 years, the next one of that magnitude will be even faster.
This is Kurzweil's fundamental insight: exponential growth is faster than people realize. We consistently underestimate it because our brains are predisposed to think linearly.
If it takes you 1 year to solve 1% of a problem, your brain feels like you're 99 years away. In reality, you're only about 7 doublings from completion.
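A quick sanity check on that arithmetic (a minimal sketch; the 1%-in-the-first-year framing is from the comment above):

```python
import math

# If year 1 solved 1% of the problem and capability doubles each year,
# how many doublings until the remaining 99% is covered?
progress = 0.01
doublings = math.ceil(math.log2(1.0 / progress))
print(doublings)  # 7 (log2(100) is about 6.64)
```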
It may be a physical limit, a supply-side resource limit, or an economic demand limit, but there will be a limit somewhere.
Without limits, a single bacterium could fill the entire universe in a few years.
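To put a rough number on that (every figure here is an assumed round value, not a precise measurement), an unchecked doubling process needs only a few hundred doublings to outgrow the observable universe:

```python
import math

universe_m3 = 4e80     # assumed volume of the observable universe
bacterium_m3 = 1e-18   # assumed volume of a single bacterium

# Doublings needed before the colony's volume exceeds the universe's:
doublings = math.ceil(math.log2(universe_m3 / bacterium_m3))
print(doublings)  # 328
```

At bacterial division rates, 328 doublings pass almost immediately on cosmic timescales, which is exactly why real growth always hits a limit first.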
Sometimes things do grow like that for a while, but Kurzweil's attempt to turn this into a universal law and neglect limits is hand-wavey and silly.
The interesting questions are how much computing power you need to perform tasks equivalent to a human brain, and whether current technology will reach that before it plateaus.
Exponential growth of tools opens up an exponential number of different avenues of exploration - if computers didn't advance at all for ten years, we'd still come up with many more ways to use them. With them advancing exponentially, we can not only find different ways of using them but new fields where different forms of exponential growth can happen. And so-forth. There's no fixed frontier but a moving process.
This isn't saying it's all wonderful but it's all likely to be a bit beyond our ability to encompass it - to draw a circle around it.
If the solution is 99% easy and 1% hard, then you may only find out how hard that 1% is after completing the easy 99% of the problem.
Many problems are like that, and simulating the brain is an excellent candidate for being such a problem. If it were just a matter of throwing more computing power at it, we'd have solved it years ago; it's that big a prize. But there is still a large part of our understanding missing, and understanding does not yield to Moore's law.
I.e., if emulating a human brain is only N times as hard as emulating a flatworm, Moore's Law might do the trick.
But if emulating a human brain is more like (flatworm complexity)^(number of cells in human brain - number of cells in flatworm brain) then Moore's Law is unlikely to help for a very long time indeed.
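The difference between those two cases can be sketched numerically, assuming capability doubles every two years (a stand-in for Moore's law):

```python
import math

def years_to_close(factor, doubling_period_years=2.0):
    """Years of steady doubling needed to gain `factor` in capability."""
    return doubling_period_years * math.log2(factor)

# If the brain is "only" a trillion times harder, the wait is decades:
print(round(years_to_close(1e12)))     # 80 years

# If the hardness is exponential in brain size, even a modest exponent
# (2**1000 here, far smaller than the one suggested above) is hopeless:
print(round(years_to_close(2**1000)))  # 2000 years
```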
PS: Don't forget this source code has been hot patched (http://en.wikipedia.org/wiki/Patch_(computing)#Hot_patching) over 3+ Billion years.
Edit: To continue the analogy the boot loader has been lost to time (or sacrificed to free up memory), so getting a working system requires copying not just source code, but also much of the current state of the system.
I also dislike those sorts of analogies because they are loaded with aesthetic value judgements that do not apply. Programming code that looked like that would be ugly. The genetic system is beautiful and elegant.
Only because you don't have to maintain it
This encodes protein X; however, it only folds up correctly 15% of the time. Y bumps things up to 70%, and Z gets you to 90%. The other 10%? Well, that depends on the shape some of these are used by Z to do... Why do we know this? Well, both Y and Z are defective 2-3 percent of the time, resulting in... etc.
PS: Don't get me wrong, the happy path works well most of the time, and when it fails early it's just a non-viable embryo, so no problem. However, saying it's elegant is like saying all airplanes in the sky must be easy to maintain because you never see anyone outside fixing them.
It's more elegant than you think.
This is the minority view in neuroscience.
> In quantum terms each neuron is an essentially classical object. Consequently quantum noise in the brain is at such a low level that it probably doesn't often alter, except very rarely, the critical mechanistic behaviour of sufficient neurons to cause a decision to be different than we might otherwise expect...
—Michael Clive Price
But... I do tend to sympathize with the minority view here. The problem is that I don't think the majority are looking at the complete system. Yes, the macroscopic activation and conduction behavior of neurons probably can be modeled classically. But that behavior, as well as things like where the axons and dendrites connect, is governed at the meta level by the genetic regulatory networks and metabolic machinery of neurons and their support cells. All that involves at least thousands of genes and a lot of interactions that may very well extend down to the quantum level.
It's a living cell that grows and changes over time, not a simple gate that can be modeled by an equation. Modeling a neuron like that is like modeling a star as a single point source of light because it looks like that from far away. You can model the way stars look through a telescope like that, but that does not accurately describe what a star is.
This seems so unlikely. Isn't it more likely that it takes a rack of multi-core machines to simulate a fruit fly's brain using our extremely primitive algorithms for approximating intelligence?
In other words, it's a software problem.
It's a software problem and a hardware problem. Our hardware is not up to the task, and even if it was we wouldn't know how to program it.
Evolutionary computation and non-von-Neumann architectures such as stochastic data flow architectures might be where to start. We would not write the code. We would build the right kind of architecture and then evolve the code within that architecture.
That sounds more like an approach that I think would produce results.
The key here is 'right' though, what is right?
Kurzweil himself would admit that we do not know the details of how all of our technology works. Humans at this point understand very little about atomic physics (we still account for the majority of mass by calling it dark matter and hiding it in a formula constant), yet we can produce atomic explosions. I think one thing a lot of people are missing here is that you only need very little theory before you can apply it. Understanding WHY the theory works is a much harder problem, unfortunately.
I expected a rather stronger argument, especially from someone as high up in the hierarchy of science as Roger Penrose, and after laying such a huge foundation.
Maybe there is one but if there is I haven't found it in that book.
Please, if you've subjected yourself to that nonsense, cleanse your palate with The Road To Reality - math/physics is where Penrose shines, and that book is him at his brightest.
What if the background stuff that you neglect from your model is where all the interesting stuff happens?
Some people see the philosophical zombie as an argument against consciousness being material at all. I don't really see that, but I do see it as an argument against the idea that you can achieve sentience by emulating a coarse-grained approximation of the brain. I think you would have to either really totally understand the brain or harness evolution and let it build you a sentient being whose structure captures the essence of organic life within whatever its embodiment happens to be. You might end up with something that looks nothing like the brain superficially, but that embodies what the brain does somehow.
It sounds like I'm making a new-agey argument against reductionism, but I'm not. What I'm arguing against is overzealous reductionism... the idea that you can get a grainy image of something and quantize it and you're done. You might be able to do that in some areas, but in biology you can't get away with that. Very small causes can be just as important or more important in living systems than very large ones.
To avoid the philosophical zombie problem, I think at the very least we need a solid theoretical understanding of the phenomenon that we are trying to capture. We have to know what we are trying to do, otherwise we do not know how to start to go about doing it or whether we have done it or not. If you want to land on the moon you need to know what the moon is, where it is, and where you are in relation to it.
That means that we need:
1) A quantitative, hard, solid definition of life. Right now what we have is a qualitative phenomenological definition of life as it exists on Earth. I'm imagining a definition of life that's as solid as the thermodynamic definition of entropy or enthalpy. Based on what I've read in this area already, I would say that it's almost certain that a definition of life will be stated in terms of thermodynamics. Google Ilya Prigogine and dissipative structures to get started.
2) A definition of some sort for consciousness. Right now we have basically nothing here... not even a qualitative set of criteria like we have for life. We know that we are conscious, and we suspect that at least some other living things are conscious. We do not know whether all living things are conscious or not. We do not know whether the set of all conscious entities is entirely contained within the set of all living entities or whether something could be conscious but not alive.
IMHO dismissing #2 is chickening out. Dismissing #1 is definitely chickening out. Not answering the questions means you don't know where the moon is and you might be landing in New Mexico instead.
Sure, but Kurzweil's claim is merely that if DNA can encode everything the brain does with N bits of information, then regardless of how the DNA behaves, we have at least a loose estimate of the level of complexity in the brain. I don't necessarily think that he's right that AI will first be achieved in that manner, but I think I have a bit more faith in human ingenuity than he does, and I think we'll get the software there "by hand" before brain scanning hardware can actually do what he needs it to do.
Of course it's true that the decoding scheme and the dynamics of the end result could provide a whole lot of additional complexity, just like it's possible to build a decompression algorithm that allows us to compress a Linux distribution into 1 byte (put the whole program in the decompressor). But quantum effects or no, it seems extremely unlikely that the physics, chemistry, and biology behind all of this are somehow conspiring, magically tuned to provide an awesomely efficient set of basis functions to encode the algorithms that the brain needs to apply - sure, some parts of the brain's work probably exploit physical coincidences that make the overall job easier, but as far as we know, quantum chemistry doesn't offer a "do intelligence" utility function, that will have to be built up from much smaller sub-units. Far more likely is that the bulk of the brain's function is more or less explicitly coded for somewhere within our DNA.
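The decompressor trick mentioned above can be sketched in a few lines (a toy illustration, not a real compressor): relative to a decoder that already contains the payload, any input "compresses" to one byte, which is exactly why complexity estimates have to count the decoder too.

```python
# All the real information lives in the decompressor, not the archive.
PAYLOAD = b"pretend this is an entire Linux distribution"

def decompress(archive: bytes) -> bytes:
    # The single-byte "archive" is just a trigger for the built-in payload.
    if archive == b"\x00":
        return PAYLOAD
    raise ValueError("unknown archive format")

# One byte in, everything out:
assert decompress(b"\x00") == PAYLOAD
```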
"Involved" is a serious weasel word when it comes to the brain, especially when AI is under discussion; everything in your skull is "involved" in cognition, but that doesn't mean it's doing anything particularly important, and it definitely does not mean that every detail of the dynamics is an absolute necessity to obtain Cognition.
But I think we might be arguing different things...
On a related tangent, I've found that many engineers are sympathetic to intelligent design type arguments against evolution. This is because they try to think about biology like engineers and take these machine analogies literally.
[Aside: I don't know too many engineers that buy into intelligent design, but I guess YMMV...]
The mistake that biologists make is that they always assume we're thinking about biology, and that our goal is to understand biological systems.
But when we're talking about AI, the goal is not to understand the brain, it's to figure out how to do something pretty close to what the brain does. Chances are, that's a much simpler goal than figuring out the dirty details about what every cell in the thing does. But it means that we have to be very careful not to get lost in the muck, worrying about the form of the brain's computations rather than the function. This is tricky, because it may be that the way the brain gets things done is not very well abstracted or comprehensible, so it's possible that we'll need to pick the level from which to draw inspiration from the brain very carefully (I think the apparent uselessness of neural nets in strong AI is a good indication that we might need to abstract at some level other than "neuron").
Again, though, Kurzweil has a very specific (and I'd argue, peculiar, at least relative to most people in AI) view on this, and thinks that full scale detailed brain simulations are the way that we need to do this (or at least that this will be the quickest route to the goal). I suspect even he doesn't think we'll simulate the physics of every neuron in detail, though, preferring to instead abstract away the most important bits of functionality.
It doesn't directly encode the structure of the brain. Development is required.
"Involved" is a serious weasel word when it comes to the brain, especially when AI is under discussion; everything in your skull is "involved" in cognition, but that doesn't mean it's doing anything particularly important, and it definitely does not mean that every detail of the dynamics is an absolute necessity to obtain Cognition.
But I think we might be arguing different things...
We are arguing different things. My point is that you can't write a little equation for a neuron (which is an entire living cell!) and wave your hands and say "done!"
Kurzweil is way way way too hand-wavey for me.
Yes, we're in full agreement on this, especially the part about Kurzweil - I don't think he advances the science of this at all, and his poppy PR approach rubs me the wrong way, for sure, as do many of his specific ideas.
In particular, I think it's a terrible idea to hinge our hopes for AI on the idea that we'll be able to carry out a perfect enough simulation of a full brain to call it "done". Not only does it seem to be a pipe dream at present, depending on a whole bunch of technological advances that may not come for a long time, it would be almost completely inextensible and teach us very little - once we had such a simulation running, it's unlikely that we'd understand enough about how to modify it to improve upon it.
I don't quarrel with the idea that DNA encodes the brain extremely indirectly at all, including a substantial development process that depends on a lot of things other than the pure genetic code, and I also agree that the dynamics of a brain are very dependent on the details of all sorts of cells, not just an oversimplified logical representation of them. Where I think we start to diverge is that I have issues with the idea that the particulars of any of those processes can somehow "piggy-back" any non-trivial amount of functionality into the system that's not coming from the genetic code.
Here's the clearest way, I think, to state my information-content claim: if we were to randomly change the details of the way neurons function in detail (still leaving them with the same highest-level capabilities), and randomly change the way brain development happens, and randomly change all steps of entire process that leads from DNA->brain (again, subject to the restriction that the whole thing still works), and even - I daresay - randomly change the laws of physics, then...
...if all of these changes still permitted us to write down any string of DNA (or whatever our new version of DNA was) that ultimately resulted in the growth of an intelligent human brain, the amount of DNA required would, on average, be pretty darn close to the amount that we see used today. And that has implications for other implementations of the same logic, namely that if we could find some way to optimize towards whatever solution evolution has found, we could probably end up with a shorter code for it than evolution has because we can allow all parts of the DNA->brain process to optimize for compression whereas the real physical process is largely frozen against change.
That's the sense in which I think Kurzweil has a reasonable estimate, nothing more, nothing less; where he takes it from there is another story altogether. :)
Your Java program doesn't directly execute on the CPU, compilation and interpretation are required.
It's just that a brain alters its behavior and structure as a result of processing inputs (development).
We don't (normally) write Java programs to change their behavior as they process information from their environment during execution.
Translation: translating Lord of the Rings into Chinese.
Development: hearing a short plot synopsis of Lord of the Rings and writing your own fantasy novel based on the same theme.
Development really does do something analogous to that. It takes a collection of proteins and rules and, through embodiment within the laws of physics, constructs the phenotype. The phenotype contains vastly more information than the genotype, and two identical genotypes will not produce absolutely identical phenotypes.
There is something fundamental about development that we do not understand. This is widely acknowledged in the developmental biology, evo-devo, and evolutionary computation fields. A closer analogy than the loose one above might be some of the behaviors we see with fractals and cellular automata, though development is less deterministic than that.
Evolution and development are somehow related. We don't quite get that either. But both processes add vast amounts of information and both involve adaptation.
The problem is that DNA is not sufficient. Organisms don't grow from naked DNA in a vacuum; the DNA is always contained within a cell which is enclosed within a more complex bio structure (egg, womb, etc) which may be contained within the parent organism in the case of mammals.
You need all of this information plus knowledge of the various interactions of the different environments to pursue Kurzweil's approach.
The universe of necessary and sufficient information is much, much larger than the 3 billion base pairs in your DNA.
It's quite a leap of faith to make these statements about the technological future with so little progress to show in these fields over the last few decades. I think the most impressive a-life demos are now almost 10 years old; the best we can really simulate is (drumroll) a cockroach. And personally I think that's a milestone achievement, because it means that at least we have a principle that works.
Going through the DNA route to get to a working brain seems a very roundabout way of getting there: it would require all of the embryonic mechanisms to be modeled accurately, as well as something like the first several years out of the womb, before you'd know whether you had created something insane or something resembling intelligence.
Assuming you'd recognize it as intelligent even if you succeeded, there may be more ways of being intelligent than we know about.
I simply don't buy the argument that the algorithms of cognition inherit any substantial amount of functionality from their physical implementation, and hence, I see Kurzweil's complexity estimate as somewhat reasonable.
I'm somewhere in the middle on that. I wish for things biological to be clear-cut and deterministic enough that we can fully understand them the way we understand mechanical systems.
But precisely because the brain is encoded in precious little DNA, there is some evidence that there is more to it than meets the eye. After all, if 50 MB of gzipped data can encode the whole thing, why do we have such a hard time understanding it?
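The "50 MB gzipped" figure is itself a back-of-envelope estimate; roughly (with assumed round numbers):

```python
# Four possible bases means 2 bits per base pair.
base_pairs = 3e9
raw_bytes = base_pairs * 2 / 8
print(raw_bytes / 1e6)       # 750 MB uncompressed

# The genome is highly repetitive; assuming roughly 15x compression
# gives approximately the 50 MB figure quoted above.
print(raw_bytes / 15 / 1e6)  # 50 MB
```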
There is enough repetition in there that some dyed-in-the-wool reverse engineer would have put 2 and 2 together by now if the secret were in the wiring or in some simple algorithm (ANNs, for instance).
Apparent order appearing from chaos is a field that has seen some study, and the amount of complexity that can arise from simple starting data is quite amazing; witness the Mandelbrot set and other fractal forms.
It may be very hard to short-circuit such understanding and to 'divine' the workings of the formula without first going the long way around to understand the whole system rather than the 'seed' from which it grows. This is not simple mathematics, where a simple equation on complex numbers gives you the Mandelbrot set; it's possibly machinery interpreting an equation with 50 million terms.
In different terms: given a very distorted (dissected) picture of a three-dimensional Mandelbrot set, would you be able to figure out the formula that gave rise to it without prior knowledge of the mathematics involved?
But it does suggest that we should be considering the functions that such higher-level units might have, such that they become more intelligent as we compose them. Easier said than done, of course, especially since it's very difficult to actually observe brain dynamics in any detail.
Absolutely, and this is one reason why I think Kurzweil's approach overall is pretty foolish.
But I do think that his information-content claim is within the bounds of reason (I've explained my stance on that many other places in this thread, the gist is that it's highly unlikely that the whole developmental process supplies a huge amount of information "for free", in the same sense that it's unlikely that you'll achieve a 10x compression in code size by using Python instead of Ruby to write a program) and it at least tells us that a solution to the problem of intelligence does exist that doesn't require (for instance) explicit hard-coding of every neural connection inside a brain. In fact, it requires massively less information to specify than that, so there's some hope that in the end we'll be able to come away with reasonable approximations to that algorithm since it must be "fairly simple" (meaning: more complicated than anything we've tackled before, but maybe still within the realm of possibility).
That hints at the fact that AI researchers (at least those that don't buy Kurzweil's ridiculous "simulate-the-whole-thing!" approach) should be looking more into building out small "genomes" into massive structures rather than focusing too much on individual explicitly specified processing networks, because nature is clearly doing something like that, and it seems to work pretty well, letting it achieve a remarkably compact solution, given that evolution tends to produce very bloated code as a rule. There has been tentative research along these lines (http://en.wikipedia.org/wiki/HyperNEAT, for example), but there needs to be more, particularly to figure out what sorts of things we actually want these massive structures to do, what sorts of processes we should be focusing on to control that building process, what types of units we need in order to make the processing that we're doing feasible, etc.
Solution to unified field theory. That's about 32 bytes of information, uncompressed. There, since that statement is so low in information content, it must be easy to find. We have a loose estimate of its complexity with my statement, right?
Like that rug, the brain contains oodles of complexity that doesn't matter at all for our purposes.
A lot of these questions are completely unanswered. We do know at least that the brain is extremely resilient to changes in chemistry and can work quite well even in the face of extensive damage, which is an indication that we might be surprised how little of the overall arrangement is actually necessary to keep it working properly.
We need to make sure that we're being careful with our language, too: the complexity necessary to specify any brain is a whole lot lower than the complexity to specify one particular brain. In AI the goal is the former, but for the latter, we really do have to worry about each strand in the carpet. An AI researcher might not give a rat's ass about reproducing your memories, and will be more than happy to construct something that can form any memories; on the other hand, to you, your memories (and the detailed wiring inside your head, which we might be able to alter substantially without "breaking" the brain) are vitally important.
That leads to 'seed AI', http://en.wikipedia.org/wiki/Seed_AI when run in reverse, so you'd have to implement the 'minimally viable self improving brain', and take it from there.
And even there maybe not all the strands in the carpet have to be just so, but it may very well be that there have to be certain amounts of each colour in roughly such and such a pattern with interconnects between these larger groups and so on. Some of that information is known but definitely not all of it.
The damage angle is a tricky one: some damage seems to be absolutely no problem at all, even if it is major, while in other cases the smallest bit of damage seems to be enough to cause terminal failure. There are a lot of clues in there about the organization of the brain.
Not even close. Imagine a kilobyte filled with alternating zeros and ones. We have two commonly used methods for describing the information content of this kilobyte. Shannon information content is closely approximated by today's compressors, and 1k worth of alternating zeros and ones compresses to next to nothing. Kolmogorov complexity is the length of the minimum program necessary to produce the 1k of alternating zeros and ones; again, a simple loop that increments a pointer and dereferences it to fill each address with alternately a zero or a one suffices, so a mere handful of bytes are necessary.
Now imagine that the RAM chip holding the zeros and ones is hit by a bunch of cosmic rays, flipping about 10% of the bits. The size of a compressed version of the new kilobyte and the length of the Kolmogorov program to generate the new sequence are both going to increase dramatically.
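This is easy to check empirically (a minimal sketch using zlib as a stand-in for "today's compressors"):

```python
import random
import zlib

random.seed(0)

# 1 KiB of alternating bits compresses to next to nothing.
regular = bytes([0b01010101] * 1024)
small = len(zlib.compress(regular, 9))

# Flip roughly 10% of the bits, cosmic-ray style.
noisy = bytearray(regular)
for _ in range(len(noisy) * 8 // 10):
    bit = random.randrange(len(noisy) * 8)
    noisy[bit // 8] ^= 1 << (bit % 8)
big = len(zlib.compress(bytes(noisy), 9))

print(small, big)  # a handful of bytes vs. hundreds of bytes
```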
Now, the thing is that DNA produces a brain to a fairly homogeneous pattern; it's the equivalent of my series of alternating zeros and ones, although admittedly a bit more complex than that. This is to be expected: there's more Shannon information in DNA encoding a brain than there was in my simple pattern of zeros and ones.
The cosmic rays are the equivalent of a brain learning, encoding information by weighting connections between neurons. This process massively increases the amount of information stored in a human brain, but this information must be copied if you actually want the copy to behave like the original. I would expect this amount of information to be several orders of magnitude bigger than that found in the DNA, which is why Kurzweil's claim is just completely off the wall.
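A rough order-of-magnitude comparison supports that (every figure below is an assumed round number, not a measured value):

```python
neurons = 1e11              # ~100 billion neurons
synapses_per_neuron = 1e4   # ~10,000 synapses each
bytes_per_synapse = 4       # assumed: one weight plus addressing overhead
learned_state = neurons * synapses_per_neuron * bytes_per_synapse
genome = 50e6               # the ~50 MB compressed-genome figure

print(learned_state / 1e15)    # 4.0 -- petabytes of learned state
print(learned_state / genome)  # 8e7 -- roughly eight orders of magnitude
```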
But I'm just talking about (as are most AI people, and I think Kurzweil, as well) the informational content required to build a brain. Any old brain, could be yours, could be mine, is probably neither. The information required to do this is much lower, and corresponds loosely to your first example, a kilobyte with alternating ones and zeros.
Sure, maybe Kurzweil doesn't understand the brain on any deep level, and indeed maybe even those who understand it better than he does don't understand it well enough at this point.
But Kurzweil's basic argument really isn't about that; it's about the exponential advance of tools and technologies and understanding on multiple levels. Will the Blue Brain project succeed? Will some lesser-known project succeed? Will the process take twenty years instead of ten? All unknown, but not crucial to the implications of exponential change. When you have tools that improve exponentially, what you can do tends to improve also. And then the whole process builds on itself. Do I know where this will go? No, but I don't think you do either.
Take another example: your post could be considered a message, or it could just be considered electrons emanating from an LCD monitor and getting received by a brain, which in turn generates certain behaviors in a large meat machine. That's true too, but your post can equally be described as a message about the need to think about reality literally instead of abstractly in order to understand truth.
However, if we can only access truth by talking about the world literally, then no one, ever, has had access to truth. Even today we cannot describe the world literally at all. An electron is merely an abstract representation of even more fundamental quantum-physical dynamics. And who's to claim quantum physics is really the bottom of the reality stack? For instance, we have no idea why waveforms collapse to their particular determinate states. There is something even more fundamental behind this phenomenon that we currently have no clue about. Therefore, by your criterion for truth, everything discussed and thought about throughout history has been nothing but gibberish, including your own post.
So, clearly you do not even agree with your own claim, since you seem to think you are communicating something to us.
I call BS.
>>Biological systems are nothing like anything we would ever engineer
You really believe, you can predict what human technology will look like in 50, 100, 10000000 years?
>>DNA is not a program
A hard disk platter is not a program.
>>Nature has had billions of years, and it is way ahead of us.
That's why birds are so much faster than planes.
>>Cells are not machines..."stochastic quantum probability field device."
All modern computers are quantum machines.
>>Biology is quantum-scale nanotechnology
And so is today's electronics.
wtf? As if the only goal for flying is to go as fast as possible. A bird can land on a branch that can hardly support its weight while the wind is blowing said branch.
Basically, we have no idea where we will get the AI source code we can actually do something with, but we have some reason to believe that the most concise version of the source code won't contain more data than the human genome.
The rough intuition might be this: if we wanted to simulate the brain in the caricatured reverse-engineer-the-DNA way, we'd need an impossible computer that could simulate years' worth of exact quantum-level physics in a cubic meter of space. But the human DNA and the basic cellular machinery for hosting it would be the only seriously difficult bits to stick in there; the rest would just be simple chemicals and the impossibly detailed physics simulation.
I guess the analogy then is that we write the AI source code (which we don't know how to write yet), which is supposed to end up at most around the length of the human DNA, but which can run sensibly on existing hardware. Then, deterministic processing entirely unlike the immensely difficult-to-compute protein folding from DNA will make this code instantiate a working AI somewhere at the level of a newborn human baby, in the same way that the genome initiates protein-folding-based processes that make a single cell grow into a human baby given little more than nutrients and shelter from the external physical environment.
So it doesn't seem like a really strong statement of overoptimism. It's basically just saying that the human brain doesn't seem to require a mystifyingly immense amount of initial information to form, but instead something that can be quantified and very roughly compared with already existing software projects. I'd still guess it might take a little more than ten years to come up with any sensible code with hope of growing into an AI, though.
So the point is that perhaps if you had a system that simulated all of the laws of physics exactly correctly such that proteins folded and interacted exactly right, only then could you get away with an amount of input equivalent to the amount of information encoded in the part of our DNA related to the brain.
Actually encoding those rules is probably the harder part of the problem, and could easily take several orders of magnitude more work. (10x? 1,000,000x? Who even knows).
2) While it's true that those runtime rules (which we can kind of consider as the "interpreter" for our DNA) are extremely complex, this has almost zero bearing on the informational content in our DNA that is put towards creating the "intelligence algorithm", whatever that is. Sure, there's probably a bit of extra compression based on the fact that the physics allows some actions to be "built in", but unless you believe that DNA is physically optimized to make intelligent computer construction very concise, the logical content of these computations is probably explicitly "written" in.
And it's hard to believe that DNA is somehow specifically optimized for intelligence, because it was first used in completely unintelligent creatures and appears in exactly the same form now.
Now, it may be the case that DNA's physics are tailor-made to efficiently code for useful physical structures. But intelligence is a level of abstraction above that, and we're all but guaranteed that very little compressibility exists in the "language" for such higher level constructs.
What would an argument be without a strained analogy: if you're writing a complex web application, the size of your application is roughly independent (within an order of magnitude, for sure) of the architecture that it will ultimately run on (where by "size of application" I mean the size of everything that it takes to run it, interpreters, frameworks, etc.). Sure, the binary size might be slightly different depending on whether you're writing it for ARM, PPC, x86, etc., but not hugely different.
We would be extremely surprised if on three platforms your executable weighed in at 10 mb and on a fourth (which had a few different machine level instructions) it compiled down to 10 kb - the only way we could imagine that happening is if someone somehow "cheated" and embedded large parts of your actual application logic into the processor, adding specialized Ruby on Rails instructions to the machine code, or something like that. :)
Encoding and dynamics details may make differences in compressibility, but past an order of magnitude, you're really talking about "cheating", and it's an Occam's Razor problem to assume that nature optimized in such a way for intelligence...
The problem for AI is not just encoding the DNA, as it were, it's in building all those other pieces around it. Estimating the complexity of building a software brain based on the amount of information in DNA is like estimating the complexity of building a web application using 1950's hardware. "It's only 10,000 lines of code! How hard can that be? All we have to do is write the code, plus the frameworks, programming language, and operating system, plus do all the hardware design."
Except that DNA doesn't even come close to being a high level language, since the low level details were not specifically designed for compressibility of the code (in fact, the low level details, the "bare metal ops", are pretty much fixed by the for-all-intents-and-purposes random laws of physics, which means we shouldn't assume that they enable any particularly high compressibility ratios for anything).
So a more apt comparison would be if we saw an assembly language program in some strange incomprehensible assembly language and said "It's only 10,000 lines of operations on the bare metal! Now all we have to do is figure out how the hell the system this runs on works, and how we can translate that code into a more sensible (and probably vastly more compact) form."
...which might even be a harder problem, to be fair.
Kurzweil's essentially proposed evidence of existence of an algorithm of length N that does whatever it is we mean by intelligence. Which is fine, and I think is probably correct (IMO, even his estimate about the minimal amount of code it would take is probably too high, though that's another story).
But he's overlooking the fact that the mere existence of such a compact algorithm doesn't help us find it at all, and I think a lot of the complaints others have made about his statements are more aimed at that leap of logic, not the existence claim itself. I completely agree that even brain scanning tech might not help us simulate the important bits very well, even if we did have access to that tech and computers fast enough to run the sims.
Instead, it appears that ontogeny must recapitulate phylogeny. The system must develop over time as a result of inputs (and the remembered collection of past inputs encoded in the DNA). It would be as if in order to build Twitter with Ruby on Rails, you first had to program a tax calculation application in Cobol on a 1950s mainframe.
Because the Turing machine is selected entirely by the sequence (the protein folding caused by the laws of physics is selected entirely by the sequence), the number of possible results (the number of different shapes that could result) is limited to the number of different sequences. That is, the information in the phenome seems to be limited by the information in the genome.
If you think of it as a two part message, with the first part encoding a model, and the second part configuring it, then the DNA can be seen as the configuration, and the laws of physics as the model (which isn't actually coded anywhere like DNA - we'd have to write that ourselves.)
This model is constant over all life, so that DNA from all species (plants and animals) share the same "model" (laws of physics that cause protein folding etc.)
Another example of a two-part message is that the first part is a programming language, and the second part is a program written in that language. For a high level language (esp with libraries), it's obvious that a very short program might do an awful lot; but the true information content is not that program alone, but the total including the language and libraries it uses.
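That language-plus-program split is easy to illustrate. As a rough sketch (the one-line program is invented for illustration, and interpreter binary sizes obviously vary by system), compare the length of a short program with the size of the interpreter it leans on:

```python
import os
import sys

# A tiny "second part of the message": one line that does real work.
program = "print(sorted(__import__('keyword').kwlist))"

# ...versus the "first part": the interpreter (and stdlib) it relies on.
print(len(program))                     # tens of bytes
print(os.path.getsize(sys.executable))  # the interpreter binary: far larger
```

The true information content of the one-liner includes the interpreter and libraries, just as the information content of the DNA "program" would include the physics "model" it runs on.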
However, and this is my point, I don't believe that the laws of physics have been constructed so conveniently that they provide as much assistance as a high level language with libraries. At most, nature may have stumbled onto hacks in physics (like surface tension, interfaces and gradients) and exploited them. Actually, given how long it took to get life started, perhaps it had to find a whole bunch of clever hacks (randomly recombining for billions of years over a whole planet) before it came up with a workable model (that is, the model that DNA configures.)
hmmm... we might be able to estimate the information content of the 'model' by how many tries it took to come across it.
I think that is a very original use of the word 'limited'; 'limited' in this case leaves enough room for random chance to come up with human beings.
For all practical purposes that 'limited' might as well be unlimited.
I have a micro SD card, smaller than my little-finger nail, that holds 4GB - eight times more than the human genome (using the article's figure of 4 billion bits). And that's pretty much the lowest capacity you can buy. Yet that amount of information is limited/finite: the possible states that memory can hold are limited/finite.
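The arithmetic here checks out: 4 billion bits is half a gigabyte, so a 4GB card holds eight times that:

```python
genome_bits = 4 * 10**9        # the article's figure for the human genome
card_bits = 4 * 10**9 * 8      # a 4GB (decimal) micro SD card, in bits

print(card_bits / genome_bits)  # -> 8.0
```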
BTW: I found the absurdist levity in the top 10 comments or so of reddit version of this thread a welcome relief - and also some penetrating insights, concisely put: http://www.reddit.com/r/science/comments/d24c8/ray_kurzweil_...
Just because a system has a finite description does not mean we can predict its behavior at a later time!
Systems such as our brains are extremely chaotic. Even if we were to simulate one and write programs to change its behavior by altering the underlying code bit by bit, it would be akin to moving butterfly wings to generate storms.
Also, analyzing such a large system would be "at least" an NP-complete problem, assuming we can even recognize a solution [compile and run a modified genome] in P.
The idea with the three-body problem is that after some time has passed, we have utterly no idea where the bodies have ended up. Ova don't grow into random jumbles of cells, most of the time they grow into babies.
It takes a lot of very specific information to grow into a baby instead of some entirely different arrangement of proteins. So either the environment needs to be feeding some rather specific controlling information that makes most ova grow into normal babies, or the cellular machinery itself has a system which compensates for external disturbances and constrains the design to mostly what the DNA directs it to be. As far as I understand biology, it's mostly the latter case.
> Ova don't grow into random jumbles of cells, most of the time they grow into babies.
Three bodies don't mutate into four bodies, and weather does not change into an ice age in an instant, but at the same time we cannot reprogram weather by introducing small changes.
...it might take a little more than the ten years to come up with any sensible code with hope of growing into an AI though...
Dude, you are all over the map.
... he seems to be using DNA as a measure for the amount of irreducible complexity that needs to go into a system that will end up with the complexity of a human brain.
At best, you could say it's a measure of the amount of irreducible complexity for an encoding of the required proteins. We don't seem to have a measure of the system, by which I mean the thing that models the relationships and interactions of the proteins (and their components) with each other and their environment.
And Myers is saying that DNA is a such a woefully inadequate measurement of complexity that it barely counts as wrong.
he seems to be using DNA as a measure for the amount of
irreducible complexity that needs to go into a system that
will end up with the complexity of a human brain.
This is like saying that the underlying complexity of the Mandelbrot is the set of pairs of real numbers.
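The Mandelbrot analogy is easy to make concrete: the generating rule is a few lines, yet the set's boundary is endlessly intricate. A minimal escape-time sketch:

```python
def escape_time(c, max_iter=100):
    """Iterate z -> z*z + c; return how many steps before |z| exceeds 2."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n
    return max_iter  # never escaped: treat as inside the set

# A crude ASCII rendering of the familiar, arbitrarily detailed shape.
for im in range(-10, 11):
    row = ""
    for re in range(-20, 10):
        row += "#" if escape_time(complex(re / 10, im / 10)) == 100 else " "
    print(row)
```

The pairs of real numbers are just the input space; the complexity lives in the iteration rule plus the process of running it, which is the point of the analogy.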
As I see it, a significant problem is designing a substrate in silicon, or whatever, that has the requisite complexity. I would not be too surprised to find out that the layout program for an AI is not too different in complexity from today's largest software projects.
This conversation here on HN is a great example of that. Simply by the way the article is written, it is being taken nearly as fact by most participants that a human-scale AI simulation must work by physically simulating the brain. This may ultimately be true but there is no a priori reason to believe it. The brain may implement something that can be simulated "close enough" by a much simpler computation system.
Chaos is chaotic, obviously, but the human brain is a pretty fuzzy system too. It can't be too pathologically chaotic; people speak as if getting the 15th decimal place wrong will blow up the system but the brain simply can not be that sensitive or the removal of a single neuron would break our brains. Our brain state must be at least metastable to work at all. Removing a neuron or getting something wrong in the 15th decimal place may result in some small change of behavior three years later vs. not removing it or getting it right, but our brain states are already so fuzzy and noisy that's not going to be the stopper.
The stopper will be to see whether or not there is a higher-level simulation that can be run that is less complex than simulating the physics entirely. The secondary question is whether we can make something that we would call human-intelligent even if it turns out we can never "upload our brains" without critical data lossage occurring. That would be something as intelligent as us that is nevertheless fundamentally incompatible with human biology, with neither able to simulate or understand the other. I can make coherent arguments either way, as can many people, but by framing the question as physical simulation this has not been one of the more intelligent debates on the topic we've seen here. Physical simulation is one possible path, and not even the most likely or interesting, to AI and brain upload.
So people figure why not "run the program" that already exists, and that's what this conversation is about.
Nevertheless, my gut feeling, too, is that Kurzweil is mistaken. I can't quite put a finger on it yet, but at least one problem I see is this: Kurzweil seems to suggest that the observation that the genome consists of only 50MB of data (after compression) somehow gives us an upper bound on the complexity of the system. I'd suspect, however, that it rather gives us a lower bound: factor in all the epigenetics, external interactions, the not necessarily simple rule set provided by physical chemistry (this is not in the genome, obviously), etc etc, and the problem may be quite a bit larger.
Take for example the way we currently believe gene transcription promoter networks to work. The combinatorial nature of those interactions means that even though the underlying data is "only" a few megabytes, the system you end up simulating gets very big very quickly.
One answer is "They will, and surprisingly quickly. But they will be a completely different set of complex interactions than are observed in the real world, because of some roundoff error in the binary representation of the Nth digit of some apparently unimportant constant. Unfortunately, because the system is complex, you'll probably spend the rest of your career trying to track down that error, and fail."
Another answer is: They would, if the simulation was comprehensive enough. Unfortunately, phase space is big. You just won't believe how vastly, hugely, mind-bogglingly big it is. Seriously: Your mind reels when confronted with the number of different molecular interactions going on inside the "simplest" single-celled prokaryote, so you abstract it away, almost as a reflex, to stop yourself from going mad. Then you abstract away the first-order abstraction. Then you keep going. Soon you begin to imagine that you can model an entire collection of a trillion organisms, just as a naive programmer imagines that they can rewrite Windows in three days if only they use a powerful enough language. It's a mere matter of programming!
And we even have an existence proof that such a thing is possible, given enough design time. Unfortunately, the existence proof says nothing about the odds of doing so very quickly -- in less than, say, a million years, which is very quick by historical standards.
The other answer contains an implicit assumption that's not obviously correct: you suggest that complexity only arises when you enumerate every possible dimension of phase space. But physical simulations have been very successful at reproducing complex behaviour from simple rules, without taking into account every particle's state vector.
Finally, did you really try to equate my statement to the statement that Windows could be written in three days? ...
As for this statement:
physical simulations have been very successful at reproducing complex behaviour from simple rules
Absolutely, but it doesn't follow that every complex behavior can be reproduced from simple rules. To overgeneralize from success in one field is the occupational illness of futurists. It's certainly a key problem for Singularitarians, who tend to get so enthusiastic about Moore's Law that they forget that most of the world has nothing to do with microelectronics.
So the real question then becomes: does biology tolerate working on an approximation of the underlying physics, and does that simulated biology still have the ability to exhibit intelligence? I think the first is a maybe, the second a yes, but I couldn't give you any reasons why, other than that our biology might need 'its' physics to operate, and that probably anything Turing-complete has the potential to exhibit intelligence, regardless of whether or not we find a way to achieve that.
Because so far, despite the best efforts of many geniuses and heroic computing resources, our simulations don't even reliably predict the real-world outcomes of far, far simpler systems. Ilya Prigogine won the Nobel Prize in chemistry for demonstrating that sufficiently complex systems display emergent behaviors that can never be entirely predicted by studying their components in isolation:
Kurzweil is hopelessly out of his depth in these arguments and is talking nonsense. Personally I'll be very surprised if we can construct anything approaching human intelligence in my lifetime.
Put in another way, what construct would qualify as approaching human intelligence ?
For instance, an intelligence that could be taught to read English and then have a reasonable conversation about a contemporary novel, with its own insights into the style and themes of the book, would qualify.
That said, I do expect to see great strides in the sophistication of machines in the next 30 years. They don't have to think like people to be useful.
I think the point he is making is that if your goal is to simulate the human brain you also have to simulate and thus understand all the little details of biology because transistors don’t magically have the same properties as proteins.
My current project is a search engine for protein chain geometry. We only have ~20% of the known proteins in our database because the data on the other 80% isn't accurate enough to be useful.
My simpler point is that Kurzweil's not taking a useful measure for the size of the system we're solving. (By the way, he plays the same kind of trick on his audience when he's pointing out there are only a few billion neurons in the brain - as if that were the only level of complexity in the brain).
No, common English use of "it's hard" means something completely different from CS "hard". CS hard means NP-complete, which translates into English as "impossible". Impossible because of well-understood mathematical reasons.
Quantum computers may solve it, indeed real life protein folding may have quantum computer-like properties.
If you're a computer guy, you should clearly understand what his fundamental disagreement with Kurzweil is about.
The point is, he seems to suggest that the genome is all you need, when clearly that's not true.
Even the 800MB of base pairs may have higher entropy than the machine language we're used to. 2000 lines of lisp or haskell are worlds away from 2000 lines of assembly.
> The media will not end their infatuation with this pseudo-scientific dingbat
This chimes with the large majority of bold scientific claims that appear in the press. For example, not that long ago the press jumped on Craig Venter's (http://bit.ly/uEC5) 'artificial cell' (http://bit.ly/c27AL5), hailing it as the beginning of man-made organisms and making bold predictions about the future of life itself, riling up environmental groups no end. (I'm not saying that wasn't a great achievement. But all his team really did was take out one DNA tape and put back an identical, if newer, version. Bread and yoghurt manufacturers have been doing a smaller version of that for a long time. Not exactly playing God.)
It would be nice if there was a scientific-bullshit detector that made sure the press didn't go crazy over wild claims. Proposal for a startup, anyone? :)
This is one reason that many, many times on HN when there is a link to a blog post about some news story on a science discovery, I post the link to Peter Norvig's article on how to evaluate research,
as it seems to be that most readers need more practice in critical reading of statements about science. PZ is one of the few bloggers who knows most of those points already, but he frequently writes about other people who forget them, so here in this thread too I'll remind HN readers about Norvig's advice on how to read about science.
This is slightly off-topic, but here goes anyway. I've long had an issue with the huge disparity between what humanities/arts people (including the large majority of the press) know about science, and what scientists know about the arts and humanities. Most scientists I know are more than able to hold their own in a conversation about, say, a good book, but 99% of everyone else I've ever met doesn't know/want to know the second law of thermodynamics.
I'm not pointing fingers here, I just think there's a serious lack of communication between arts and sciences. I think it's partly this lack of general scientific knowledge that makes the humanities/arts dominated world of the press believe pretty much anything a scientist says. And then, to make a good story into a great one, it's blown out of proportion. Ho hum.
More than that, though, it's a "hello, world" — although the cell itself didn't do anything useful, now we have the compiler working, albeit expensively. Now we can do experiments like the following:
- removing introns entirely to see if that damages viability;
- inserting the gene you want at a specific place in the bacterial genome instead of splicing it in at some random place.
Basically, it's a "control group" for a much more precise set of experiments than we've been able to do in the past. It's easy to take the ability to do "hello, world" for granted as a programmer.
This is more likely a lack of understanding on my part, but I'm not sure where epigenetics comes into it? The majority of the cell components - i.e. all the organelles, chemicals, etc - were already present and arranged in the 'surrogate' cell. So all the 'epigenetic' stuff was already in place. But please correct me if I'm wrong.
Not so sure about the random splicing part, either. Food manufacturers have been inserting genes into specific places on bacterial plasmids for a long time, using directors like codon relationships and ionic interactions. Again - if I'm outta line... :)
I don't actually know much about how position-specific current transgenic techniques are, so I could be wrong about that.
The only other source of information is the non-genomic environment - extra-nuclear DNA like mitochondria, and the womb (which is arguably already specified in the genome, unless mother nature has done a Ken Thompson http://cm.bell-labs.com/who/ken/trust.html at some point.)
But it's weird to claim that 50% of our genome encodes the brain. Really? Perhaps it's just that 50% is required by the brain, much of it being foundational to the whole organism (like standard libraries.)
Which would be true if the DNA specification for that particular part of the body were the only thing that specifies the brain. Myers is pointing out that that assertion is patently false. The environment of the developing creature and the interactions between cell types and their environment (and themselves) are a giant information content multiplier, and the DNA need not explicitly specify any of this information for it to exist and be relevant.
Bringing this to a familiar compsci example: imagine software for creating neural net recognizers. You can look at the source code for a net and say, "This will take N inputs and produce N outputs." You can look at a finished classifier and say, "Ah, I see what this does! It tells the airbags in this car when to deploy!" But that's as far as you can go without the training data that was used to train the classifier. This is a doubly good example because it's often very difficult to determine HOW a complex neural net is doing what it does, but it's fairly easy to explain how to train one to do that task.
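A minimal sketch of that compsci example, using a toy perceptron learning AND (the task and numbers are invented for illustration): the "source code" below is a dozen lines, but the behaviour of the finished classifier lives entirely in weights distilled from the training data.

```python
# The training data is the extra information: four labelled examples of AND.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]
b = 0.0

def predict(x1, x2):
    # Linear threshold unit: fire if the weighted sum exceeds zero.
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Classic perceptron rule: nudge the weights toward each mistake.
for _ in range(20):
    for (x1, x2), y in data:
        err = y - predict(x1, x2)
        w[0] += 0.1 * err * x1
        w[1] += 0.1 * err * x2
        b += 0.1 * err

print([predict(x1, x2) for (x1, x2), _ in data])  # -> [0, 0, 0, 1]
```

Reading the final values of `w` and `b` tells you almost nothing about AND; the logic was poured in from the data, not written in the code.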
Whereas the training data for a neural net is extra information - but in utero, what is the training data that is not a predictable consequence of the genome? (ex utero there might be an argument, since humans not exposed to language don't develop it; although a group of isolated humans have developed language spontaneously - complex grammatical structures, the whole bit - which makes sense, given the variety of human languages. This supports the idea that language is genetic, or as Pinker provocatively describes it, the language instinct).
EDIT here's a thought experiment to illustrate why predictable interactions don't add information: taking the figure of six billion bits for the whole human genome, this means it can specify 2^6,000,000,000 different genomes (a lot). You can imagine changing one single bit, and all those complex interactions leading to a slightly different human phenome. Most of the possible phenomes wouldn't be a living human, or even anything recognizably human (or living). But the crucial point is that you simply can't specify any other phenomes (apart from those 2^6,000,000,000). You've changed all the bits - what else is there left to change within the genome?
For starters, the mother's chemistry which is a function of her DNA and environment. And the mother's physical environment, diet, health, etc. These are things that have no representation in the genome but can radically change brain structure in a developing mammal.
I'm not claiming the environment magnifies existing information, I'm claiming it's part of the total set and Kurzweil (and you) are vastly underestimating the amount of state that is associated with the exact details of a developing organism. This seems to be the thrust of Myers's point (at least in the beginning): you are simplifying and you are not allowed to do that.
Myers then follows that point by saying that even if we do manage to isolate all that information and understand it, we actually don't have certain critical problems like protein folding solved, or even reliably simulated yet.
Even if you hand-wave all this and assume it's possible, the notion that 10 years is the timeframe for this seems... excessively optimistic.
> vastly underestimating the amount of state that is associated with the exact details of a developing organism.
Sir, kindly indicate where Myers makes that point. I read him as going straight into protein folding, and the complex interactions required for the expression of the genome. (I believe you agree that state in the environment that is caused by the expression of the DNA is not information originating in the environment - ie that this is merely magnifying information, as you put it.)
While there is an incredible amount of state created, in the form of gradients and so on, this is directed by the genome...
Or maybe this is our basic disagreement: do you think that an image of a mandelbrot set creates information as it is generated (and that pi creates information as each digit is calculated), or do you think that the information is defined within the algorithm that calculates it? [there are other issues, but just taking this one alone]
So I'm not 100% sure what you mean; please clarify if I understand you correctly.
While there's information in the environment, there's not very much: consider the white and yolk of a chicken egg. Like letters etched in metal with acid, most of the information is in the placement of the letters; the exact nature of the chemical reaction contributes a very small amount of information. Can you indicate why you think there is a great deal of information originating in the environment?
Yes, the health of the mother can have an effect, but that's if she is unhealthy, and development does not proceed normally. Assuming she's healthy, the specific condition of the mother doesn't determine whether or not a human being is created. Are you suggesting otherwise?
> interactions between cell types and their environment (and themselves) is a giant information content multiplier
and we're actually in agreement; my first comment included:
> I'm not so sure about the simulation
> The only other source of information is the non-genomic environment - extra-nuclear DNA like mitochondria, and the womb (which is arguably already specified in the genome, unless mother nature has done a Ken Thompson http://cm.bell-labs.com/who/ken/trust.html at some point.)
Biology is full of examples of this process going awry and leaving our bodies with bizarre features. Dawkins's recurrent laryngeal nerve, which makes a crazy loop down into the mammalian chest cavity, is the classic example.
That just gives you a newborn baby's brain, which is a pretty poor standard for displaying "human" intelligence. If we didn't get any smarter than that, we'd be pretty dumb by animal standards. To get to "human" intelligence, you have to be able to simulate a rich environment for the brain to learn from. You also have to model the growth of the brain and its response to stimulus -- the physics, chemistry, and biochemistry of the brain. DNA doesn't have to do that, because it runs on a platform with that functionality built-in (i.e., the real world.)
Bear in mind we are talking about at least 20 years hence, in my mind.
Indeed; however, because of Kolmogorov complexity http://en.wikipedia.org/wiki/Kolmogorov_complexity one can argue both that the DNA alone is the resource, and, more plausibly, that the DNA plus all relevant inputs over a lifetime, up to the point where you take a measurement of the whole brain, together funnel into a resource from which you can describe a brain.
And this is only about Kolmogorov complexity, and under the assumption that the brain is a discrete system. I am not very well read in biology, but I believe there have been recent findings of organisms in which quantum effects play a biological role. Even setting aside our current lack of knowledge, we can still argue about whether the brain is a discrete system or not; proving it either way would be a major contribution to our understanding of it, I think.
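For what it's worth, Kolmogorov complexity itself is uncomputable, but compressed size gives a computable upper bound on it, which is the usual practical stand-in. A quick sketch (the strings here are arbitrary placeholders, not real genomic data):

```python
import os
import zlib

# Kolmogorov complexity is uncomputable, but compressed size is a
# computable upper bound on it (plus the fixed size of the decompressor).
structured = b"ATCG" * 25_000      # a highly regular 100 kB string
random_ish = os.urandom(100_000)   # 100 kB of incompressible noise

for name, data in [("structured", structured), ("random-ish", random_ish)]:
    compressed = len(zlib.compress(data, 9))
    print(f"{name}: {len(data):,} bytes -> {compressed:,} compressed")
```

The regular string collapses to a few hundred bytes; the noise doesn't compress at all. The compressed size of a genome is in the same spirit an upper bound on its "real" information content, not a measure of everything needed to express it.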
Some biologists understand this just fine, thank you very much.
In this situation, look at the genome as the instruction set for protein construction and folding, an ongoing research problem we have only just begun to investigate. The information contained in the genome is combinatorially descriptive, and therefore not as simple as it's made out to be, if you define information as the amount of "surprise" in the outcome.
Also, having a small set of simple rules is not, in general, enough to understand or reproduce a system.
You're conflating two very different concepts of complexity here.
One is the complexity of a static state of information, and the other is the complexity arising from dynamical systems.
As roadnottaken pointed out, fractals are perfect examples of systems described by very "simple" formulas that nonetheless contain infinite complexity. It could therefore be said that the simple equation of the Mandelbrot set represents infinite complexity.
However, if you take a particular iteration of the formula, then you can get a finite concept of its complexity, i.e. how many bits it takes to represent the image you're seeing.
So, to say that the "the brain cannot be more complex than the data that specifies it" is true in one sense, but completely useless in another.
To put this into terms of the Mandelbrot set, you can gather up all the bits that represent some particular iteration of it, but that doesn't tell you anything about how it works, or even how to generate the next frame. You need the equation for that.
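To make that concrete: the generating rule really is tiny, while any one rendered frame is not. A rough sketch (character "pixels" so no plotting library is needed; the resolution and iteration cap are arbitrary choices):

```python
import zlib

# The generating rule: z -> z^2 + c. A few dozen bytes of "program".
def mandelbrot(c, max_iter=50):
    z = 0j
    for i in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return i              # escaped: c is outside the set
    return max_iter               # assumed inside the set

# One "frame" of the set, rendered as an 80x40 character image.
image = "\n".join(
    "".join(
        "#" if mandelbrot(complex(-2 + 3 * x / 80, -1.2 + 2.4 * y / 40)) == 50
        else " "
        for x in range(80)
    )
    for y in range(40)
)

# The rule above is ~200 bytes of source; this one coarse frame is
# already bigger, and finer renderings grow without bound while the
# rule stays fixed.
print("frame size:", len(image), "bytes")
print("compressed:", len(zlib.compress(image.encode())), "bytes")
```

Knowing every bit of `image` tells you nothing about `mandelbrot`; knowing `mandelbrot` gives you every frame at every resolution. That asymmetry is the whole point of the analogy.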
That's just one place where Ray fails. The second is the lingering question of whether computers in their current form are even capable of "simulating" a brain. It's still an open question what role the non-determinism of quantum mechanics plays in the brain and in the interactions of various chemicals. It was recently shown that DNA relies on QM entanglement to "hold it together". If it turns out that non-determinism and QM effects play a crucial role in biology (which they almost certainly do), then the very rigid, deterministic system that is the CPU may simply be incapable of simulating a human brain.
Continuing from your extension of the fractal analogy, the former is more like an "iteration of the Mandelbrot" while the latter is "how to generate the next frame".
We do not need to know how our human-precursor brains worked, nor do we need to know what our human-successor brains will be like to successfully simulate current-human-brain intelligence.
It seems plausible to me that we will be able to understand how to simulate the functions of the brain without necessarily simulating the physical universe and its remarkable evolutionary unfolding--which seems to be the ultimate level of complexity and one that I agree is far beyond us.
Looks like he understands the brain just fine, to me...
(For physicists the equivalent torture is a movie, which goes by the name of "What the Bleep...", that came out a few years ago. OMFG if you want to drive me into a towering rage just show me ten minutes of that film. It's like watching someone make spitballs out of the manuscript of the Eroica symphony.)
The trailer is Poe's Law in action: http://www.youtube.com/watch?v=m7dhztBnpxg
I guess you don't want to watch that again? ;-)
Now, I need to go calm myself by fixing some bugs before I start to throw things. ;)
(NB I remember reading some Erich von Däniken books when I was 9 or 10 and getting awfully excited - I was quite upset when I found that people could just make stuff up and present it as science).
(Cue a chorus line of Doctor Who cosplayers.)
So, in theory, I could have been okay with What the Bleep. In practice, however, it is just horribly grating -- way more grating than any SF, even the dumbest SF.
The Nova debunking of Chariots of the Gods was epic, and an important lesson to me as a high school student not to take scientific-sounding arguments at face value. Also, an important lesson in not underestimating human intelligence and creativity. Most of the debunking was simply figuring out how ancient peoples did things we now think are impossible without modern technology.
One of my personal heroes is this guy:
That Randall Munroe is also a personal hero probably goes without saying at this point.
Even if you could reduce the brain to some sort of bytecode, an interpreter is still necessary to run that bytecode. For instance, a Python program might be a few bytes, but the interpreter is still a few megabytes, and both are necessary to run the program. Who knows how large a brain-bytecode interpreter would be, but probably very large.
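That ratio is easy to check on any machine. A quick sketch (the exact numbers depend on your installation, and this counts only the binary itself, ignoring the standard library and shared libraries, which make the gap even bigger):

```python
import os
import sys

# A trivial "program": a handful of bytes of source code.
program = b"print('hello')\n"

# The interpreter needed to run it: just the Python binary itself,
# ignoring the standard library and shared libraries it also pulls in.
interpreter_bytes = os.path.getsize(sys.executable)

print(f"program:     {len(program)} bytes")
print(f"interpreter: {interpreter_bytes:,} bytes (at minimum)")
```

The program is meaningless without the interpreter, just as (the argument goes) the genome is meaningless without the cellular machinery that executes it.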
There are two problems with this argument:
1. For the interpreter to substantially reduce the size of the DNA needed to encode a program to build an intelligent system, that interpreter needs to be optimized to reduce the complexity of such programs. However, as far as we can tell, ribosomes and protein folding work exactly the same way in nearly all living cells, from snottites to kelp. It isn't plausible to suggest that each Salmonella cell contains hundreds of megabytes of information that's optimized for producing intelligent systems. Even if Salmonella contains hundreds of megabytes of information, it would amount to a proof of creationism if that information were optimized to simplify the expression of brain designs. So the size of the interpreter is irrelevant.
2. DNA is not just interpreted; it's compiled into cells. The DNA of a cell contains a complete program for making every peptide that is necessary to the cell from individual amino acids, and those peptides together construct all the other chemicals from a small number of simple molecules found in the environment. So the source code for the "interpreter", down to the hardware level, is actually already present in the DNA.
If one knows Python and is given a program in APL, one has likely hit an insurmountable barrier. Without docs that describe the language, one can try to infer it by running experiments on variations of the stored program. But one needs access to the processor in order to run those different experiments and get different results, to be able to understand how the programming language works.
We don't have the CPU in a form that we can experiment with ("brains in a vat"). We have a 50MB string in APL*2, a mostly unknown language for a mostly unknown processor.
The other part is that this is not a program but a meta-program -- meaning there are multiple levels of indirection. The DNA does not directly specify the brain, but instead specifies rules for components that would eventually arrive at an assembly (guided by a rich context of voluminous other inputs over an extended period of time) that constitutes a brain.
I haven't read Kurzweil's specific claims, but I'd guess he's claiming we can simulate brain function by 2020. We can already simulate simple organisms and neural networks. You can already accomplish quite a bit of complex emergent behavior with those.
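As a sketch of what "simulating simple neural networks" means in practice, here's a minimal leaky integrate-and-fire neuron, the usual building block of large-scale brain simulations. All the constants are illustrative placeholders, not biological measurements:

```python
# A leaky integrate-and-fire neuron: the workhorse abstraction in
# large-scale brain simulations. All constants here are illustrative.
def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0):
    """Return spike times (ms) for a sequence of input-current samples."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Membrane potential leaks toward rest and integrates the input.
        v += dt * ((v_rest - v) + i_in) / tau
        if v >= v_thresh:          # threshold crossed: emit a spike
            spikes.append(step * dt)
            v = v_reset            # and reset the membrane
    return spikes

# 200 ms of constant drive produces regular spiking; no drive, no spikes.
spikes = simulate_lif([20.0] * 200)
print(f"{len(spikes)} spikes, first few at {spikes[:3]} ms")
```

Of course, this is exactly the kind of trade-off mentioned above: the model reproduces spiking behavior while discarding nearly all of the underlying chemistry.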
Simulating the entire brain would require an enormous amount of processing (though surely feasible some day, if not 10 years, how about 20), but most likely we'd make some trade-offs and sacrifices and still get a close approximation (like we do with virtually every simulation).
Of course simulating a brain and simulating a human are not the same thing. You can't really avoid having to simulate the entire body and its interactions with the environment.
And of course we wouldn't suddenly "understand" the brain just by simulating it. It would still be the same complex system, it's just we'd be able to inspect it more closely. Such a simulation would at least help us understand a good bit more about the brain's role in our cognition and actions.
And yeah, who knows, maybe in 20 years we could have some kick-ass AI in counterstrike.
Also, I wouldn't bet against him building an AI that for all intents and purposes appears to be conscious. Full-human simulation is another thing, but with that we are all out of our depth, and it's pretty likely that a conscious computer would be quite capable of understanding it better than us.
That's not much of an argument. He plays on the standard things that all religions play on: the promise of some form of immortality, which instinctively plays on the fear of death; claims that the near future will be radically different than the present; etc. His extrapolations of exponential growth are not entirely unfounded but he applies the same logic to everything that suits his fancy without taking into account that science and technology have made large but halting steps forward throughout history, and the fundamentals of scientific insight and discovery haven't changed. He talks and sounds like an evangelical to me. No doubt he is a brilliant scientist but that doesn't warrant the cult of personality that has grown up around him, which I perceive as a dangerous thing contrary to the aims of science.
Kurzweil has several functioning products that would have fallen into the mess you dismiss as nonsense two decades ago. Is he wrong on a lot of things? Yes. But so is any other engineer looking for things that haven't been done before. Even if Kurzweil has a spell where he does nothing of any merit for a decade (which has yet to happen) I'll still confidently say you're foolish to rule him out as a crank. He's earned his right to dream aloud.
So if you were to simulate the brain, modeling DNA would be a tiny part of it. Encoding the deepest levels of physics (yes, quantum effects play many roles in DNA) and then having enough computational power to model the interactions of those particles in real time is a really big deal.
Given how much computation it takes to "decompress" the information about what a protein is built of into the 3D layout of that protein (vide folding@home), the statement that you could "decode" half of the genome into a working simulated system is bold, to say the least.
The trouble with simulating a human 1:1 is not how complex the human itself is, but how complex, bizarre, and computationally powerful the physical hardware is on which the program "be human" runs.
For another take, http://www.dwheeler.com/essays/linux-kernel-cost.html puts the Linux kernel source at 4 million lines. Can you compile a 25MB kernel? Compressed? (How much of it is 'introns', anyway? This might not be the best example.)
(Not going to touch the argument about applicability. It does seem to me that whenever Kurzweil glosses over details, the closer look always appears less utopian. There are other writers about these ideas who I can read without having to check every assertion, like Anders Sandberg.)
Firstly where Myers is wrong: the human brain ultimately comes forth from a bunch of information roughly equal to 1 million lines of code. If you could reproduce those 1 million lines and set them loose, allowing them to construct a human being (nothing else: what else could they construct?) and letting that human being live in our world, you would have 'created' intelligence. It's as simple as that and attacking that abstraction is completely the wrong approach to pointing out the problem with Kurzweil's argument.
So, then the two points where Kurzweil is wrong:
1) it's not just any million lines of code. It has to be exactly the million lines of code in our genome, give or take some bits. Considering the enormously complex interactions between these bits of code, this is worse than reverse engineering the largest spaghetti codebase you could possibly imagine. The simple example that Myers gives is enough to show this.
2) The million lines of code cannot just be executed anywhere. It encodes for the construction of a human from raw material and the subsequent operation of that human. Give it different materials or a different living environment and something entirely different, in most variations nothing remotely capable of 'life', appears. And even if you could make it build something from electronic components: slight differences in the perceptive systems can create huge differences in the brain and the concepts in the brain. A machine with a finite pixelized array of visual light receptors would build a completely different conceptual model of the world. Reverse engineering the genome is not only extremely hard: it is very unlikely to produce the result you want.
Imagine going back to ancient Roman/Greek/Etruscan/&c times and handing them a ream of paper filled with the hexadecimal representation of an x86 application compiled for a Windows environment, and then showed them what it looked like when running. "Hey, look, now you can play videos and music!" Now imagine it was several orders of magnitude more difficult than that, and you're beginning to get the idea.
"If you could reproduce those 1 million lines and set them loose, allowing them to construct a human being..." The exact point he was making was that there's a lot of handwaved complexity in this statement, and that the abstraction "understand human being program, run code" is abstract to the point where it no longer accurately reflects the reality of the situation.
He's trying to argue that those million lines of code have to run on hardware which is poorly understood at best:
> [The brain's] design is not encoded in the genome
If you're taking the (pretty weak) 'source code' analogy further, the compiler is... A host human embryonic cell, running on a womb in a mother.
So thinking about the problem that way is a non-starter for obvious chicken/egg reasons.
He is excellent in marketing, in sales, in networking & as an overall promoter -- all qualities we hackers need to cultivate, as long as we control ourselves & avoid pitching vaporware.
Some examples of him promoting his product or himself.
1. At 17, appeared on the CBS television program I've Got a Secret - showed off software that composed piano music
2. International Science Fair, first prize for the same.
3. Recognized by the Westinghouse Talent Search
4. Personally congratulated by President Lyndon B. Johnson during a White House ceremony.
Incredibly, it goes on and on. Very fascinating, if read with the heart of an entrepreneur. His whole life seems to be a chain of fantastic promotions.
There are 50 to 100 billion neurons in the human brain, and the power of the brain comes from the fact that you can create many orders of magnitude more neural circuit combinations with those neurons. Each cell may be part of many circuits, and learning involves the forming of these circuits. Now, let's compare that phenomenal power with the power of the computer. It becomes especially laughable when you say it's 50MB worth of information.
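To put toy numbers on the combinatorial point: even with a million neurons (tens of thousands of times fewer than the brain has), and treating a "circuit" crudely as just a subset of neurons, the number of possible circuits is astronomical. A sketch, with both numbers chosen purely for illustration:

```python
import math

# Toy numbers: the brain has ~5e10-1e11 neurons; a million keeps the
# arithmetic quick. A "circuit" is modeled crudely as a subset of neurons.
neurons = 10**6
circuit_size = 1000

# log10 of C(neurons, circuit_size), via log-gamma so we never build
# the (astronomically large) integer itself.
log10_circuits = (math.lgamma(neurons + 1)
                  - math.lgamma(circuit_size + 1)
                  - math.lgamma(neurons - circuit_size + 1)) / math.log(10)

print(f"{neurons:,} neurons, circuits of {circuit_size} neurons each:")
print(f"~10^{log10_circuits:.0f} distinct possible circuits")
```

The count comes out to thousands of digits. Whether that combinatorial space is best compared against 50MB of genome is exactly what the thread is arguing about, but the space itself is genuinely enormous.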
My theory is that, if you look in your own programming, your DNA, it’s about 600 Megabytes compressed… so it’s smaller than any modern operating system. Smaller than Linux, or Windows, or anything like that, your whole operating system. That includes booting up your brain, right, by definition. And so, your program algorithms probably aren’t that complicated, it’s probably more about the overall computation. That’s my guess.
It's (to some extent) true, and potentially interesting philosophically -- but completely meaningless from an engineering perspective.
I do agree that there's no freakin' way this will be done in ten years, or in the 62-year-old Ray Kurzweil's lifetime, or mine.
No, a computer simulation can never predict exactly what a physical system will do, even in the case of a single particle, due to quantum uncertainty. But so what? If I took out one of your neurons and replaced it with an identical neuron, that new neuron wouldn't do exactly the same thing, again due to quantum uncertainty; nonetheless, its long term behavior would be essentially identical and you, as a person, would be no different.
That is, neurons and brains are classical objects, essentially immune to the underlying uncertainty they're built on.
Absolutely not. Prigogine's work demonstrates that systems far from thermodynamic equilibrium (of which all living systems are an example) are intractably non-deterministic. The issue isn't the underlying quantum uncertainties, it's the macro-uncertainties of the higher-level system.
In other words, you won't predict the behavior of a neuron by modeling the underlying physics. You have to learn to model the macro behavior in a statistical way.
If I've got a cubic meter of pure water, I can slosh it around and observe all sorts of interesting effects. I can then model that cubic meter of water with another cubic meter. That second cube won't behave identically. A cubic meter of water has a very high Reynolds number and can have considerable chaotic turbulence (chaotic in the classical sense, not quantum). The exact motion of the water simply won't be the same, no matter how precisely you mimic the 'input' into the system (forced motion of the cube, for instance).
Nonetheless, the second cube is a fantastic way to understand the first cube, and in some way is qualitatively identical, even when the specific motions aren't replicated exactly. This is exactly the same for computational simulations of the fluid. Of course they can't predict chaotic behavior, but for all intents and purposes they can be just as useful as having that second cube of water.
Likewise, a computer simulation of a neuron may never exactly predict what a real neuron will do. Just like one neuron can never exactly predict what another neuron will do. Just like one bucket of water can never exactly mimic another. But who cares?
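The water-bucket point doesn't need a fluid solver to illustrate. The logistic map is the standard toy stand-in for a chaotic system: two trajectories from nearly identical starting points diverge completely, yet their long-run statistics agree, which is exactly what makes the "second bucket" a useful model of the first:

```python
# The logistic map at r=4 is fully chaotic: trajectories from nearly
# identical starting points diverge, but their long-run statistics match.
def trajectory(x0, steps=100_000):
    xs = []
    x = x0
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
        xs.append(x)
    return xs

def histogram(xs, bins=10):
    """Fraction of time spent in each tenth of [0, 1]."""
    counts = [0] * bins
    for x in xs:
        counts[min(int(x * bins), bins - 1)] += 1
    return [c / len(xs) for c in counts]

a = trajectory(0.2)
b = trajectory(0.2000001)   # the "second bucket": a tiny perturbation

# Pointwise, the two runs disagree after a short time...
print("step 100:", a[100], "vs", b[100])

# ...but the time spent in each region of [0, 1] nearly matches.
ha, hb = histogram(a), histogram(b)
print("max histogram difference:", max(abs(p - q) for p, q in zip(ha, hb)))
```

No amount of precision in the starting point buys you exact prediction, but the statistical portrait is robust -- the same sense in which a simulated neuron can be "just as useful" without ever matching a real one spike for spike.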
That no two non-equilibrium systems are exactly alike seems to me a different question.
Many still believe in the reductionist idea that a perfect understanding of physics would lead to a perfect understanding of chemistry and then biology. This is not the case.
Define "emergent". I can believe that a protein's behavior is extremely sensitive to the initial configuration of its atoms and that as a practical matter we can't (currently?) get detailed enough measurements to predict exactly what's going to happen. But without exceptionally compelling evidence I'm not going to believe that there are different physical laws for proteins than for their atoms.
The math behind is pretty gnarly but if you want to understand it I recommend his book:
A CA is not a good model for this.
I'm a big fan of Scott Aaronson, too, btw :-) Here's a pic of him demoing the soap bubbles experiment he refers to in that paper you linked to <http://www.scottaaronson.com/soapbubble.jpg >.
Still, as discussed in another comment I made, I seriously, seriously doubt that we will ever simulate any sort of intelligence by raw physical simulation. It just isn't feasible with any realistic computational technique.
I recall reading an interview with the founder of a company that builds computers for simulating biological systems in silico, who thought there were much better algorithms waiting to be discovered, because nature does it quickly. I can't find it now.
It's not a trivial problem. There are lots of bright minds working on this problem. If you understand the difficulty behind it, you wouldn't make such ignorant statements.
You sir, are a genius. To make up for my previous lack of initiative, I will do so immediately. Please arrange for the world to be ready for my announcement of the solution at noon tomorrow.
As far as I know there is no fundamental reason to assume that this problem won't be solved. It's not the halting problem.
More generally, it's fun to imagine things like this, but it's also good to recognize real-world limits: we live in a finite world, and living 700 years is not at all likely (or, IMHO, desirable).
But it's grating to see that he's getting so much media attention for such blatant disregard of the human condition. No consideration what a post-sentient-AI world would accomplish. No regard for what happens when people live to be 700 years old and population growth doesn't slow. His ideas are like genetically engineering society with no regard to the collateral damage it'd cause to the societal environment. I don't dislike Kurzweil for being an optimist, I dislike him for being arrogant.
Population growth is slowing. Dramatically. There's tons of evidence that as women are educated, they will have fewer children. And women are starting to get better educations all around the world. There's absolutely no doubt that population growth will slow (slash, is already slowing down) a bunch in the coming decades.
Second, I'm confused about the 700 number. Where does that come from? Does he think that in the next 30 years, we will be able to extend life so much that he can keep on living? Why does that stop at 700?! Doesn't he think that some time in the next 700 years we'd be able to find a way to live longer than 700?
Frankly, I doubt there's much difference between finding a way to live to 700 and finding a way to live until the end of time.