These are massive efforts, and they involve horrendous heaps of diligent busywork. This makes me the boring naysayer, but please don't be distracted by the startup-like appearance and the peculiar financing situation. It's possible, but unlikely, that the major obstacle here is simply combining the available techniques!
Honestly, what I'm most curious about are his thoughts on model-driven interrogation of an in vivo system -- biologists, and even computational neuroscientists, are a bit too hesitant when it comes to letting computers find and test hypotheses. In the age of highly advanced genetic techniques (e.g., binary expression systems in Drosophila or zebrafish) and 2-photon imaging, the process of actually evaluating hypotheses has become a bit old-fashioned...
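For what it's worth, here's a toy sketch of what "letting the computer find and test hypotheses" might look like as a closed loop. Everything in it (the two-parameter dose-response models, the Gaussian observation noise, the disagreement-based stimulus selection) is my own illustrative assumption, not anything from the project:

    # Toy closed-loop experimentation: keep a set of candidate models,
    # repeatedly pick the stimulus they disagree on most, query the
    # (here simulated) animal, and reweight the models Bayesian-style.
    import numpy as np

    rng = np.random.default_rng(1)

    def response(theta, stimulus):
        """Hypothetical saturating dose-response curve."""
        gain, threshold = theta
        return gain * np.tanh(stimulus - threshold)

    # Candidate hypotheses: a grid of (gain, threshold) pairs, uniform prior.
    thetas = [(g, t) for g in np.linspace(0.5, 2.0, 8)
                     for t in np.linspace(-1.0, 1.0, 8)]
    log_w = np.zeros(len(thetas))

    true_theta = (1.3, 0.2)   # stands in for the real animal
    noise_sd = 0.05
    grid = np.linspace(-2.0, 2.0, 101)   # stimuli we could try

    for _ in range(20):
        # Choose the stimulus where currently plausible models disagree most.
        w = np.exp(log_w - log_w.max()); w /= w.sum()
        preds = np.array([[response(th, s) for s in grid] for th in thetas])
        mean = np.average(preds, axis=0, weights=w)
        disagreement = np.average((preds - mean) ** 2, axis=0, weights=w)
        s = grid[np.argmax(disagreement)]

        # "Run the experiment" and update each hypothesis's weight.
        y = response(true_theta, s) + rng.normal(0.0, noise_sd)
        log_w += np.array([-0.5 * ((y - response(th, s)) / noise_sd) ** 2
                           for th in thetas])

    w = np.exp(log_w - log_w.max()); w /= w.sum()
    print("most plausible hypothesis:", thetas[int(np.argmax(w))])

The point is just the loop structure: the machine, not the human, decides which experiment would be most informative to run next.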
Unfortunately, most academic scientists are not in a position to risk their careers on such a bold and encompassing proposal. However, I benefit from this in some ways, because academics who are interested in this problem often work on a small piece of it instead of going for the whole thing, and are then incentivized to share that piece with me to integrate with all the other pieces, so they can see their contribution realize its full potential.
I wouldn't underestimate the technical demands of the project - it certainly would have been unthinkable 10 years ago. Sydney Brenner once famously wrote: "Progress in science depends on new techniques, new discoveries, and new ideas, probably in that order." However, you're right to point out that it's really the abstract methodology of delegating experimentation to a machine which truly distinguishes my technical proposal from related work. I have to admit that I haven't worked out in detail what the math will look like, though <http://arxiv.org/abs/1103.5708> is a pretty good start. The tricky bit is defining the probability space (that is, the family of models under consideration). As a probability space, it must have a measurable structure. But for efficient and effective inference, it should also have additional structure, like a vector space. Yet any particular choice of vector space representation will trade off dimensionality against non-convexity, so it's desirable to have multiple representations of the same space. I've been taking a "cross that bridge when I come to it" approach, as I'm occasionally reminded by academics that if all I manage to do is collect a bunch of time series data about hundreds of neurons simultaneously, that would still probably be scientifically interesting and novel, even with ordinary, human-driven analyses.
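To make the abstract part slightly more concrete, here's a minimal sketch of a probability space over a family of models with a vector-space representation: each model is a point theta in R^3, the prior is a measure on that space, and inference scores candidates against recorded time series. The leaky-integrator dynamics, Gaussian noise, and random-search "inference" are all placeholder assumptions of mine, not the project's actual math:

    # Minimal sketch: a family of toy neuron models parameterized by
    # theta = (leak, gain, bias), a prior over that space, and crude
    # inference against observed time-series data.
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate(theta, stimulus):
        """Toy neuron model: leaky integrator with gain and bias."""
        leak, gain, bias = theta
        v = 0.0
        trace = []
        for s in stimulus:
            v = (1.0 - leak) * v + gain * s + bias
            trace.append(v)
        return np.array(trace)

    def log_likelihood(theta, stimulus, observed, noise_sd=0.1):
        """Gaussian observation noise around the simulated trace."""
        predicted = simulate(theta, stimulus)
        return -0.5 * np.sum(((observed - predicted) / noise_sd) ** 2)

    # "Data" from a hidden ground-truth model, standing in for recordings.
    stimulus = rng.uniform(0.0, 1.0, size=200)
    observed = simulate((0.3, 1.2, -0.05), stimulus) + rng.normal(0.0, 0.1, size=200)

    # Crude inference: sample candidates from a uniform prior over R^3
    # and keep the best-scoring one.
    candidates = rng.uniform(low=[0.0, 0.0, -1.0], high=[1.0, 3.0, 1.0],
                             size=(5000, 3))
    best = max(candidates, key=lambda t: log_likelihood(t, stimulus, observed))
    print("recovered parameters:", best)

A real system would exploit the vector-space structure much harder (gradients, reparameterizations), which is exactly where the dimensionality vs. non-convexity trade-off bites.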
Additionally, this pushes technology. The hope is to push the technology far enough that not only do the components work, but you can build actually useful engineering systems with them.
The Big and Little Oh of this story are both extreme events.
But yes, being an entrepreneur is an added layer, as you have to learn to organize people to attain a larger goal than just research.
And he shows signs of that even on the jobs page: by avoiding the broken method of interviewing that is so often practiced, he's almost defining a boundary for relatively high signal from the applicants.
His fluid approach is reminiscent of the caper that the Google guys pulled at Stanford... and as it turns out, I think he also got funded by Larry.
A potential intern would probably gain a good understanding of the upcoming internship just by researching any of these topics.
Sometimes I wish job interviews were laid out like this.
It makes you wonder: with so many people independently collecting observed data and generating simulated data, are there centralized places to aggregate and share it all?
I would love to see how this project progresses. It has fascinating implications for artificial intelligence and the singularity.
Isn't that awesome? But ... "the philosophical assumptions fail, and human immortality through uploading is fundamentally impossible".
Could anybody explain this a little bit?
Maybe try a lizard in between them.
Personally, though, I'm limiting my scope to the worm (at least for now!). Which particular path is best to take beyond that is much less clear.
Doing a nematode-version Turing test is one thing. You could even do it as a nematode in a Chinese room. But isn't there a flaw in comparing how a real nematode's brain reacts to light vs. how a NemaLoad model responds to a specific computed wavelength of light energy? I think I'm not understanding something.
To boil it down: They assume roughly that simulating the state and functionality of the neurons is sufficient to reproduce consciousness.
If there's something more to consciousness - say for example that consciousness requires the specific organisation of matter of the human brain - then uploading, at least into software, will fail.
It's a few thousand hours of video of the worms under different stimulation conditions, for observing and identifying common behaviors.
We miss you in IRC.
(This project is going to collect a totally new dataset.)