
The same can be said about level 5 autonomy!



To me, the fatal flaw of the HBP (or the USA's competing HBI) is that they are akin to "cargo cult science": the idea that if we replicate the superficial structures faithfully enough, the system they embody will suddenly, somehow, become activated.

But just like the Melanesians with their coconut-shell headsets, there won't be anyone listening on the other end...


If we replicate a car with sufficient accuracy, it will start. I don’t see why this wouldn’t apply to any other piece of machinery, including a brain.

HBP should have tried a simpler task first, e.g. replicate a fruit fly’s brain.


Having replicated the fruit fly's brain, what are you going to do?

Replicate the rest of the fruit fly so that you can install the brain in it? And then replicate the world so that your software-simulated fruit fly has a natural context in which to operate?

Or just stimulate the brain with random inputs not associated with any real-world stimulus?


I’m pretty sure simulating a fly’s neural inputs would be a far easier task than simulating its brain. After verifying that it works correctly, we would proceed to simulating a more complex brain, say a frog’s. And so on.


To simulate the fly's neural input, you need to simulate the fly's entire environment, including the fluid dynamics of the air around the fly, the physics of every object the fly interacts with, and the hormonal responses to changes in the fly's blood concentration of various substances (O2, glucose, ...).

This already strains our technical capabilities (at least for the amount of money we are willing to spend on it).

For any animal whose behavior is the product of a lot of learning, and who deals with other such animals in daily life, you have to solve all the problems you had to solve for the fly, deal with a much larger connectome, and also face the fact that, until you can passably simulate a mouse, you will never be able to simulate the way a mouse learns to behave in the presence of another mouse.


No need to simulate any of the environment. You only need to record the neural inputs to a fly’s brain. Sure, that’s also challenging, but nowhere near as challenging as simulating the fly’s entire brain. My point is that if you manage to accomplish the latter, the former would be a breeze.


> You only need to record the neural inputs to a fly’s brain.

I don't think that's true. As soon as your simulated fly's behavior diverges from the actual fly's, all of the recorded input after that point is invalid/useless because it will not match the simulated fly's position/orientation/whatever.
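
A toy sketch of the problem (Python; the brain model, environment, and recording are all made-up stand-ins, not anything we actually have): once the simulated fly's trajectory drifts from the real one, the replayed recording describes sensory input from a place the simulated fly no longer is.

    import numpy as np

    def step_brain(state, sensory_input):
        # Toy brain model: returns (new internal state, motor output).
        new_state = 0.9 * state + 0.1 * sensory_input
        return new_state, np.tanh(new_state)

    def sense(position):
        # What a fly at this position would actually sense (toy environment).
        return np.sin(position)

    # Inputs recorded from the real fly along its real trajectory.
    real_trajectory = np.linspace(0.0, 10.0, 100)
    recorded_inputs = [sense(p) for p in real_trajectory]

    # Open-loop replay: feed the recording regardless of where the simulated
    # fly actually is. It stays valid only while the simulated trajectory
    # matches the real one.
    state, position = 0.0, 0.0
    for t, recorded in enumerate(recorded_inputs):
        state, motor = step_brain(state, recorded)
        position += motor  # the simulated fly moves on its own...
        if abs(sense(position) - recorded) > 0.1:
            print(f"diverged at step {t}: the recording no longer matches "
                  f"what the simulated fly would sense from where it is")
            break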

Also, how many dollars of investment and how many years into the future do you think we are from being able to record all of a fly's neural input while it is moving freely?


To start with, you only need to study input-response pairs (sensory input causing motor commands). You feed the neural inputs recorded from a real fly into the simulation and compare the responses of the real fly and the simulated one. Once you understand what's going on, proceed to sequences of inputs and compare the sequences of responses. The goal is not to produce an identical sequence of actions by tuning the simulation; it's to understand how the actions are computed from the inputs.
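
Roughly, I mean something like this (a Python sketch; simulate_brain and the recorded data are placeholders for things we don't have yet):

    import numpy as np

    def simulate_brain(sensory_input):
        # Stand-in for the full fly-brain simulation: maps sensory input
        # vectors to motor-command vectors. The weights here are arbitrary.
        weights = np.random.default_rng(0).normal(size=(64, 8))
        return np.tanh(sensory_input @ weights)

    # Hypothetical recorded (input, response) pairs from a real fly.
    recorded_inputs = np.random.default_rng(1).normal(size=(1000, 64))
    recorded_responses = np.random.default_rng(2).normal(size=(1000, 8))

    # Feed the real fly's inputs into the simulation and compare motor outputs.
    simulated_responses = simulate_brain(recorded_inputs)
    error = np.mean((simulated_responses - recorded_responses) ** 2)
    print(f"mean squared difference, real vs. simulated responses: {error:.3f}")

The point of the comparison isn't to drive that error to zero by curve-fitting; it's to have a controlled setting in which you can probe why the two responses differ.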

Having a detailed simulation like this would greatly accelerate Numenta-style research: instead of piecing together information from published papers, you would get it straight from experiments you control.



