Uh, you can begin to describe the amount of information. If someone grew up with sensory input limited to HD video, meaning something like 6-10 GB/hour, they would be handicapped versus other humans, but not drastically so. Six years is 52,560 hours. A large library, sure, but we already have all of this digitized anyway.
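
For concreteness, a quick back-of-envelope (the 6-10 GB/hour HD-video rate is just my rough assumption above, nothing more precise):

    # Back-of-envelope: hours of waking-life-equivalent video in 6 years,
    # and the data volume at an assumed 6-10 GB/hour.
    hours_per_year = 24 * 365            # 8,760
    hours_6_years = 6 * hours_per_year   # 52,560
    low_tb  = 6 * hours_6_years / 1000   # ~315 TB at 6 GB/hour
    high_tb = 10 * hours_6_years / 1000  # ~526 TB at 10 GB/hour
    print(f"{hours_6_years} hours, roughly {low_tb:.0f}-{high_tb:.0f} TB")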

I'm cutting a lot of corners, and the human substrate in which our brains are embedded is finicky - you can't just leave a toddler in a roomful of DVDs with food and get a fully functioning adult after 48 months of unsupervised training.

But it's not "can't even begin to describe the amount of information encoded there" either. 50,000 Blu-ray discs' worth oughta do it.
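
Quick sanity check on that disc count, assuming the usual 25 GB single-layer / 50 GB dual-layer Blu-ray capacities (my assumption, not anything precise):

    # 50,000 discs comfortably covers the ~0.3-0.5 PB estimated above.
    discs = 50_000
    single_layer_pb = discs * 25 / 1_000_000  # 1.25 PB at 25 GB/disc
    dual_layer_pb   = discs * 50 / 1_000_000  # 2.5 PB at 50 GB/disc
    print(f"{single_layer_pb}-{dual_layer_pb} PB of raw capacity")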

I'm not saying we've figured out any of the other stuff - it's just that computationally (horsepower) we're there; the training set for unsupervised learning is also there; and so on.

The missing parts might well seem insurmountable - but every result that shows AI performing at the level of a 2- or 3-year-old is wonderful. This is it. This is the beginning of the turning point. It could happen at any moment.

Someone at this very moment could be setting up a neural net that, after 48 months of training, can deduce its own status in the world, make novel and correct sentences, maintain a coherent world-view, and be trained on the entirety of the Internet at 10x the speed of an adult brain. (The limit is more like 1,000,000x the speed of an adult brain, because the silicon substrate we're using today propagates signals literally a million times faster - today.)
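
That "million times" is just a propagation-speed ratio; here's a rough sketch with ballpark numbers (a fast myelinated axon at ~100 m/s versus an on-chip electrical signal at a large fraction of c - my ballpark figures, not exact measurements):

    # Crude illustration of the speed gap between axons and silicon.
    axon_speed    = 100    # m/s, fast myelinated axon (ballpark)
    silicon_speed = 2e8    # m/s, on-chip electrical signal, ~2/3 c (ballpark)
    print(f"speed ratio: ~{silicon_speed / axon_speed:,.0f}x")  # ~2,000,000x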

We're there. It's all there. It's "just" 86,000,000,000 neurons (with 7k connections each) and 12 years of supervised training to get to the level of a 12-year-old. 3 pounds. 20 watts. This is happening. Amazon's and Google's server farms already blow a brain out of the water computationally, today. We might not come up with the same architecture, but what we are coming up with is making breathtaking progress.
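
Rough scale, taking the neuron and synapse counts above at face value (the ~1 Hz average firing rate is my own ballpark, and a synaptic event is of course not a FLOP):

    # Crude scale comparison: total synapses and synaptic events per second.
    neurons  = 86e9
    synapses = neurons * 7_000        # ~6.0e14 synapses
    events_per_second = synapses * 1  # ~6.0e14 synaptic events/s at ~1 Hz (assumed)
    print(f"{synapses:.1e} synapses, ~{events_per_second:.1e} events/s")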

------------

EDIT: I've been submitting too much, but I agree with JonnieCache's thoughts below. However, as a thought exercise, there is no reason we couldn't train an AI interactively, putting it in a VR room and literally talking to it and correcting it, etc., like a pet. Granted, this is not a normal approach to take, but since we're discussing theoretical limitations you can certainly envision it. Obviously nobody is trying to do that; we're not trying to come up with sentience using an approach like this, and we have no idea what steps humans go through exactly to get there. But it's not computational power that keeps us from getting there - and we could be surprised at any time.




>50,000 Blu-ray discs' worth oughta do it.

By this logic, the handful of megabytes of Unicode making up War and Peace in the original Russian should be enough for a non-Russian speaker to fully grasp it and all its meaning and implications. It isn't even enough for a native Russian speaker to do so.

Humans aren't raised by simply looking at their surroundings; they're raised by interacting with people, who were in turn raised by interacting with people, going back through the whole history of humanity, or arguably of mammals. That information isn't all in the genome, although natural selection has put some of it in there. The bit that we don't know how to describe information-theoretically is the bit that isn't in the genome, because we don't know how it's encoded.
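
The genome side is at least easy to bound (my own back-of-envelope: roughly 3.1 billion base pairs at 2 bits per base, ignoring compressibility); it's the culturally transmitted part that resists this kind of accounting:

    # Rough upper bound on the genome's raw information capacity.
    base_pairs = 3.1e9           # approximate size of the human genome
    bits       = base_pairs * 2  # 2 bits per base, uncompressed
    megabytes  = bits / 8 / 1e6
    print(f"~{megabytes:.0f} MB of raw capacity")  # ~775 MB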

I don't see how you'd even try to put bounds on it: this is essentially the problem posed by post-modernism/post-structuralism/literary theory once you strip away the Marxism, and science's response has understandably been to reject it, but it can't do so forever if it wants to create AGI.

Or maybe I've misunderstood your point.

I might be persuaded that a 2-year-old could come sooner than we think, via brute computational force as you describe, but I'd argue that a 2-year-old with the capacity to become anything more than a 2-year-old is much farther away than we think.

EDIT: If, as you claim, we are close to having the computational power to simulate human children, then why aren't we already successfully simulating much simpler animals? IIRC the best we can do is a tiny chunk of a rat, or the whole of various kinds of microscopic worms, and those are just computational models, not Turing-test-passing replicants.



