I'm surprised he'd make such an optimistic statement. I think a better analogy would be:

We figured out how to make icing, but we still don't really know what a cake is.




He addresses your point, not in the analogy but in the additional comment.

"And that's just an obstacle we know about. What about all the ones we don't know about?"

With sufficient scrutiny all analogies break down, so readers must be generous with their interpretation.


We can describe Solomonoff-based agents like AIXI. None of them are fully sufficient for true general AI, but you could probably accomplish quite a bit with an AIXI-like agent.
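For concreteness, Hutter's AIXI picks actions by an expectimax over all programs, weighted by a 2^-length prior. Roughly (notation as in Hutter's book: m is the planning horizon, U a universal Turing machine, ℓ(q) the length of program q):

    a_k := \arg\max_{a_k} \sum_{o_k r_k} \max_{a_{k+1}} \sum_{o_{k+1} r_{k+1}} \cdots \max_{a_m} \sum_{o_m r_m}
           (r_k + \cdots + r_m) \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}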


As far as I know, the only thing existing AIXI implementations have demonstrated is learning to play Pac-Man at a somewhat reasonable, but by no means stellar, level.


Yes, it is not tractable. It serves as an example of a formal definition of an agent, though.


I thought full AIXI wasn't computable though?


You can do time-bounded AIXI (Hutter's AIXItl), and for a large enough time bound it's sufficient for all practical situations.
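A toy Python sketch in that spirit, to make the idea concrete. Everything here is illustrative: the "programs" are a small hypothetical hypothesis class of Python callables rather than enumerated Turing machines (so the per-program time bound is trivially satisfied), and a 2^-length prior plus consistency filtering stands in for the Solomonoff mixture:

    ACTIONS = [0, 1]

    def make_hypotheses(max_len):
        # Hypothetical stand-in for "all programs of description length <= l":
        # each hypothesis maps (history, action) -> (predicted obs, reward)
        # and carries a description length used for the 2^-length prior.
        hyps = []
        for bias in range(1, max_len + 1):
            def h(history, action, bias=bias):
                obs = (len(history) + bias) % 2
                reward = 1.0 if action == obs else 0.0
                return obs, reward
            hyps.append((h, bias))  # (program, description length)
        return hyps

    def posterior_weight(program, length, history):
        # Prior 2^-length, zeroed out if the program's predictions
        # contradict the observed (action, obs, reward) history.
        sim = []
        for (a, o, r) in history:
            if program(sim, a) != (o, r):
                return 0.0
            sim.append((a, o, r))
        return 2.0 ** -length

    def choose_action(hyps, history, horizon):
        # Crude stand-in for AIXI's expectimax: roll each surviving
        # hypothesis forward `horizon` steps and pick the action with
        # the highest posterior-weighted return.
        best_a, best_v = None, float("-inf")
        for a in ACTIONS:
            v = 0.0
            for prog, length in hyps:
                w = posterior_weight(prog, length, history)
                if w == 0.0:
                    continue
                sim, act, total = list(history), a, 0.0
                for _ in range(horizon):
                    o, r = prog(sim, act)
                    total += r
                    sim.append((act, o, r))
                    act = max(ACTIONS, key=lambda b: prog(sim, b)[1])
                v += w * total
            if v > best_v:
                best_a, best_v = a, v
        return best_a

    # Tiny demo against an environment the hypothesis class can represent.
    hyps = make_hypotheses(max_len=4)
    history = []
    for _ in range(5):
        a = choose_action(hyps, history, horizon=3)
        o = (len(history) + 1) % 2          # environment: bias = 1
        r = 1.0 if a == o else 0.0
        history.append((a, o, r))
    print(history)

After a step or two, hypotheses inconsistent with the history get weight zero and the agent reliably predicts the rewarded action. The real thing enumerates all programs up to length l and runs each for at most t steps, which is computable but wildly intractable.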

AIXI is not tractable, but I'm responding to the parent comment saying we don't even know what a cake is.


I don't consider this mathematical model of induction/prediction to be an accurate description of "intelligence" in the sense of "True AI" (which itself is open to interpretation).

This is the cake: human intelligence. Right now we have pieces of it, but even the end goal isn't well defined. We know the human mind makes predictions, recognizes patterns, can formulate plans, works with both concrete and fuzzy information, and so on. But we still don't understand what human intelligence really is, overall.



