
The point of this is what is known as model-based learning. The long-term goal is for an AI agent to be able to predict the outcome of a given action (jumping, walking left, etc.) before taking it. When an agent can do this, it doesn't need to die to learn that jumping down a hole will end the game; it can predict it. And once you have that, AI techniques like Watson's could control robots: they won't need to kill someone to know that driving a pole through a head is no good. They'll be able to 'reason' it out.
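To make the idea concrete, here's a minimal sketch of model-based action selection. The "world model" is a hand-written lookup table standing in for a learned predictor, and the names (`predict`, `best_action`, the toy states) are all hypothetical, not from any particular system:

```python
TERMINAL = "dead"  # predicted game-over state

# Toy stand-in for a learned model: (state, action) -> (predicted next state, reward)
world_model = {
    ("edge_of_hole", "jump_down"):   (TERMINAL, -100),
    ("edge_of_hole", "walk_left"):   ("safe_ground", 0),
    ("edge_of_hole", "jump_across"): ("far_side", 10),
}

def predict(state, action):
    """Ask the model what would happen, instead of trying it for real."""
    return world_model[(state, action)]

def best_action(state, actions):
    """Pick the action with the best predicted reward, skipping any
    action the model predicts would end the game."""
    best, best_reward = None, float("-inf")
    for a in actions:
        next_state, reward = predict(state, a)
        if next_state == TERMINAL:
            continue  # rejected by reasoning, not by dying
        if reward > best_reward:
            best, best_reward = a, reward
    return best

print(best_action("edge_of_hole", ["jump_down", "walk_left", "jump_across"]))
# prints "jump_across" -- the fatal jump is ruled out without ever executing it
```

A model-free agent would have to actually fall down the hole (possibly many times) to learn the same lesson; the whole appeal of the model-based approach is that the bad outcome is simulated rather than experienced.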


