
It's hard to say how "genuine" cleverness works even in human beings, and we've had far more time to study ourselves (granted, for most of that time we lacked the tools to understand our brains at the algorithmic level). I'm not claiming that achieving some version of "genuine" AGI necessarily requires understanding how human intelligence works, but it is reasonable to expect that knowing more about it would narrow the search space of possible systems.



Is it reasonable, though? What would you learn about wheels by studying cheetahs? About steam engines by studying strong elephants? About cranes by studying giraffes? About jet engines by studying birds? What if AI relates to the human brain in the same way: completely different, and far better along some dimensions?





