
> Bostrom is a philosopher

That is my main concern with people writing about the future in general. They start with a definition of a "Super Intelligent Agent" and draw conclusions from that definition alone. No consideration is (or can be) given to the limitations AI will actually have in reality. All they assume is that it must be effectively omnipotent, omnipresent, and omniscient, because otherwise it wouldn't be a superintelligence and thus wouldn't fall within the topic of discussion.

The actual limitation right now is (and imo will continue to be) that you need a ton of training examples generated by some preexisting intelligence.
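
To make that dependence concrete, here is a minimal sketch (the data, labels, and hyperparameters are invented for illustration): a plain supervised learner that can only update its weights when a human-provided label accompanies each input.

    import math
    import random

    # Hypothetical human-labelled examples (x, y); the labels stand in for
    # the "preexisting intelligence" the comment is talking about.
    labelled_data = [(0.2, 0), (0.9, 1), (0.4, 0), (0.8, 1), (0.1, 0), (0.7, 1)]

    w, b, lr = 0.0, 0.0, 0.5

    def predict(x):
        # Logistic regression over a single feature, reading the global weights.
        return 1.0 / (1.0 + math.exp(-(w * x + b)))

    for _ in range(200):
        random.shuffle(labelled_data)
        for x, y in labelled_data:  # no labelled pairs, no gradient, no learning
            p = predict(x)
            grad = p - y            # derivative of the log loss w.r.t. the logit
            w -= lr * grad * x
            b -= lr * grad

    print(predict(0.15), predict(0.85))  # low for the 0-class point, high for the 1-class point

The entire training signal here comes from labelled_data; take the labels away and the loop has nothing to descend on.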



