
It’s a fun thought experiment, but it essentially begins by assuming that “the computer becomes God in a box”. Yes, if you grant that assumption, everything goes to shit.

In a world where problems have real computational complexity and the halting problem is undecidable, I find it very unconvincing that once we design an AI that can improve itself, it will reach that God-in-a-Box level rather than, say, becoming a 20% better version of itself after racking up $100MM in AWS compute on the task of improving itself.
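
To make the diminishing-returns intuition concrete, here’s a toy model. Every number in it is invented for illustration (the 20% first-round gain, the decay rate, the hypothetical $20MM-per-round compute cost); the only point is that if each round of self-improvement yields a smaller relative gain than the last, capability converges to a finite ceiling instead of exploding:

    # Toy model: recursive self-improvement with diminishing returns.
    # All numbers are made up; the point is only that a geometric
    # series of gains converges instead of exploding.

    capability = 1.0        # starting capability (arbitrary units)
    gain = 0.20             # first round improves the system by 20%
    decay = 0.5             # each round's relative gain is half the last
    cost_per_round = 20e6   # hypothetical $20MM of compute per round
    budget = 100e6          # $100MM total

    spent = 0.0
    rounds = 0
    while spent + cost_per_round <= budget:
        capability *= 1.0 + gain   # apply this round's improvement
        gain *= decay              # harder problems: returns diminish
        spent += cost_per_round
        rounds += 1
        print(f"round {rounds}: capability={capability:.3f}, spent=${spent/1e6:.0f}MM")

    # Even with an unlimited budget, capability stays bounded:
    # the product of (1 + 0.2 * 0.5**k) over all k converges,
    # because the sum of 0.2 * 0.5**k converges.

Under these (made-up) assumptions, the whole $100MM buys five rounds and a system roughly 44% better than where it started, not a god.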
