
Improving the efficiency of an optimization algorithm doesn't get you AGI; that should be clear by now. As for fielding such systems, one quick way to do serious harm to humanity is to turn everything into a glorified optimization problem, which will no doubt be turned against people to maximize profit.



If you accelerate AIXI (an optimization algorithm) [0] enough to run in real time, you get AGI.

[0]: https://en.wikipedia.org/wiki/AIXI
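
For reference, AIXI's action selection (per Hutter's definition, which the linked article reproduces) is an expectimax over every program q for a universal Turing machine U, weighted by the algorithmic prior 2^{-length(q)}; in LaTeX notation:

    a_t := \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m}
           (r_t + \cdots + r_m) \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

The inner sum ranges over all programs consistent with the history, which is why AIXI itself is incomputable, and the outer expectimax is exponential in the horizon m.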


If only this weren't a fundamentally flawed theory, one that isn't scalable for reasons of computational complexity and information theory.


Approximations to AIXI are computable.
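
They are. AIXItl bounds program length and runtime, and MC-AIXI-CTW replaces the universal mixture with a mixture over context trees. Below is a toy Python sketch of the general recipe (the three-model hypothesis class and every name in it are made up for illustration, not taken from either paper): restrict to a finite model class, keep a 2^{-length} prior as the complexity penalty, and plan by finite-horizon expectimax over the Bayesian mixture.

    # A toy, computable AIXI-style agent -- a minimal sketch, not MC-AIXI-CTW.
    # Percepts double as rewards (percept 1 = reward 1) to keep it short.

    ACTIONS = (0, 1)

    class Model:
        """An environment hypothesis: predicts P(next percept = 1 | history, action)."""
        def __init__(self, name, length, predict):
            self.name = name
            self.length = length    # crude stand-in for description length |q|
            self.predict = predict  # (history, action) -> P(next percept = 1)

    # Hypothetical finite model class standing in for "all programs":
    MODELS = [
        Model("always-1", 2, lambda h, a: 1.0),
        Model("echo-action", 3, lambda h, a: float(a)),
        Model("flip-last", 4, lambda h, a: 1.0 - (h[-1][1] if h else 0.5)),
    ]

    def posterior(history):
        """Bayes: prior 2^-length times the likelihood of the observed history."""
        weights = {}
        for m in MODELS:
            w, past = 2.0 ** -m.length, ()
            for action, percept in history:
                p1 = m.predict(past, action)
                w *= p1 if percept == 1 else 1.0 - p1
                past += ((action, percept),)
            weights[m.name] = w
        total = sum(weights.values()) or 1.0
        return {name: w / total for name, w in weights.items()}

    def q_value(history, action, horizon):
        """Expected return of one action under the current Bayesian mixture."""
        post = posterior(history)
        p1 = sum(post[m.name] * m.predict(history, action) for m in MODELS)
        return sum(p * (percept + value(history + ((action, percept),), horizon - 1))
                   for percept, p in ((1, p1), (0, 1.0 - p1)) if p > 0.0)

    def value(history, horizon):
        """Expectimax: max over actions, expectation over percepts."""
        if horizon == 0:
            return 0.0
        return max(q_value(history, a, horizon) for a in ACTIONS)

    def act(history, horizon=4):
        """Pick the action with the highest finite-horizon expected return."""
        return max(ACTIONS, key=lambda a: q_value(history, a, horizon))

    if __name__ == "__main__":
        history = ((1, 1), (0, 0), (1, 1))       # (action, percept) pairs
        print("posterior:", posterior(history))  # "echo-action" should dominate
        print("action:", act(history))           # expected: 1

Running it, the posterior concentrates on "echo-action" after three consistent observations and the agent picks action 1. The point is only that once the model class is finite, every step of AIXI's definition becomes a finite computation; it says nothing about whether such restrictions scale.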



