
> For a counterargument from one of the believers of bootstrapping, superintelligent AI: http://lesswrong.com/lw/qk/that_alien_message/

Woah, Yudkowsky takes a really long time to get to the point, doesn't he? So his point seems to be that humans aren't very efficient at processing empirical evidence, and an AI might be better at this because it would be really, really smart?



Yeah, I've always had that same complaint about his writing. Unfortunately, even just the relevant portions I might have quoted wouldn't have fit concisely in a comment. The key parts are about how the models used to describe the world often predate the observed phenomena they're used to describe (Riemannian geometry for general relativity, in his example), and how the human ability to eliminate hypotheses from observed data isn't even remotely close to the limits set by information theory.
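To make the information-theoretic limit concrete: each maximally informative binary observation can at most halve the space of hypotheses still consistent with the evidence. A minimal sketch of that bound (the function name and numbers here are illustrative, not from Yudkowsky's post):

```python
def ideal_elimination(num_hypotheses, bits_observed):
    """Upper bound on hypotheses remaining after `bits_observed` bits of
    maximally informative evidence: each bit can at most halve the space."""
    return max(1, num_hypotheses // (2 ** bits_observed))

# A space of a million candidate models collapses to a single survivor
# after only 20 well-chosen bits of evidence.
remaining = ideal_elimination(1_000_000, 20)  # -> 1
```

An ideal Bayesian reasoner tracks this bound; the claim is that humans extract far less than one bit's worth of elimination per observation, which is the gap a much smarter reasoner could exploit.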

None of this is to say that advances in AI will automatically guarantee wonderful new physics, much less a final theory explaining it all, but it seems really unlikely that massively improved cognition (through AI or other means) wouldn't lead to much faster progress on all sorts of problems. We're certainly not making optimal use of the data we've already observed about the world, and even if that were the bottleneck, AI should be better at collecting and correlating more data at once anyway. The LHC already relies on machine learning to sort through the massive number of collisions and detections it produces, so recent experimental advances are essentially dependent on AI as it is.
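The LHC point is about trigger systems: far more collisions happen than can be stored, so a classifier scores events and only the most promising fraction is kept. A toy sketch of that filtering shape (hypothetical code, not the actual LHC trigger; the `score` field stands in for a learned classifier's output):

```python
def trigger_filter(events, keep_fraction=0.01):
    """Keep only the top `keep_fraction` of events, ranked by a
    classifier score -- the basic shape of an ML-based trigger."""
    ranked = sorted(events, key=lambda e: e["score"], reverse=True)
    cutoff = max(1, int(len(ranked) * keep_fraction))
    return ranked[:cutoff]

# 1000 simulated events with synthetic scores; keep the top 1%.
events = [{"id": i, "score": (i * 37) % 100} for i in range(1000)]
kept = trigger_filter(events, keep_fraction=0.01)
```

The real systems are vastly more sophisticated, but the point stands: the raw data rate already exceeds what unassisted humans could sift, so the experimental pipeline depends on automated selection.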



