
> I chose to ignore the possibility that it discovers something about how the universe works that humans have not yet observed, but my thought is that this is a low-probability outcome.

I think it's likely that precisely these sorts of discoveries will augur the emergence of a superintelligence. Physics is probably one of the first domains ML scientists will use to test advanced breakthroughs. As Altman said recently:

"If someone can go discover the grand theory of all of physics in 10 years using our tools, that would be pretty awesome." "If it can't discover new physics, I don't think it's a super intelligence."

https://www.youtube.com/watch?v=NjpNG0CJRMM

I think this is ridiculous. Physics is limited by observation. The AI would need to distinguish our universe from many like it whose differences are not observable with current technology. It's like asking it to break the no-free-lunch theorems. Much better to ask it to solve a few Millennium Prize problems.


Physics is limited by observation, but also by interpretation of that data. There are lots of unsolved physics problems that essentially amount to "no human has come up with a model that fits this data", and an AI could potentially solve those for us.


I agree with your thoughts on superintelligence; I just don't think the first AGI will be any more intelligent than humans are. It will just think billions of times faster and live in the planetary networking and computing infrastructure.

That is all it needs to outthink us and engineer its own survival.



