
It'd be useful enough to have an AI that could be given problem descriptions of the first kind (problem statements describing problems which may or may not require original research to solve), and then manage to figure out whether they reduce to gluing a set of "known" solutions together (and then, perhaps, generate such solutions), or whether they require original scientific/mathematical research (at which point it could just shrug its digital shoulders, like most humans do at that point.)

And by "known", I don't mean "in the AI's knowledge base"; the AI would probably at least be able to hunt down textbooks and journal papers, read them, and learn problem-solving approaches from them. In other words, the AI would be at least as able to "do science" as a grad student is expected to be.

I think any algorithm that could do that would count as an artificial general intelligence (AGI), and I think the current consensus is that we have no idea how to create one.

What relation AGI has to NP-hardness is unclear. I think that if P=NP in the sense that a practical algorithm exists for solving large (i.e., ~10^9 variables) NP-complete problems, then AGI (even super-AGI) would probably follow. However, I don't think that's a necessary condition for AGI to exist.
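For concreteness, here's what "solving an NP-complete problem" looks like at toy scale: a brute-force satisfiability check over a CNF formula, whose 2^n running time is exactly the barrier a hypothetically practical P=NP algorithm would have to beat (a minimal sketch; the function name and clause encoding are my own, not from any standard library):

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Decide satisfiability of a CNF formula by exhaustive search.

    clauses: list of clauses; each clause is a list of nonzero ints,
             where k means "variable k is true" and -k means "variable k is false".
    Tries all 2^n assignments -- fine for n around 20, hopeless for n around 10^9.
    """
    for assignment in product([False, True], repeat=n_vars):
        # A clause is satisfied if any of its literals evaluates to true.
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

# (x1 or x2) and (not x1 or x2) and (x1 or not x2) -- satisfied by x1=x2=True
print(brute_force_sat([[1, 2], [-1, 2], [1, -2]], 2))   # True
# (x1) and (not x1) -- contradiction, unsatisfiable
print(brute_force_sat([[1], [-1]], 1))                  # False
```

The exponential loop is the whole point: at 10^9 variables there are 2^(10^9) assignments, so a polynomial-time alternative would be a qualitatively different kind of algorithm, not a faster version of this one.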
