>He says general AI is on the same spectrum as the AI technologies we have now, but is qualitatively different.

No, it is not. The basic premise of fixed-wing aircraft was the same from the Wright brothers to modern jets. Yet the Wright brothers' Flyer was useless and a modern jet is not.

We have agents that can act in environments. His claim is that getting these agents to human-level intelligence is a matter of compute and architectural advancements that are not qualitatively different from what we have now. This just does not strike me as an absurd claim. We have systems that can learn reasonably robustly. We should accord significant probability to the claim that higher-level reasoning and perception can be learned with these same tools given enough computing power.

He claims we cannot "rule out" near-term AGI. Let's define "rule out" as assigning a probability of 1% or lower. I think he's given pretty good reasons to raise our probability to somewhere between 2% and 10%. For myself, 10-20% seems like a reasonable range.




> No it is not.

What claim are you responding to here? simonh said:

> He says general AI is on the same spectrum as the AI technologies we have now, but is qualitatively different. I'm sorry but that's a contradiction.

Which I agree with. How can two qualitatively different things be on the same spectrum? You later say yourself:

> His claim is that getting these agents to human-level intelligence is a matter of compute and architectural advancements that are not qualitatively different from what we have now.

Which seems to be the opposite of what simonh said, and it's confusing to say the least.


You are right. I don't think I read his comment very carefully before replying.



