There’s a lot of good stuff in this article, but it nevertheless misses the point (as, to be clear, did Yoshua Bengio as the article describes him).

When alarmists like myself talk about danger, we rarely use the term “superhuman intelligence”. We tend to use the term “superintelligence”. And the reason is that our working definition of an intelligent agent includes agents both vastly broader and vastly narrower than humans that are nevertheless a danger to humans.

So the question isn’t “but does this agent truly understand the poetry it wrote”. It’s “can it use its world model to produce a string of words that will cause humans to harm themselves”. It’s not “does it appreciate a Georgia O’Keeffe painting.” It’s “can it manipulate matter to create viruses that will kill all present and future O’Keeffes”.

“Can its world model BE USED to produce…” “Can it BE USED to manipulate…”

FTFY


No, you didn’t. I meant what I wrote. If it is only a tool, then obviously the problems are human ones. The question is whether the AI itself is setting goals humans don’t want.
