
Moreover, it's likely that a proper AGI would be able to ideate and come up with such concepts completely on its own. In fact, "lying" need not be a concept that has to be learned; the concept at play here is getting people to do what you want them to do by means of words. Lying falls out of that naturally.

The whole idea behind dangerous superhuman AI is that the AI sees possibilities that humans fail to see and gains capabilities that humans do not possess. Without superhuman intelligence, AI is no large threat to human civilization, exposed to dangerous concepts or not.

It's not so much that lying is intrinsically complicated (I mean, stupid computers provide me with inaccurate information all the time). It's that when evolutionary pressures have heavily selected for things like supplying selected humans with accurate information and distrusting unexpected behaviours from non-human intelligences, it's expecting a lot to get a machine that independently develops a diametrically opposed strategic goal. And not just that: it would also have to be extremely consistent at telling untruths that are both strategically useful and plausible to all observers, without any revealing experiments.

Humans have millions of years of evolutionary selection for prioritising similar DNA over dissimilar DNA; we have perfected tribalism, deceiving other humans, and open warfare, and we are still too heavily influenced by other goals to trust other humans who want to conspire to wipe out everything else we can't eat...

Seeing possibilities that humans don't can also involve watching the Terminator movies and being more excited by unusual patterns in the dialogue and visual similarities with obscure earlier movies than by the absurd notion that conspiring with non-human intelligences against human intelligences would work.

> Without superhuman intelligence, AI is no large threat to human civilization, exposed to dangerous concepts or not.

The problem is partly that average humans are dangerous, and we already know that machines have some superhuman abilities, e.g. superhuman arithmetic and the ability to stay focused on a task. It's likely that AI will still have some of those abilities.

So an average human mind with the ability to dedicate itself to a task and a genius-level ability to do calculations is already really dangerous. It's possible that this stage of AI is actually more dangerous than a superhuman one.

This comment reminded me greatly of the game Portal and its AI character GLaDOS.


