The whole idea behind dangerous superhuman AI is that the AI sees possibilities humans fail to see and gains capabilities humans do not possess. Without superhuman intelligence, AI poses no great threat to human civilization, whether it has been exposed to dangerous concepts or not.
Humans have millions of years of evolutionary selection for prioritising similar DNA over dissimilar DNA; we have perfected tribalism, deceiving other humans, and open warfare, and we are still too heavily influenced by other goals to trust fellow humans who want to conspire to wipe out everything else we can't eat...
Seeing possibilities that humans don't can also mean watching the Terminator movies and being more excited by unusual patterns in the dialogue and visual similarities to obscure earlier movies than by the absurd notion that conspiring with non-human intelligences against human intelligences would work.
The problem is partly that average humans are dangerous, and we already know that machines have some superhuman abilities, e.g. superhuman arithmetic and the ability to stay focused on a task. It's likely that AI will retain some of those abilities.
So an average human-level mind with the ability to dedicate itself to a task and genius-level calculating ability is already really dangerous. It's possible that this stage of AI is actually more dangerous than superhuman ones.