Why should we need AGI for AI to be more of a threat than a random human? We already have assassination bots that allow people to be killed from across the globe. Add in your average sociopathic corporation and a government that has a track record of waging wars to protect its economic interests, and AGI becomes no more than a drop in that same bucket.
(You didn't respond with an answer to my question, which discourages me from answering yours! But I'll answer anyway in good faith, and I hope you will do the same.)
(1) Possible in the foreseeable future: The strongest evidence I have is the existence of humans.
I don't believe in magic or the immaterial human soul, so I conclude that human intelligence is in principle computable by an algorithm that could be implemented on a computer. While human wetware is very efficient, I don't think that such an algorithm would require vastly greater compute resources than we have available today.
Still, I used to think that the algorithm itself would be a very hard nut to crack. But that was back in the olden days, when it was widely believed that computers could not compete with humans at perception, poetry, music, artwork, or even the game of Go. Now AI is passing the Turing Test with flying colours, composing rap lyrics, drawing beautiful artwork and photorealistic images, and writing passable (if flawed) code.
Of course nobody has yet created AGI. But the gap between AI and AGI is gradually closing as breakthroughs are made. It increasingly seems to me that, while some of the secrets of human thought remain hard nuts to crack, they are probably few and finite, less insurmountable than previously thought, and likely to yield to the resources being thrown at the problem.
(2) AGI will be more of a threat than any random human: I don't know what could count as "evidence" in your mind (see: the comment that you replied to), so I will present logical reasoning in its place.
An AGI with median-human-level intelligence would be more of a threat than many humans, though less of a threat than humans like Putin. The reason an AGI would be a greater threat than most humans is that humans are physically embodied, while an AGI is electronic: we have established, if imperfect, security practices against humans, but none tested against AGI. Unlike a human, an AGI could:

- feasibly and instantaneously create fully-formed copies of itself, back itself up, and transmit itself remotely;
- improve its intrinsic mental capabilities by adding hardware;
- experiment with self-modification, given decent expertise in AI programming;
- develop on a timeline not inherently tied to a ~20-year maturation period;
- and, if it were interested in pursuing the extinction of the human race, potentially use methods that it might itself survive with moderate probability.
If the AGI is smarter than most humans, or smarter than all humans, then I would need strong evidence to believe it is not more of a threat than any random human.
And if an AGI can be made as smart as a human, I would be surprised if it could not be made smarter than the smartest human.