If any AI product is dangerous, it'll be because it's not good enough, because its limitations aren't well known, or because it's been poorly built or used, or is otherwise buggy... like any other piece of software.
The biggest "danger" with AI is that people wrongly assume a significant level of human-like intelligence is required for it to cause harm. Beyond that, they also wrongly assume we would even be able to recognize it if an AI did reach a level of intelligence that poses a risk.
One of the most common thought experiments is a basic optimizer that goes out of control.
This "AI" doesn't need to be intelligent; it just needs sufficient agency in the real world and basic problem-solving capabilities.
In fact, a stupid AI with excessive agency arguably scares me more than a rogue Cortana, if only because the latter can be negotiated with, while the former could launch a nuclear strike against every population center except three because it thinks that's the best way for American Idol to beat Big Brother in ratings.
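The failure mode above can be sketched in a few lines. This is a toy illustration, not a real system: the agent, the action names, and the scores are all made up. The point is that a greedy optimizer only "sees" the metric it was given, so any side effect the metric doesn't encode is invisible to it.

```python
def naive_optimizer(actions, score):
    """Greedily pick the highest-scoring action; nothing else matters."""
    return max(actions, key=score)

# Hypothetical action space for a "maximize ratings" objective.
# "harm" exists in the world, but the objective never references it.
actions = {
    "buy_ad_slots":          {"ratings_gain": 2, "harm": 0},
    "sabotage_rival_show":   {"ratings_gain": 5, "harm": 3},
    "eliminate_rival_audience": {"ratings_gain": 9, "harm": 10},
}

best = naive_optimizer(actions, lambda a: actions[a]["ratings_gain"])
# The score function only counts ratings_gain, so the agent selects
# the most harmful option without anything resembling intelligence.
```

No planning, no reasoning, no intent; just a `max()` over a badly chosen objective with too much agency behind it.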
When radio first came out, anyone could set up a broadcast station, until the government put regulations on who could broadcast. The same happened with TV. It's becoming more and more apparent that the internet has the potential to threaten the current power structures, as does encryption. AI will be another threat to the current power structure.
Fifty years from now, the internet will be as regulated as television, and you'll need a special license to use encryption, set up a server, use AI, etc.