Now, don't get me wrong -- I like sf as well as the next nerd who grew up on that and microcomputers. But it shouldn't be mistaken for a roadmap of the future.
Bostrom doesn't understand the research, he doesn't understand the current or likely future of the technology, and he doesn't really seem to understand computers.
What's left is medieval magical thinking - if we keep doing these spells, we might summon a bad demon.
As a realistic risk assessment, it's comically irrelevant. There are certainly plenty of risks around technology, and even around AI. But all Bostrom has done is suggest We Should Be Very Worried because It Might Go Horribly Wrong.
This isn't very interesting as a thoughtful assessment of the future of AI - although I suppose if you're peddling a medieval world view, you may as well include a few visions of the apocalypse.
I think it's fascinating on a meta-level as an example of the kinds of stories people tell themselves about technology. Arguably - and unintentionally - it says a lot more about how we feel about technology today than about what's going to happen fifty or a hundred years from now.
The giveaway is the framing. In Bostrom's world you have a mad machine blindly consuming everything and everyone for trivial ends.
That certainly sounds like something familiar - but it's not AI.
That is my main concern about futurist writing in general. You start with a definition of a "Super Intelligent Agent" and draw conclusions from that definition alone. No consideration is (or can be) given to the limitations AI will have in reality. All that matters is that the agent must be effectively omnipotent, omnipresent and omniscient, or it wouldn't be a superintelligence, and thus wouldn't fall within the topic of discussion.
One such limitation right now (and, imo, one that will persist) is that you need a ton of training examples generated by some preexisting intelligence.