I hope so. What's really funny is that I was reading through some of your articles on LessWrong just this morning and almost put in a caveat along the lines of "Assuming no singularity event".
Out of curiosity, how confident are you about the general idea of humanity surviving and in a good state? I know your work focuses on the idea that AI, poorly implemented, will probably destroy everything of value; so how confident are you that it will be well implemented?