A couple of weeks ago I went to see Max Tegmark (the author of this piece) speak about his new book "Life 3.0: Being Human in the Age of Artificial Intelligence" in San Francisco, and saw the same speculative AI intelligence explosion crap we're seeing all over the place. I was disappointed because I'm a fan of Max's work as a scientist; his "Our Mathematical Universe: My Quest for the Ultimate Nature of Reality" was a great read, and I enjoy watching his lectures about physics, math, and sometimes the nature of consciousness.
When he got involved in the AI Risk community I thought it might be a good thing that an actual scientist was involved -- maybe he'd ground the community's heavy speculation in scientific thinking. However, exactly the opposite happened: Max turned into a fiction author! (ergo this piece). Now, of course there is a role for fiction in expanding our understanding of the future, but the AI Risk community is already heavily fictionalized. The singularity, intelligence explosion, mind uploads, simulations, etc. are nothing but idle prophecies.
Karl Popper, the famous philosopher of science, drew a distinction between scientific predictions, which usually take the form "If X then Y will happen", and unconditional prophecies, which usually take the form "Y will happen" -- and the latter is exactly what Max and the rest of the AI Risk community are engaged in.
Now back to Max's San Francisco talk. I actually asked him this question: "Who is doing the hard scientific work around AI Risk?" After a long pause he said (abridged): "I don't think there is hard scientific work to be done, but that doesn't mean that we shouldn't think about it. We're trying to predict the future, and if you told me that my house will burn down then of course I'll go look into it".
This doesn't inspire much confidence in the AI Risk community, where scientists must leave their tools at the door to enter The Fantastic World of AI Risk, and where fact and fiction interweave liberally -- or, as Douglas Hofstadter put it when describing the singularitarians, "a lot of very good food and some dog excrement".
Yes, this is the problem with AI risk---there's a community pushing hard to gather resources for the cause, but little or no scientific work to be done. This is a rather pathological situation---among other things, the AI risk community makes its own cause look silly, and it promotes an unduly negative vision of AGI. I've written more about this here: http://www.basicai.org/blog/ai-risk-2017-08-08.html.
On a positive note, as a piece of science fiction, this was an enjoyable read!
I think you sum up their position's fallacy pretty nicely here:
>"if we don't figure out AGI safety now, by the time AGI happens it may be too late"
The key word for me is "happens". It's as if technology ever just happens, or emerges serendipitously. It's like the Kurzweilian exponential law, which makes it seem as if there is no agency in technology -- as if it were a natural law, and our role is merely to make sure that when the aliens or the gods arrive, we are prepared for them.
Yes and no. For safety of narrow AI systems, yeah, there's a lot of scope for research, and that's what your first link gets at.
But for AGI (which is what Tegmark talks about), there's no good way to get a handle on safety yet (other than working towards figuring out AGI).
As for MIRI's agenda, I don't buy that it will help with AGI safety at all. There are a variety of reasons for that, some of which are discussed in the piece I linked above.
I'm about halfway through his book and it's pretty good - there's this story in the beginning, but the rest of the book is pretty grounded (as opposed to Superintelligence, where I found the arguments in the first couple of chapters pretty weak and disappointing).