But yes, I'm also concerned about the lack of safety-focused headliners at OpenAI, given their message that they think safety is important.
All of the hype around ML today is in deep learning (let's be honest, OpenAI would not exist if that weren't the case), and AFAIK there is almost no overlap between people who are prolific in deep learning and people who are prolific in FAI.
1. AI / ML is not AGI.
2. Deep learning may be a tool used by an AGI, but is not itself capable of becoming an AGI.
3. MIRI believes it would be irresponsible to build, or make a serious effort at building, an AGI before the problem of friendliness / value alignment is solved.
So are they philosophers? Of a sort, but Eliezer at least is one who can do heavy math and coding that most engineers can't. I wouldn't have an issue calling him a polymath.
There are lots of individuals who disagree to various extents with point 3. Pretty much all of them are harmless, which is why MIRI isn't harping about irresponsible people, and the harmless ones can still do good work on weak AI. You should look up people who were on the old SL4 (shock level 4) mailing list. Have a look at Ben Goertzel's work (some on weak AI, some on AGI frameworks) and the work of others around OpenCog for an instance of someone who disagrees with 3 but has the context to do so. Also be sure to look up their thoughts, if they have any, on deep learning.
I'm not speaking to anyone's abilities, but from my perspective Eliezer's work is mostly abstract.
It's true that Bostrom and Yudkowsky, as individuals, aren't deep learning people. However, I know that MIRI (and, I believe, FHI/CSER) do send people to top conferences like AAAI and NIPS.
>...imagine a hypothetical computer security expert named Bruce. You tell Bruce that he and his team have just 3 years to modify the latest version of Microsoft Windows so that it can’t be hacked in any way, even by the smartest hackers on Earth. If he fails, Earth will be destroyed because reasons.
>Bruce just stares at you and says, “Well, that’s impossible, so I guess we’re all fucked.”
>The problem, Bruce explains, is that Microsoft Windows was never designed to be anything remotely like “unhackable.” It was designed to be easily useable, and compatible with lots of software, and flexible, and affordable, and just barely secure enough to be marketable, and you can’t just slap on a special Unhackability Module at the last minute.
>To get a system that even has a chance at being robustly unhackable, Bruce explains, you’ve got to design an entirely different hardware + software system that was designed from the ground up to be unhackable. And that system must be designed in an entirely different way than Microsoft Windows is, and no team in the world could do everything that is required for that in a mere 3 years. So, we’re fucked.
>But! By a stroke of luck, Bruce learns that some teams outside Microsoft have been working on a theoretically unhackable hardware + software system for the past several decades (high reliability is hard) — people like Greg Morrisett (SAFE) and Gerwin Klein (seL4). Bruce says he might be able to take their work and add the features you need, while preserving the strong security guarantees of the original highly secure system. Bruce sets Microsoft Windows aside and gets to work on trying to make this other system satisfy the mysterious reasons while remaining unhackable. He and his team succeed just in time to save the day.
>This is an oversimplified and comically romantic way to illustrate what MIRI is trying to do in the area of long-term AI safety...