1. AI / ML is not AGI.
2. Deep learning may be a tool used by an AGI, but is not itself capable of becoming an AGI.
3. MIRI believes it would be irresponsible to build, or make a serious effort at building, an AGI before the problem of friendliness / value alignment is solved.
So are they philosophers? Of a sort, though Eliezer at least can do heavy math and coding beyond what most engineers can; I wouldn't have an issue calling him a polymath.
There are lots of individuals who disagree with point 3 to various extents. Pretty much all of them are harmless, which is why MIRI isn't harping on about irresponsible people, and the harmless ones can still do good work on weak AI. You should look up the people who were on the old Shock Level 4 (SL4) mailing list. For an instance of someone who disagrees with point 3 but has the context to do so, look into Ben Goertzel's work (some on weak AI, some on AGI frameworks) and the work of others around OpenCog. Also be sure to look up their thoughts on deep learning, if they have any.
I'm not commenting on anyone's abilities, but from my perspective Eliezer's work is mostly abstract.