
You won't find direct ML work from MIRI because:

1. AI / ML is not AGI.

2. Deep learning may be a tool used by an AGI, but is not itself capable of becoming an AGI.

3. MIRI believes it would be irresponsible to build, or make a serious effort at building, an AGI before the problem of friendliness / value alignment is solved.

So are they philosophers? Of a sort, though Eliezer at least is one who can handle math and code beyond what most engineers can manage. I wouldn't have an issue calling him a polymath.

There are lots of individuals who disagree to various extents with point 3. Pretty much all of them are harmless, which is why MIRI doesn't spend its time harping about irresponsible people; the harmless ones can still do good work on weak AI. Look up the people who were on the old shock level 4 mailing list. For an example of someone who disagrees with point 3 but has the background to do so, look into Ben Goertzel's work (some on weak AI, some on AGI frameworks) and the work of others around OpenCog. Also look up their thoughts, if they have any, on deep learning.


We are in agreement on the facts (1 and 2). I was quibbling with pmichaud's implication that there is any significant overlap between deep learning / traditional ML and the FAI/AGI community.

I'm not speaking about anyone's abilities, but from my perspective Eliezer's work is mostly abstract.


Using the term philosopher for researchers in friendly AI is not derogatory anyway. Much of the interesting work written about AGI in the last decade is absolutely philosophy, in the same way that the more concrete pre-industrial thinking about space and celestial bodies was philosophy. Philosophy and science go hand in hand, and there is often an overlap when our fundamental understanding of a subject is shifting.

