
You are not correct; there are people working on these problems who are experts in the relevant technical fields. There just aren't enough of them.

But yes, I'm also concerned about the lack of safety-focused headliners at OpenAI, given their stated position that safety is important.




Who are those researchers? I'll admit I don't follow the stuff written by the friendly AI folks very much; I only know of Bostrom/Yudkowsky, both of whom are very much philosophers.

All of the hype around ML today is in deep learning (let's be honest, OpenAI would not exist if that weren't the case), and AFAIK there is almost no overlap between people who are prolific in deep learning and people who are prolific in FAI.


You won't find direct ML work from MIRI because:

1. AI / ML is not AGI.

2. Deep learning may be a tool used by an AGI, but is not itself capable of becoming an AGI.

3. MIRI believes it would be irresponsible to build, or make a serious effort at building, an AGI before the problem of friendliness / value alignment is solved.

So are they philosophers? Of a sort, but Eliezer at least is one who can do heavy math and coding that most engineers can't. I wouldn't have an issue calling him a polymath.

There are lots of individuals who disagree to various extents with point 3. Pretty much all of them are harmless, which is why MIRI isn't harping about irresponsible people. But the harmless ones can still do good work on weak AI. You should look up people who were on the old SL4 (shock level 4) mailing list. Have a look at Ben Goertzel's work (some on weak AI, some on AGI frameworks) and the work of others around OpenCog for an example of someone who disagrees with 3 but nevertheless has the context to do so. Also be sure to look up their thoughts, if they have any, on deep learning.


We are in agreement on the facts (1 and 2). I was quibbling with pmichaud's implication that there is any significant overlap between the deep learning / traditional ML community and the FAI/AGI community.

I'm not speaking about anyone's abilities, but from my perspective Eliezer's work is mostly abstract.


Using the term philosopher for researchers in friendly AI is not derogatory anyway. Much of the interesting stuff that has been written about AGI in the last decade is absolutely philosophy, in the same way that the more concrete pre-industrial thoughts on space and the celestial were philosophy. Philosophy and science go hand in hand, and there is often an overlap when our fundamental understanding of a subject is shifting.


Here's an overview of the technical work MIRI has done: https://intelligence.org/research/

It's true that Bostrom and Yudkowsky, as individuals, aren't deep learning people. However, I know that MIRI, and I believe FHI/CSER, do send people to top conferences like AAAI and NIPS.


Skimming through that list, are any of those papers about actual running AI systems? It's important to realize that most of the work in deep learning is heavy engineering: devising new architectures and building real systems, implemented in CUDA/C++, that you can download and run on your computer. (This is why one common criticism of deep learning is its lack of theory, and it's a totally valid criticism.)


What I've heard is that MIRI has an explicit philosophy of concentrating on the more abstract and theoretical aspects of AI safety. The idea is that if AI safety were something you could just tack onto a working design at the end, MIRI would have no comparative advantage there: it's difficult to predict which design will win, and that design's implementors are best positioned to tack on the safety bit themselves.

>...imagine a hypothetical computer security expert named Bruce. You tell Bruce that he and his team have just 3 years to modify the latest version of Microsoft Windows so that it can’t be hacked in any way, even by the smartest hackers on Earth. If he fails, Earth will be destroyed because reasons.

>Bruce just stares at you and says, “Well, that’s impossible, so I guess we’re all fucked.”

>The problem, Bruce explains, is that Microsoft Windows was never designed to be anything remotely like “unhackable.” It was designed to be easily useable, and compatible with lots of software, and flexible, and affordable, and just barely secure enough to be marketable, and you can’t just slap on a special Unhackability Module at the last minute.

>To get a system that even has a chance at being robustly unhackable, Bruce explains, you’ve got to design an entirely different hardware + software system that was designed from the ground up to be unhackable. And that system must be designed in an entirely different way than Microsoft Windows is, and no team in the world could do everything that is required for that in a mere 3 years. So, we’re fucked.

>But! By a stroke of luck, Bruce learns that some teams outside Microsoft have been working on a theoretically unhackable hardware + software system for the past several decades (high reliability is hard) — people like Greg Morrisett (SAFE) and Gerwin Klein (seL4). Bruce says he might be able to take their work and add the features you need, while preserving the strong security guarantees of the original highly secure system. Bruce sets Microsoft Windows aside and gets to work on trying to make this other system satisfy the mysterious reasons while remaining unhackable. He and his team succeed just in time to save the day.

>This is an oversimplified and comically romantic way to illustrate what MIRI is trying to do in the area of long-term AI safety...

http://lukemuehlhauser.com/a-reply-to-wait-but-why-on-machin...


From what I've seen, MIRI's work is primarily on rule-based systems. Is any of it relevant to neural networks?


MIRI is basically an organization surrounding Yudkowsky's cult. He's a senior "research fellow" who hasn't published anything in respected peer-reviewed journals, and he generally holds some pretty questionable beliefs.

http://laurencetennant.com/bonds/cultofbayes.html



