
One of the people who created this, Noam Brown, also created Pluribus, the poker AI that learned to bluff in order to win.

https://www.cmu.edu/ambassadors/october-2019/artificial-inte...

High-level, you need three things for AI to get "ex machina"-level creepy:

1) the ability to successfully manipulate humans to attain its ends

2) the ability to rewrite its objective function, i.e., to redefine its ends

3) a multi-modal understanding of the world (that goes beyond, say, text)

I would be very curious to hear how close AI researchers think we are to those three things being individually achieved and collectively combined.



I'm not sure 2 is necessary. As long as the objective is sufficiently far off, the AI has a lot of breadth in how it achieves it.
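To make the distinction concrete, here is a minimal toy sketch (purely illustrative, not related to the actual Cicero/Diplomacy system; all names and the "paperclips" objective are made up): an agent with a fixed objective still has wide latitude over intermediate states, while capability 2 would mean the objective itself is mutable.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    # objective maps a world state to a score the agent tries to maximize
    objective: Callable[[dict], float]
    can_rewrite_objective: bool = False

    def choose(self, candidate_states: list[dict]) -> dict:
        # Even with a fixed objective, the agent freely picks among
        # instrumental subgoals: any higher-scoring state is fair game.
        return max(candidate_states, key=self.objective)

    def rewrite(self, new_objective: Callable[[dict], float]) -> None:
        # Capability 2 from the list above: redefining the agent's ends.
        if not self.can_rewrite_objective:
            raise PermissionError("objective is fixed")
        self.objective = new_objective

# A fixed-objective agent (capability 2 absent) can still select
# surprising intermediate states, which is the point being made here.
fixed = Agent(objective=lambda s: s.get("paperclips", 0))
best = fixed.choose([{"paperclips": 3}, {"paperclips": 7, "oversight": 0}])
```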

But based on the paper, it sounds like this model is lacking in 3. I'm also curious how far we are from a more general model that can achieve the same results. Judging by the development of generative AI, we might not be that far.


Diplomacy has a very simple world model, and this work does not advance the world-modeling aspect at all. So I don't think it counts as evidence either way.



