Ilya Sutskever – Opening Remarks: Confronting the Possibility of AGI [video] (youtube.com)
3 points by drcwpl 8 months ago | 3 comments



It is worth revisiting this talk by Ilya Sutskever.

He argues that AI safety should be a mainstream concern across the ML community (and perhaps the wider community). He reiterates that recent advances in AI have made it more likely than ever that we will achieve artificial general intelligence (AGI) "soon", and that we need to work out how to ensure that AGI is safe for humanity. This is worthy of praise.

Ilya argues that there are two main reasons why AI safety is important. First, AGI could be very powerful, and if it is not aligned with our values, it could pose a serious existential threat. Second, even if AGI is not directly harmful, it could still have a major impact on society, and we need to be prepared for the changes it will bring.

He goes on to outline a number of challenges that we need to overcome in order to ensure AI safety. These include:

- The difficulty of understanding how AI systems work, especially as they become more complex.

- The challenge of aligning AI systems with our values, given that our values are often complex and contradictory.

- The difficulty of controlling AI systems, especially if they are very powerful.

He concludes by calling for more research on AI safety and for a more open and collaborative discussion about the risks and benefits of AGI.


> He concludes by calling for more research on AI safety and for a more open and collaborative discussion about the risks and benefits of AGI.

This is a talk so devoid of technical details that it is difficult to understand the reason for its delivery.


AI alignment! It is not difficult to imagine AGI





