
This paper suggests that we should probably be less fearful of Terminator-style accidental or emergent AI misalignment, at least as far as the existing auto-regressive LLM architecture is concerned. We may want to revisit these concerns if and when other types of artificial general intelligence models are deployed.

The "misalignment" we do need to worry about is intentional. Naturally, the hyperscalers deploy these models to benefit themselves. Ideally, customers will select the models that are most grounded and accurate. In practice, there's a danger that people will select models that tell them what they want to hear rather than what they should hear. We've already seen this pattern with journalism and social media.

The other danger is that, absent a competitive marketplace for AI, a single corporation or a cartel will shape the narrative. The market valuations of some AI providers seem to be based on this assumption.
