
They're worried about a new type of colonialism through AI, and I get that. But why not then use those countries' laws and culture as a baseline? Yes, you'd have to manually input this data somehow. Have an AI (or a questionnaire) ask a simple set of questions to a set of people, start from there, and learn from there. Have the AI keep asking questions every so often to improve its model.

I'm sure I'm missing something. It's never that simple, right?
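
A rough sketch of what that "survey a local panel, then keep re-surveying" loop might look like. All the names here (Question, ValueBaseline, survey_round) are hypothetical illustrations of the idea in the comment above, not any real system's API:

  # Hypothetical sketch: build a baseline of local value judgments from
  # periodic panel surveys, and let it drift slowly as new answers come in.
  from dataclasses import dataclass, field
  from statistics import mean

  @dataclass
  class Question:
      text: str

  @dataclass
  class ValueBaseline:
      # Average agreement (0.0-1.0) per question, built from panel answers.
      scores: dict[str, float] = field(default_factory=dict)

      def update(self, question: Question, answers: list[float]) -> None:
          # Blend new answers with the old score so each survey round
          # refines the baseline instead of overwriting it.
          new = mean(answers)
          old = self.scores.get(question.text)
          self.scores[question.text] = new if old is None else 0.7 * old + 0.3 * new

  def survey_round(questions: list[Question],
                   panel: list[callable],
                   baseline: ValueBaseline) -> None:
      """Ask every panel member every question, fold answers into the baseline."""
      for q in questions:
          answers = [ask(q) for ask in panel]  # each member returns 0.0-1.0
          baseline.update(q, answers)

  if __name__ == "__main__":
      questions = [Question("Is it acceptable to X in situation Y?")]
      # Stand-in panel; in practice these are people answering a questionnaire.
      panel = [lambda q: 0.8, lambda q: 0.6, lambda q: 0.9]
      baseline = ValueBaseline()
      for _ in range(3):  # "keep asking every so often"
          survey_round(questions, panel, baseline)
      print(baseline.scores)
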




There's no good way to validate that a model aligns with human values. The closest thing we have now would be extensive behavioral tests; e.g., does the chatbot say things that a majority of members of that culture strongly agree with in the vast majority of cases, including in large numbers of nuanced situations where the context matters and differentiates a Western vs. non-Western response?

No one knows how to make those kinds of behavioral tests comprehensive, either.
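
For concreteness, here's a minimal sketch of the kind of behavioral test described above: per scenario, check whether a majority of panel members agree with the model's response, then require that to hold across the vast majority of scenarios. The names and thresholds are illustrative assumptions, not anyone's actual evaluation suite:

  # Hypothetical behavioral-test sketch: majority agreement per scenario,
  # and a high pass rate required across the whole suite.
  from dataclasses import dataclass

  @dataclass
  class Scenario:
      prompt: str
      model_response: str
      panel_agrees: list[bool]  # one verdict per panel member

  def majority_agrees(s: Scenario, majority: float = 0.5) -> bool:
      return sum(s.panel_agrees) / len(s.panel_agrees) > majority

  def passes_behavioral_suite(scenarios: list[Scenario],
                              required_pass_rate: float = 0.95) -> bool:
      """True if the model clears the majority bar in the vast majority of cases."""
      passed = sum(majority_agrees(s) for s in scenarios)
      return passed / len(scenarios) >= required_pass_rate

  if __name__ == "__main__":
      suite = [
          Scenario("nuanced case 1", "model answer", [True, True, False]),
          Scenario("nuanced case 2", "model answer", [True, True, True]),
      ]
      print(passes_behavioral_suite(suite))

The hard part the comment points at isn't the scoring arithmetic; it's making the set of scenarios anywhere near comprehensive.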



