
> We can and should, however, discuss the more immediate related dangers of our real-world machine learning, such as self-reinforcing bias.

Quoting to emphasize.

It is one thing to acknowledge that there is a non-zero probability that an uncontrollable AI might be created, yet argue that we have other things to worry about and so should not hand-wring over it.

The strong counterpoint to that hand-wringing is that we should be working to better understand the behavior of the "weak" AI we have now. We need to grasp where it does and doesn't work, a difficult problem to "sell" since it lacks the alien-like doomsday aspect or predatory agency.

These problems are of our own creation, and so we must take ownership of them.



