
I would say that it means this: whatever procedures we build for taking pictures of "known criminals" and running recognition against someone in your store, they need to be designed, implemented, and carried out by people who are aware at every stage that there is a good possibility they have the wrong person. How would you want, say, your grandma to be treated if someone wrongly but uncertainly matched her to a criminal's picture? Treat that person that way.

This is hard; we generally do the opposite, especially in racialized ways in the USA.

AI systems are often promoted as a solution to this, as if they somehow avoid human bias and mistakes. I think your comments even revealed that kind of thinking. They should not be thought of that way.
