
ML is a huge field outside of modelling humans and their behavior: image recognition of vehicles, financial data prediction and analytics, and weather forecasting, to name a few examples. Those don't draw scrutiny. The problem comes with generalizing humans, generalizing using biased data, and applying generalized algorithms in areas where they can cause a lot of harm. I think these researchers should properly be placed under the microscope, since their work has the potential to be very harmful to society. I do not think they should be subject to death threats or loss of income or whatever the social media mob throws at them these days, but I also don't think researchers should be cavalier about creating algorithms that generalize humans without taking very careful steps to avoid bias in the end result.


I think it's more appropriate to hold the companies, governments, and organizations that use these algorithms on the general public under scrutiny. Research that doesn't materially impact anyone shouldn't be placed under such scrutiny.

I understand that research drives these applications and vice versa, because companies and governments are funding a lot of this research (face recognition research specifically saw significantly increased funding after 9/11). But it's the companies and governments who should be scrutinized and put under pressure, rather than researchers who are trying to get ahead in academia, publish their next article, or chase funding incentives.



