
Some of this would be considered a feature by companies. It's more defensible to have an unknowable AI deciding to do illegal things than programmers hard-coding illegal things. Which really boggles my mind. When my kid does something illegal, I'm held liable. When an ML algorithm programmed by a team of people does something illegal, apparently nothing can be done about it!



I'm not sure the courts will see it this way. In any case, some fields are regulated: for credit scoring at banks in my country, it's the humans who have to make the final decision, and algorithms can only provide an input. In practice the input is "approve/deny", but it's still a human who is making a "decision" based on this "input".


I don't really see this as a unique situation, or one where people are exempt from liability. If someone starts a fire, they are held liable; that's analogous.

On the other hand, the gap between a team of people and a particular person with mens rea isn't unique to AI either.


>When my kid does something illegal I'm held liable

This isn't true.



