The idea that this is only an issue of disguised pedestrians is a red herring that should not stop people from considering the broader implications of the fragility of vision and other ML systems. When a system does not always function according to its intended purpose, it is sound engineering judgement to ask whether this has implications beyond the specific failure cases that have been found, and there have been tragic outcomes when the people in charge found it expedient not to ask.

In the case of ML, the principle that systems can generalize appropriately beyond their training sets is central, so anything that casts doubt on the generality of that capability needs to be taken seriously. You can certainly hold the opinion that this will not turn out to be a major problem, but the burden of proof lies with those claiming that the systems (after modification, if necessary) are safe enough, and avoiding the question is the opposite of discharging that burden.


