
No, a killbot is a way to kill targets while trying to reduce the number of 'false positives', aka collateral damage. But precisely because it does that, the weapon is seen as effective, so it gets deployed far more often, and the total collateral damage eventually rises. That's why killbots are bad.



that is certainly a very plausible and worrisome risk, and i agree that facial recognition doesn't require strong ai

but i think there are other, possibly larger risks that result from strong ai in weapons, and plausible incentives to put it there


Can you give some examples of those risks?


like biological weapons, their collateral damage might be a lot less predictable, because it's hard to predict what something that's smarter than you will do



