
> CI

That'd be pretty cool, actually! A little... maybe ironic? How many stories are there about "they didn't stop to ask whether they should, only whether they could", and then we go and outsource that check to an AI system. Like, I love it, but also, sus.

> bugs

Yep! I think positronic brains would notionally use a heuristic system ("human detector says 89%"), and AFAIK so do our AI systems. That said, your larger point still stands: what happens when such a system either fails, or is miscalibrated, or is calibrated in a sus way?

(such as early facial recognition not working on BIPOC faces... because the devs used themselves as the test subjects and weren't BIPOC)
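
To make that failure mode concrete, here's a minimal, purely hypothetical sketch (the detector, threshold, and scores are all invented, not from any real system): a thresholded "human detector" whose cutoff was tuned only on the dev team's own data quietly classifies everyone outside that sample as "not human".

    # Hypothetical sketch: a thresholded "human detector" whose calibration
    # reflects only the data it was tuned on. All numbers are invented.

    def is_human(confidence: float, threshold: float = 0.85) -> bool:
        """Decide 'human' when the detector's confidence clears the threshold."""
        return confidence >= threshold

    # Made-up detector scores for two groups. The threshold was picked so the
    # first group passes; nobody checked how the detector scores the second.
    scores = {
        "group_in_dev_sample": [0.97, 0.93, 0.89],
        "group_not_in_sample": [0.78, 0.71, 0.82],
    }

    for group, vals in scores.items():
        print(group, [is_human(v) for v in vals])
    # group_in_dev_sample [True, True, True]
    # group_not_in_sample [False, False, False]  <- "not human" by the programmed rule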

I'm almost certain there's at least one Asimov story on this issue; I know I've read other SciFi about it too, although I think it's much more common to have the AIs act "better than the humans" rather than vice versa. Something like: "The rules you programmed in say Group X are humans, even tho you don't treat them that way." Usually (I think?) when it's "Group X aren't humans by the programmed rules", it's apocalyptic because no one / almost no one fits.

I think you could probably write a cute short story about robots anthropomorphizing a lot of things in order to catch all the odd human edge-cases ("not all humans have a face").


