I really enjoyed the video presentation in the Do you speak Human? survey [1]. However, I was expecting the survey not to present everything as black and white, yes or no. For many of the questions I wanted an answer that was somewhere in between, and I'm worried that the limited choices will affect the results.
It's a little worrying that people think an AI should "reflect your values and worldview." Maybe a more appropriate approach to modeling artificial intelligence would be how they are described in Iain M. Banks' Culture series. [2]
> Although drones are artificial, the parameters that prescribe their minds are not rigidly constrained, and sentient drones are full individuals, with their own personalities, opinions and quirks.
There are a couple of ways of interpreting that phrase. One is as you mention - how do we explain to a superhuman AI that turning the entire solar system into paperclips or biogas or whatever is actually bad? The other is the perception in some groups that AI should believe what they believe, regardless of whether that belief is borne out by observed facts.
(A recent example was a system that predicted recidivism rates among ex-convicts and concluded, based on its observed data, that a particular demographic was much more likely to re-offend. Rather than accepting the result and trying to solve the underlying problem, there was an outcry that the program was 'wrong' and must have been fed biased data.)
You could argue that kind of outcry is less about legitimate belief, and more about political skepticism and/or suppression of demographically inconvenient truths.
While some of those may be "inconvenient truths", punishment based on race is, strictly speaking, illegal in the US. Proxies for race (such as where someone lives) aren't much better.
Another part of the problem is the historical over-policing in the US of minorities (particularly blacks and hispanics) and of the lower economic classes. It's easy to view them as more likely to be recidivists until you realize that the neighborhoods they're going back to have a heavier-than-normal police presence, so they're more likely to be picked up for "nuisance" crimes like jaywalking, public drunkenness, and loitering. This skews the numbers substantially in these sorts of models, because the police are (largely) not applying the same standards to white people or to those from higher economic classes (or whose neighborhoods are less tolerant of such behavior, so it's not seen as often to begin with).
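To make that skew concrete, here is a minimal sketch (not from the survey or any real system - the group names, detection rates, and re-offense rate are all made up for illustration) showing how unequal detection alone can make two groups with the same underlying behavior look very different in the recorded data such a model would be trained on:

```python
# Hypothetical simulation: both groups re-offend at the same true rate,
# but offenses in the heavily policed group are recorded more often.
import random

random.seed(0)

TRUE_REOFFENSE_RATE = 0.30   # identical underlying behavior for both groups (assumed)
DETECTION_RATE = {           # assumed chance an offense is actually recorded
    "heavily_policed": 0.90,
    "lightly_policed": 0.40,
}
N = 100_000                  # people simulated per group

for group, detection in DETECTION_RATE.items():
    recorded = 0
    for _ in range(N):
        reoffended = random.random() < TRUE_REOFFENSE_RATE
        # An offense only shows up in the data if it is detected/recorded.
        if reoffended and random.random() < detection:
            recorded += 1
    print(f"{group}: observed recidivism = {recorded / N:.1%} "
          f"(true rate = {TRUE_REOFFENSE_RATE:.0%})")
```

With these made-up numbers, both groups re-offend at 30%, yet the heavily policed group shows roughly twice the recorded recidivism of the other - and a model trained on arrest records would faithfully reproduce that gap.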
When you dig into it, though, these are really the same problem. Despite what we like to tell ourselves, almost nothing we do is based on facts - so it's recursive.
> It poses questions that refer to how AI might find purpose in your home like “should your AI fulfil your needs before you ask?” and “should your AI prevent you from making mistakes?”
Those two questions seem to be worded to elicit a positive answer. They don't give any hint of how exactly "mistake" is defined, or how the AI can be sure you actually have a need.
Why not use more neutral formulations, such as:
- should an AI perform certain actions on its own?
- should an AI in certain situations prevent you from performing an action?
1. http://doyouspeakhuman.com
2. https://en.wikipedia.org/wiki/The_Culture#Artificial