
IKEA dives into world of Artificial Intelligence - morehuman
https://www.thememo.com/2017/05/02/ikea-ai-survey-ikea-artificial-intelligence-ikea-smart-home-furniture/
======
snailletters
I really enjoyed the video presentation in the Do you speak Human? survey [1].
However, I expected the survey not to present everything as black and white,
yes or no. For many of the questions I wanted an answer that was somewhere in
between. I'm worried that the limited choices will affect the results.

It's a little worrying that people think an AI should "reflect your values and
worldview." Maybe a more appropriate model for artificial intelligence is how
it's described in Iain M. Banks' Culture series [2]:

> Although drones are artificial, the parameters that prescribe their minds
> are not rigidly constrained, and sentient drones are full individuals, with
> their own personalities, opinions and quirks.

1\. [http://doyouspeakhuman.com](http://doyouspeakhuman.com)

2\. [https://en.wikipedia.org/wiki/The_Culture#Artificial](https://en.wikipedia.org/wiki/The_Culture#Artificial)

~~~
AndrewKemendo
_It's a little worrying that people think an AI should "reflect your values
and worldview."_

That's what the people who are scared of AGI have been banging on about
forever. How do you give an AI "human values"? It's nonsensical.

~~~
taneq
There are a couple of ways of interpreting that phrase. One is as you mention:
how do we explain to a superhuman AI that turning the entire solar system into
paperclips or biomass or whatever is actually bad? The other is the perception
in some groups that AI should believe what they believe, regardless of whether
that belief is borne out by observed facts.

(A recent example was a system that predicted recidivism rates among ex-cons,
which concluded from its observed data that a particular demographic was much
more likely to re-offend. Rather than accepting the result and trying to solve
the underlying problem, there was an outcry that the program was 'wrong' and
must have been fed biased data.)

~~~
Chris2048
You could argue that kind of outcry is less about legitimate belief, and more
about political skepticism and/or suppression of demographically inconvenient
truths.

~~~
Jtsummers
While some of those may be "inconvenient truths", punishment based on race is,
strictly speaking, illegal in the US. Proxies for race (such as where they
live) aren't much better.

Another part of the problem is the historical over-policing in the US of
minorities (particularly Black and Hispanic people) and of the lower economic
classes.
It's easy to view them as more likely to be recidivists until you realize that
the neighborhoods they're going back to have more than normal police presence
and they're more likely to be picked up for "nuisance" crimes like jaywalking,
public drunkenness, and loitering. This will skew the numbers substantially in
these sorts of models when the police are (largely) not applying the same
standards to white people or those from higher economic classes (or whose
neighborhoods are less tolerant of such behavior so it's not seen as often to
begin with).

~~~
Chris2048
When considering AI systems and the future, the current legal system, or even
a specifically American one, seems irrelevant.

> Proxies for race (such as where they live)

A proxy is only a proxy if there is bad intent; otherwise it is just
correlation.

> This will skew the numbers substantially

Only if your analysis is that simplistic; otherwise you can afford to look
into the reasons the discrepancy exists.

You suggest that over-policing, or higher police scrutiny, results in more
arrests. Why does it not also result in fewer "nuisance" crimes?

------
xg15
> _It poses questions that refer to how AI might find purpose in your home
> like “should your AI fulfil your needs before you ask?” and “should your AI
> prevent you from making mistakes?”_

Those two questions seem worded to elicit a positive answer. They don't give
any hint as to how exactly "mistake" is defined, or how the AI can be sure you
actually have a need.

Why not use more neutral formulations, e.g.:

\- Should an AI perform certain actions on its own?

\- Should an AI, in certain situations, prevent you from performing an action?

~~~
naasking
Exactly. These questions would be silly even if you substituted a fully
sentient human. Should your butler prevent you from making mistakes?

