
How Should We Program Computers to Deceive? - fortepianissimo
http://www.psmag.com/navigation/nature-and-technology/technology-deception-elevator-crosswalk-programming-robots-lie-89669/
======
loup-vaillant
> _In the 1960s, the hardware that comprised the byzantine switching systems
> of the first electronic phone networks would occasionally cause a misdial.
> Instead of revealing the mistake by disconnecting or playing an error
> message, engineers decided the least obtrusive way to handle these glitches
> was to allow the system to go ahead and patch the call through to the wrong
> number. Adar says most people just assumed the error was theirs, hung up,
> and redialed. “The illusion of an infallible phone system was preserved,” he
> writes in the paper._

Then later:

> _One relatively benign class of examples occurs when an operating system
> fails in some way and a piece of software is programmed to cover up the
> glitch. The misdials of the early phone switching system fall into this
> category._

Really, that depends on how you're covering it up. The case of the telephone
network is clearly _malicious_. It didn't do much harm, but it _was_ harmful,
and the benefit went entirely to the owner of the phone network: they
effectively caused more misdials than necessary, wasting people's time in the
process… then let the users believe the mistake was theirs! (The lie was one
of omission, but no less effective.)

Even in cases where the short-term benefits are clear, like placebo buttons or
progress bars, I fear the erosion of trust they could cause in the long term.
People already see their computers as independent thinking agents. Some of
them are _afraid_ of their computer, but use it anyway because they've become
so dependent on it. Deceitful interfaces will likely reinforce this learned
helplessness.

This kind of insidious, diffuse damage may far outweigh any observable
benefits. The end doesn't justify the means when the cure is worse than the
disease.

