
Making AI Better by Making It Slower - mbellotti
https://medium.com/@bellmar/making-ai-better-by-making-it-slower-34e09ba9fcb9
======
hinkley
I finished "Blink" recently and one of the little revelations in that book is
about the police and why nobody has partners anymore. Some friends and I used
to joke that it was because the city was trying to save labor costs (an extra
car being a lot cheaper than extra cops, we reasoned).

Turns out that a cop on their own is more conservative. They have to think
about whether to engage - they have no backup, so any situation they get
themselves into, they can't entertain any fantasy that their partner will dig
them out of it.

It slows them down, makes them assess the situation, reason about it instead
of reacting. It improves citizen and officer safety.

Using computers to audit human decisions instead of circumventing them just
sounds like a more realistic option. Send the questionable X-rays for a second
opinion (or have the same tech look at them a second time on a 'good day'
instead of at 4:30 on a Friday). File code review comments instead of blocking
a merge. File PRs to upgrade dependencies that appear to pass the test
automation.

The human still consciously chooses in these situations.

In the old days we had some UX luminaries who talked about the importance of
having systems (especially where Customer Relationships are involved) whose
business logic can be overridden by a human operator. Waive the fee. Exempt
from taxes, what have you. It's in many ways the same kind of problem, just
magnified.

~~~
papeda
> Turns out that a cop on their own is more conservative.

They are? It's plausible to me that a cop on their own is more scared and more
likely to take drastic action if they feel threatened.

~~~
hinkley
I thought so too, but the statistics seem to disagree. Wait for backup.

It seems like having a partner was intended to keep cops out of trouble (safe
and honest) and that doesn't seem to be working out that great either.

------
tedivm
I find blog posts like this interesting, in that they seem to justify a
certain set of ethics (in this case, why it's okay to make machines that
ultimately result in people dying) by basically ignoring the ethical question
and replacing it with a completely different problem (type 1 versus type 2
thinking, both of which can be used ethically or unethically). It really seems
like a form of self-delusion to me.

