
Courts reshape the rules around AI - morisy
https://www.muckrock.com/news/archives/2019/oct/22/courts-could-put-important-limits-on-the-use-of-ai/
======
akersten
This is a pattern we see time and time again: companies hiding behind "we
can't tell you how it works, it's a trade secret, you just have to trust us
that it does." And these companies land exclusive government contracts. Police
drug tests, electronic voting machines, and now police face recognition.

We need to demand transparency of any company that receives this kind of
special treatment, and require them to disclose statistical analysis of their
solution at a minimum. How often is it wrong? How was it verified? If they
can't do that, then no deal, and no taxpayer funded boondoggle.
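The "how often is it wrong" disclosure is straightforward to produce if a
vendor has a labelled verification set. A minimal sketch, with hypothetical
numbers, of the two error rates such a disclosure would need at a minimum:

```python
# Sketch of the disclosure being asked for: error rates derived from a
# labelled verification set. All counts below are hypothetical.
def error_rates(tp, fp, fn, tn):
    """Return (false positive rate, false negative rate) from a confusion matrix."""
    fpr = fp / (fp + tn)  # share of true negatives wrongly flagged
    fnr = fn / (fn + tp)  # share of true positives the system missed
    return fpr, fnr

fpr, fnr = error_rates(tp=90, fp=30, fn=10, tn=870)
print(f"false positive rate: {fpr:.1%}")
print(f"false negative rate: {fnr:.1%}")
```

Even these two numbers on their own would let a court or a contracting agency
ask the next question: wrong how often, and for whom?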

------
anon1m0us
Three of the cases cited are incidents where state organizations are buying
AI, which then benefits that state organization at the expense of its
citizens: people were arrested; others got fewer benefits.

These systems are black boxes. The software companies have a financial
incentive to sell them. The programmers have a financial incentive to get the
customer what they _want_, not what is honest and true. If this software meant
the customer would have to pay out _more_ benefits, how many states would buy
it?

The same thing happens when the product is _not_ AI. AI is a product.
Manufacturers of the product are liable. The product should be open to
investigation.

When Ford Pintos were killing people, the Ford Pinto could be examined. The
737MAX can be examined.

AI can't be examined. A lot of the time, the decisions it makes can't even be
_explained_.

Companies are using AI as a shield. Someone here said the other day that they
think people are actually making a lot of the decisions Google makes, and
saying it was algorithmic absolves them of the responsibility to explain the
decision.

This is not a good way forward. You can't say, "I don't know _why_ the machine
is hurting people."

It's hurting people. Shut it off.

------
YeGoblynQueenne
"Algorithmic Decision System" is not a good term. Some of the systems that are
described this way do not make decisions, strictly speaking. For example,
facial recognition systems don't make decisions- they make _identifications_.
They are classifiers, yes? Planners, game-playing algorithms, decision trees
and decision lists, etc, those are systems that are commonly thought of as
making "decisions"- but those are very rarely the AI systems under scrutiny
these days.

Take for instance a system that is used to determine whether a person is at
risk of recidivism. The system will cough up some number, probably a float
from 0 to 1. The number will be _interpreted_ as a probability that the person
will recidivate. Then, based on this _interpretation_, a decision will be made
by the person or persons using the system whether to treat the person as
having a high risk of recidivism or not. The system hasn't decided anything at
that point- it's the person using the system that has made a decision.
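The division of labor above can be made concrete in a few lines. This is a
hypothetical sketch, not any real vendor's system: `risk_score` stands in for
whatever the model computes, and the threshold is the part supplied by the
humans using it.

```python
# Hypothetical sketch: the model emits a score; the "decision" is a
# human-chosen threshold applied to that score.
def risk_score(features):
    # Stand-in for the vendor's model: returns a float in [0, 1] that
    # gets *interpreted* as a probability of recidivism.
    return 0.62  # hypothetical output

HIGH_RISK_THRESHOLD = 0.5  # chosen by the agency, not by the model

score = risk_score(features={})
decision = "high risk" if score >= HIGH_RISK_THRESHOLD else "low risk"
print(score, decision)
```

Moving `HIGH_RISK_THRESHOLD` from 0.5 to 0.7 changes who gets labelled "high
risk" without the model changing at all, which is exactly why the decision
sits with the people operating the system rather than with the classifier.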

The matter is complicated somewhat by the existence of systems that
_incorporate_ AI algorithms in a more general automated decision process. For
example, self-driving cars use image recognition algos to identify objects in
their path but navigation decisions are not taken by the image recognition
algos! However I'd wager that those kinds of integrated systems are not what
most people think of when they speak of "algorithms" making "decisions". But I
may well be wrong.

~~~
tyingq
_" For example, facial recognition systems don't make decisions- they make
identifications."_

Maybe. Not everyone enjoys the presumption of innocence. In the US, for
example, probationers and parolees. I could see facial recognition being judge
and jury for them, with little recourse, and serious consequences in the
balance.

------
AlanYx
The title is misleading -- few or none of the examples given in this article,
as far as I can tell, use AI or machine learning. They're just "automated
systems" in the sense of computer systems that execute regular business rules.
For example, the system at issue in the K.W. v. Armstrong case discussed in
the article wasn't an AI system, it was just a pretty amateurish ad-hoc Excel
spreadsheet.

The report quoted in the article (the 2019 AI Now "Litigating Algorithms"
report) also shares the same basic problem, making no serious attempt to
distinguish between AI and non-AI systems.

~~~
kbenson
It might be that the title is too narrow, then, not that it doesn't apply to
AI/ML. I imagine anything that applies to an automated system or algorithm
will also apply to AI/ML, as those are just very sophisticated examples of
that.

