

The scored society: due process for automated predictions [pdf] - anigbrowl
https://digital.law.washington.edu/dspace-law/bitstream/handle/1773.1/1318/89WLR0001.pdf

======
tacon
I took Caltech's "Learning from Data"[0] (machine learning) MOOC a couple of
years ago. Of course, one of the classic ML applications is loan approval. In
one lecture, Prof. Abu-Mostafa mentioned he was a consultant to a large
financial organization and his project built a successful loan selection
system. But then the organization's CEO asked "Why did you turn down these
loans? Under the Fair Credit Reporting Act, we have to tell the applicants why
they were rejected." Of course, he couldn't say "because that applicant was
not on a high enough peak in a 10,000-dimensional space." A question came back
from the audience, "What did you end up doing?" and the professor told us,
sheepishly, "I can't tell you." And so it goes...

[0]
[https://work.caltech.edu/telecourse.html](https://work.caltech.edu/telecourse.html)

~~~
dragonwriter
Seems to me that you ought to be able to apply techniques much simpler than
the model that made the decision: search for a set of changes to the
application that would have flipped it from denied to approved, and report
those changes as the reasons the application was denied.
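A minimal sketch of that idea in Python, treating the approval model as a black box and greedily searching for the smallest feature changes that flip the decision. Everything here (the linear `score` stand-in, its weights, the feature names, the threshold, and the candidate tweak sizes) is invented for illustration, not the method from the lecture:

```python
def score(applicant):
    # Stand-in for the bank's opaque model (weights are made up).
    return (0.4 * applicant["income"] / 10_000
            + 0.3 * applicant["credit_score"] / 100
            - 0.5 * applicant["debt"] / 10_000)

APPROVAL_THRESHOLD = 2.0

# Candidate adjustments to try, as (feature, step) pairs.
TWEAKS = [("income", 5_000), ("credit_score", 50), ("debt", -5_000)]

def denial_reasons(applicant, max_steps=10):
    """Return the total change per feature that would have led to approval."""
    if score(applicant) >= APPROVAL_THRESHOLD:
        return {}  # approved: nothing to explain
    current = dict(applicant)
    for _ in range(max_steps):
        # Greedily apply the tweak that raises the score the most.
        field, step = max(
            TWEAKS,
            key=lambda t: score({**current, t[0]: current[t[0]] + t[1]}),
        )
        current[field] += step
        if score(current) >= APPROVAL_THRESHOLD:
            break
    # The accumulated deltas are the reported "reasons" for denial.
    return {f: current[f] - applicant[f]
            for f in applicant if current[f] != applicant[f]}
```

For example, `denial_reasons({"income": 20_000, "credit_score": 500, "debt": 20_000})` returns `{"debt": -15_000}`, i.e. "denied because outstanding debt is too high" — an FCRA-style reason without ever opening the model.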

