
Using Algorithms to Determine Character - denzil_correa
http://bits.blogs.nytimes.com/2015/07/26/using-algorithms-to-determine-character/?ref=technology
======
CPLX
There are multiple problems with these kinds of things, but two of the most
serious -- perhaps fatal -- flaws emerge when you start questioning the
assumptions implicit in these algorithms:

One of the problems is the kind of virtual redlining you can get via
closely correlated metrics that serve as proxies for problematic criteria. For
example, someone's preferences in music or entertainment or food purchasing
could closely correlate with their ethnicity, which in turn could correlate
with lowered access to capital due to historic racial discrimination. You may
be telling yourself you're writing an impartial algorithm, but it's not
impartial; it's just opaque. How do we know that the "prepaid cellular" metric
isn't just a proxy for "black people" or "recent immigrants", for example?
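
The proxy effect is easy to demonstrate with a toy simulation (all the
numbers and feature names below are invented for illustration, not taken from
any real lender): a rule that never sees group membership, only a correlated
flag, still produces sharply different outcomes by group.

```python
import random

random.seed(0)

# Toy synthetic population. The "lender" never sees `group`, only the
# `prepaid` flag -- but the flag is correlated with group membership.
population = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    # Invented correlation: group A uses prepaid 60% of the time,
    # group B only 15%.
    prepaid = random.random() < (0.60 if group == "A" else 0.15)
    population.append((group, prepaid))

def decide(prepaid):
    """A 'group-blind' rule: decline anyone flagged as prepaid."""
    return "decline" if prepaid else "approve"

def decline_rate(group):
    members = [p for g, p in population if g == group]
    return sum(decide(p) == "decline" for p in members) / len(members)

print(f"decline rate, group A: {decline_rate('A'):.0%}")
print(f"decline rate, group B: {decline_rate('B'):.0%}")
```

The rule contains no reference to group at all, yet its decline rates track
group membership almost exactly as hard as the underlying correlation does.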

Likewise, you have the other problem, which is trusting the data in the first
place. We have some stringent laws in this country about a consumer's ability
to see and correct data that leads to negative lending decisions being made
about them. Perhaps it's interesting that "giving up a prepaid wireless
number" is a really bad data point when deciding whether to lend me money.
But what if I never actually had a prepaid phone at all, and some database
somewhere that I can't see says that I did? That's not only a problem, it's
actually illegal.

The article makes a show of taking a critical eye to this method of
evaluating creditworthiness, but somehow doesn't raise two of the really
serious Achilles' heels that are baked into all such schemes and have been
problematic for a while.

~~~
raceyT
Are you saying that if one uses a criterion that incidentally correlates with
race, that is racism? And wouldn't a correction for this itself be a much
more disturbing racist exercise, i.e., profiling every possible data point
against race and forever maintaining such profiles?

~~~
CPLX
If a bank made a decision not to lend to people that subscribed to Spanish
language HBO, would you consider that to be legal? Ethical?

How about if they wrote software with a single criterion for loans, and that
criterion was "don't lend to black people"?

Right, we know that's not OK intuitively. What about an algorithm that
automatically declines all applications from people who went to historically
black colleges?

So then, where exactly do you draw the line? If said algorithm is opaque, does
that make it OK? What if it wasn't intentional, but the algorithm just kind of
turned out that way through machine learning? Is that better somehow? And how
can we tell, and how can we hold the bank accountable for those decisions?

One answer to "where do we draw the line" is to say that we only make credit
decisions based on history and experience with actual credit: things like
payment histories and so on. Both the criteria and the data used to make
those decisions must be transparent, and consumers must have the right to
challenge them.

This, in fact, is the system we currently have. There are reasons for that.

~~~
raceyT
I would consider it not only reprehensible but absurd, as these examples are.

I can't think of any advantages a racist bank would have, can you? This
lender's approach is rather to dig deeper into individuals and give them an
opportunity based on personal data. It seems just as likely that the resulting
correlations could help any given race.

I believe algorithmic attempts to understand people are a form of intelligence
worth supporting, even if they discover things we might not like or make
mistakes along the way. Should they find uncomfortable truths, we can own up,
have a laugh, and grow; it will be a lot easier than fighting the facts.

Prejudice is literally pre-judging, making assertions based on fear without
knowing the facts. Perhaps it's the algorithms themselves that are in need of
protection here.

~~~
flashman
> I can't think of any advantages a racist bank would have, can you?

Banks can, which is why redlining was a thing:
[https://en.wikipedia.org/wiki/Redlining](https://en.wikipedia.org/wiki/Redlining)

------
tristor
I think their algorithm probably works well for people who fit a typical
social mold, but for those of us who are atypical it falls apart. I almost
didn't graduate high school, dropped out of college, never studied, and have
changed phone numbers five times. Based on the minimal information I have
from the article, they wouldn't consider me loan-worthy.

The flip side of this is that in the StrengthsFinder personality assessment,
it lists "Responsibility" as my top strength. I'm a man of my word who has
never reneged on a promise or failed to pay back a debt, even if I have had to
struggle or sacrifice to do so, and consequently I have near perfect credit
and am financially relatively well off compared to the average situation for
someone in my age bracket in the US.

While it's useful and good to seek to classify things into quantifiable
buckets of data, it's also important not to lose sight of the fact that people
are not easily quantifiable, and that any attempt to segment people into
classifications will inevitably treat someone unfairly or misclassify them
because they somehow differ from the typical set.

------
discardorama
FTA:

> One signal is whether someone has ever given up a prepaid wireless phone
> number. Where housing is often uncertain, those numbers are a more reliable
> way to find you than addresses; giving one up may indicate you are willing
> (or have been forced) to disappear from family or potential employers. That
> is a bad sign.

This is such bullshit. I had a prepaid phone for a few months, because my
previous one was broken and the new iPhone was going to be out in a few
months. So I moved to Verizon prepaid: $40/mo, 2GB data, no
taxes/fees/nickels/dimes. It worked great. Best part is: I paid VZ $150 for a
used iPhone 4s and traded it in for $200 credit toward a new iPhone. When the
new iPhone came out, I switched to it and a postpaid plan. And I have stellar
credit.

The point is: just giving up a prepaid phone by itself means nothing. GIGO.

~~~
iamthepieman
Yeah, I've changed numbers six or more times:

1. First cell phone.

2. Changed providers before it was easy to port your number, so I just got a
new phone and a new number at the same time.

3. Moved to an area where there was no coverage, so I cancelled phone #2 and
we consolidated as a household onto just my wife's phone.

4, 5, 6+. Re-activations on a prepaid line that gives me a new number every
time. I will activate my old cell when I'm travelling and forward my business
line to it, but it doesn't make sense to keep it active all the time since I
don't get service at my house. I also work from home, so a business landline
is all I need.

------
nickpsecurity
I think their methods can only be as good as the honesty of the input.
Depending on how they validate, this company might be an easier target for
people who will just cook the books on their scores. I'm hoping that they
thought of that ahead of time.

As far as character goes, there's an old legend in which J.P. Morgan was asked
by Congress on what basis he lends out money. His reply was the person's
character, not their ability to repay. His reasoning in the story was that
people of poor character, even having the money, would make up any excuse to
avoid paying, whereas people of good character would do everything they could
to get the money and pay up. Realistically, ability to pay is a huge
consideration, but the story's lesson about character was wise. Interesting to
see it in action and automated to a degree.

~~~
sarwechshar
This reminds me of The Richest Man in Babylon, which has an example just like
this.

~~~
nickpsecurity
Interesting book. Never heard of it before. Thanks for the reference.

------
zippzom
Are they using machine learning to determine this, or simply a hard-coded
algorithm? I would imagine a combination of both, but I'm very curious how
they generated enough data to train their model.

~~~
sedachv
ZestFinance (former employee here) uses machine learning, and I imagine
Upstart does as well if they are talking about signals. There is a ton of data
you can obtain about people from marketing and third-party credit agencies,
but of course you mostly end up relying on your own ongoing loan portfolio's
performance to train and validate models as time goes on.

Using ML for loan applications is an old idea. There is a whole chapter about
credit scoring using decision trees, genetic algorithms, and neural networks
in Miller's 1990 _Computer-Aided Financial Analysis_.
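
For a sense of what the decision-tree end of that looks like, here is a
hand-rolled scoring stump in that spirit -- the feature names and thresholds
are invented for illustration, not taken from Miller, ZestFinance, or Upstart:

```python
def score(applicant):
    """Toy credit-scoring decision tree; every threshold is invented."""
    if applicant["missed_payments"] > 2:
        return "decline"  # hard stop on payment history
    if applicant["debt_to_income"] > 0.45:
        # Borderline debt load: fall back to length of credit history.
        return "decline" if applicant["years_of_history"] < 3 else "review"
    return "approve"

print(score({"missed_payments": 0,
             "debt_to_income": 0.30,
             "years_of_history": 5}))  # → approve
```

In practice such trees are learned from portfolio performance rather than
written by hand, but the learned artifact has essentially this shape, which is
also what makes it (in principle) explainable to a rejected applicant.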

------
baseballmerpeak
> “Every time we find a signal, we have to ask ourselves, ‘Would we feel
> comfortable telling someone this was why they were rejected?’ ” he said.

Looking at other factors + feelings = trouble

These folks have already been rejected by traditional financing. There is a
fine line between those who barely didn't qualify (but should have) and those
who really didn't qualify (and shouldn't have). Where do you draw that line?

------
bakhy
What scares me about these things is what typically happens when people are
given probabilistic prognoses. Your behavior is analyzed, a system makes a
prediction: "60% chance of XY". Whoever reads the result is primed to think XY
is already true, or almost true. Just look at the way scientific results are
received by the public: qualify them all you like, the impression they leave
seems to me to be stronger than the actual result.

We're already forced in many ways to indirectly signal our responsibility;
e.g., we dress in suits for a reason. This is unavoidable to some extent, of
course. But if, in the future, some trivial constellation of facts which we
cannot influence starts to cast shadows over our possibilities, the world will
become completely schizophrenic.

------
akshat_h
What are the ethics of this? A credit score penalises you too, but at least
it is based on financial history. There might be a slippery slope here. I
can't imagine what would happen if Google were to use something like search
history for, say, insurance or mortgages.

~~~
dsg42
What are the ethics of credit scores? Credit scores are a terrible way to
judge whether someone is creditworthy. Maybe this is a little better?

