
Viral Tweet About Apple Card Leads to Goldman Sachs Probe - gyc
https://www.bloomberg.com/news/articles/2019-11-09/viral-tweet-about-apple-card-leads-to-probe-into-goldman-sachs
======
kyrieeschaton
I guess it worked for him to throw a hissy fit, but there's absolutely no
reason to actually believe they're conditioning on gender.

It could be "it's a new product, we randomly assign credit limits to see how
it affects behavior".

It could be "it's a community property state and we're overexposed to this
household if we give the second card the same limit as the first".

It could, realistically, be almost anything except for _evil bankers_ deciding
to use illegal criteria to underwrite that has a side effect of limiting the
amount that can be charged to the account each month (you know, how they
actually make money).

Oh, random CSRs don't get a pithy explanation of a multivariate nonlinear
underwriting decision to poorly convey to customers? That's kind of
precedented!

~~~
elvinyung
I mean, don't you think it's kind of a strawman to assume anyone thinks
machines are explicitly sexist? It seems much more likely that whatever method
they use to do feature selection on this obviously high-dimensional data just
happens to end up picking something that was a proxy for gender.

You can be implicitly biased without being aware of it. This is true for both
humans and algorithms.

~~~
gwd
> It seems much more likely that whatever method they use to do feature
> selection on this obviously high-dimensional data just happens to end up
> picking something that was a proxy for gender.

Yes, this is exactly what everyone thinks happened.

Nobody thinks people at Goldman-Sachs wrote

    if (applicant.gender == "F")
        limit /= 20

somewhere in their algorithm.

But regardless of _how_ the thing happened, if millions of people are treated
significantly differently for no reason other than their plumbing, that's a
major problem. People have been talking about this "accidental proxy for
gender" for years now; there's absolutely no excuse for no doing a basic
sanity check to make sure that this kind of thing isn't happening.

edit: typo

~~~
kyrieeschaton
"for no other reason than their plumbing" is _exactly_ what you just excluded
in the prior sentence.

------
rndgermandude
Other than him being very outraged and claiming gender-based discrimination
again and again, is there actually anything to suggest the algorithm is biased
on the gender axis, as opposed to any other axis?

Just to be clear: I am not dismissing the possibility that the algorithm
(meaning the training data, really) is gender biased, it just isn't clear from
what I have seen in the tweet storm that this is necessarily the case.

E.g. I had an ex girlfriend who had a slightly lower income and slightly worse
credit score (well, German Schufa score) than me, and yet she got offered like
twice as much credit when she applied for a card. I am guessing (just guessing)
that this was due to her having paid off e.g. a car loan in the past and being
generally more consumerist than me, while I never had taken out any major
loans.

~~~
function_seven
> is there actually anything to suggest the algorithm is biased on the gender
> axis, as opposed to any other axis?

None of us are allowed to know that. That's the big problem here. They
delegate the decision to a black box that cannot be questioned.

When the process is set up like that, I think it's fair to assume the worst
case scenarios and put the burden on the company to prove otherwise.

Adverse inference is a sensible way to combat secret decisions like this.

~~~
setpatchaddress
It’s notable that a lot of mid-20th-century fear surrounding the increased use
of computers was centered around this exact scenario: a black box, the
judgement of which cannot be questioned, deciding your fate, with no recourse.

We all scoffed at this for a long time. ML makes it real, apparently.

------
minimaxir
The original Twitter thread is still being updated, and it's a doozy.

This is not an isolated incident.

[https://twitter.com/dhh/status/1193240508845510656](https://twitter.com/dhh/status/1193240508845510656)

[https://twitter.com/dhh/status/1193242909111398401](https://twitter.com/dhh/status/1193242909111398401)

~~~
DoreenMichele
I haven't read the whole thread, but I am seeing him bitch about her credit
score being higher -- _from a single agency._ There are other agencies which
may have different info.

I know nothing about this particular credit card. I'm not as up on credit
stuff as I once was (and the world has changed a lot since then). But when I
was a homemaker, I had a credit card in my name with a much higher limit than
my husband had on any of his cards. That's not exactly the norm.

There are various factors that go into this. He's not wrong to suggest that
there is a very big problem with employees having no idea what went wrong. I'm
less confident that it is reasonable to infer gender is the entire
explanation.

~~~
w4
> _I haven't read the whole thread, but I am seeing him bitch about her
> credit score being higher -- from a single agency. There are other agencies
> which may have different info._

If you read the thread, he was specifically told the Apple Card uses
Transunion, which is why they checked with Transunion, and found her score was
higher on that report:
[https://twitter.com/dhh/status/1192945415538106369?s=21](https://twitter.com/dhh/status/1192945415538106369?s=21)

~~~
DoreenMichele
He was also repeatedly told "It's the algorithm, man! And I have no clue
what's in it!"

I worked for a Fortune 500 company at one time. Lots of entry level employees
were not exactly reliable sources of info about how decisions got made there.

You may or may not be talking with an entry level employee in their call
center, but you probably aren't talking to a departmental head or member of
the C suite.

~~~
heartbreak
Let’s assume that the tweet author is reasonably well-versed in business
operations, since he is himself a very successful businessman.

~~~
DoreenMichele
Sure.

Let's assume I am very well versed in other pertinent domains of knowledge,
like social psychology and the tendency for people to get mad as hell about
social justice issues and leap to ugly conclusions that fit their SJW
narrative about evil in the world that can be conveniently lumped under a one
word heading, like _sexism._

Let's further assume that this is actually actively counterproductive, so it's
reasonable to point out that correlation is not causation and it's unhelpful
to insist on a particular conclusion you cannot prove.

I already noted he's right to be outraged at the situation and critical of the
black box nature of the decision. I'm just not comfortable with him ranting
that it's clearly and obviously due to sexism.

~~~
kennywinker
There are many people chiming in that they have had the same experience, and
nobody saying they had the opposite. If the credit limit difference were
unrelated to sex, there would be a random distribution of couples with the
inverse experience.

So you sayyyy that he flew off the handle because he's an SJW or whatever, but
because his initial assumption continues to be proven correct as more data is
accumulated... it seems like you are wrong that it was an overreaction.

~~~
DoreenMichele
_So you sayyyy that he flew off the handle because he's an SJW or whatever,
but because his initial assumption continues to be proven correct as more data
is accumulated... it seems like you are wrong that it was an overreaction._

That's basically a dismissive personal attack.

One of the most frustrating and crazy-making aspects of participating on HN as
openly female is the frequency with which one must politely endure phenomenal
open disrespect from people trying to position themselves as pro women's lib
while violating the guidelines here concerning how to engage respectfully with
other members. The only thing more crazy making is that there tends to be hell
to pay should a woman dare to point it out or otherwise try to defend herself.

For me, it is made more bearable by the quiet support of the many people who
upvote my comments and posts, flag the worst replies and comment thoughtfully
on pieces I submit.

Yes, sexism is very much alive and well. I get to experience it on a daily
basis.

It still does little to no good for powerful men to engage in public white
knighting and level accusations they cannot back up.

I will note we are reading _his_ tweets on HN, not his wife's. We are
discussing the opinions of a powerful man, not a woman. We are reading them
largely because he is a powerful man, not because he can back up his
assertions.

Discussions of this sort are sometimes a case of "two steps forward, one step
back." But as a woman participating in them, they all too often feel like a
dystopian bit of theater in which men get to claim virtues they don't have and
treat a woman badly while loudly proclaiming themselves against this evil
thing called _sexism._

~~~
kennywinker
Not all attempts by men to advocate for equality or call out bullshit are
“white knight”-ing. White knighting implies that it is unwanted: charging in
to “rescue” someone who does not want to be rescued. It is legitimate for a
man to call out a bias when he sees it. To ignore it and leave it for women to
point out is... frankly helping maintain the status quo.

I’m sorry you found my comment dismissive and a personal attack. I found your
use of the term “sjw” dismissive of the issue, since it’s such a loaded term.
So my tone was a bit... glib... in reaction to that.

I also definitely didn’t read your username, so don’t take anything I said to
be a reaction to your female handle. I definitely assumed a male writer (which
is its own problem).

~~~
DoreenMichele
_I also definitely didn’t read your username, so don’t take anything I said to
be a reaction to your female handle. I definitely assumed a male writer (which
is its own problem)._

It is, in fact, a much larger problem.

To my mind, white knighting is about men playing hero in order to enhance
their ego and public reputation as the primary or sole goal such that actually
addressing sexism is not only incidental, it's actually counter to their goal.

Being chewed out by you and lectured about how I'm wrong to find any of this
offensive amounts to _mansplaining_.

At every turn, no matter how much men theoretically decry the existence of
sexism in the world and pretend to fight against it, when push comes to shove,
they expect to be treated with respect by women while not themselves being
respectful to women. That expectation amounts to demanding deference from
women.

Start by working on treating actual women you are actually interacting with in
the here and now with actual respect instead.

That includes not assuming everyone you speak with on HN is male. If you don't
know, don't assume. That assumption based on the odds is a fundamental part of
sexism, racism, etc. It's a really huge issue.

If you really want to see change in the world, get with the man in the mirror
and work on his bad habits. He's the person you have the most control over.

If every man who ever beat his chest about how sexism is a bad thing spent
more time working on his bad habits, things would change.

Instead, what happens is every time I comment, multiple people treat me like
shit and then come up with justifications for their behavior and reasons why
the problem is me and then fail to see the irony in decrying sexism while
basically telling me "Shut up, woman." in the same breath.

~~~
badcede
> If every man who ever beat his chest about how sexism is a bad thing spent
> more time working on his bad habits, things would change.

You can say that again. This place and others would be unrecognizable.

------
purple_ducks
The response to codinghorror's "then don't use the Apple Card? Solution seems
obvious" is on point:

> This is such a shallow, disappointing take. If we relegate all
> responsibility for discrimination to the individuals discriminated against,
> nothing is going to change! Individual action against structural problems is
> INSUFFICIENT.

~~~
scrollaway
The difference between a solution and a workaround. "Not using the product" is
a workaround. A solution addresses the root of the problem, and the root of
the product's problem is seldom "The consumer bought the product".

~~~
zamalek
In theory it corrects the problem too, as Apple would lose customers. In
practice, Apple has customers no matter how badly they screw up.

------
mikestew
Since we are slinging anecdotes around, starting with TFA, my wife and I file
jointly, have what I believe to be a good credit score. I filled out what
little form there is for both of our cards. Put down the same income, etc. I
think our credit limits are the same (need her phone to verify), but she got
13% and I got 18% APR. Now how the hell is there a five-point difference?
Not that it matters because they both get paid off, but WTF? (And 18%;
seriously, GS?)

But on topic, _she_ got the considerably better interest rate.

------
slg
>“Any algorithm, that intentionally or not results in discriminatory treatment
of women or any other protected class of people violates New York [and
federal] law.”

I worry about this a lot with the growing importance of algorithms and machine
learning. You can't just not actively program the thing to discriminate and
assume that is enough. You have to specifically program it to not
discriminate.

~~~
taway87
So, "credit realism", to match "IQ realism"?

I see this a lot, and the background assumption seems to be "in the real
world, minority group A is actually riskier, dumber, or objectively worse in
some other way, so in order to comply with anti-discrimination, we have to
introduce special cases."

Maybe instead we should start with the assumption that women are _NOT_
riskier, dumber, or objectively worse, and fix the likely bug, instead?

~~~
MaupitiBlue
Because the goal was to be realistic, not to assuage white guilt.

------
tootie
One data point really isn't enough to draw a conclusion and I actually doubt
it's true. GS is extremely scrupulous when it comes to avoiding liability and
they probably put a ton of diligence into their risk model. It's possible
there's some emergent gender bias but we'd need a better show of proof.

~~~
Terretta
> _One data point really isn't enough to draw a conclusion_

My wife's credit history is longer and historically her score higher because I
used no revolving credit until recently, and not using credit cards counts
against you. Also married filing jointly...

For the Apple Card, I was given her limit _several times_ over.

I did notice that the expected limit is shown before the hard pull on credit.
This means they've got a pretty good idea before getting the latest credit
report.

> _GS ... probably put a ton of diligence into their risk model._

To your point about the risk model, I generally think the credit score the
bureaus give us is wrong, while I think the decision GS made on the card
limits is probably plausible ... if there's some chance or probability we
might split up.

For instance, we work in different states, so data patterns might look like we
are already separated? I also don't know if she put _her_ income or
_household_ income. My income has been several times hers since long before we
got together.

Rather than gender bias, I would imagine that given the probability of
divorces at a certain age, executive level, and income bracket, that would put
a thumb on the scale for ... what if you were not married filing jointly? Who
makes more? What cash payment can they carry without going broke? Weighted
that way, we _should_ have different limits, and it's not a gender thing. This
is the kind of correlation humans might not come up with, but ML probably
would.

If this is purely actuarial, the decision might be correct, while not feeling
moral.

~~~
judge2020
> To your point about the risk model, I generally think the credit score the
> bureaus give us is wrong, while I think the decision GS made on the card
> limits is probably plausible

I find this likely too. The first time I applied, I got denied, and the credit
score in the email was considerably lower (about 100 points) than what is
shown on Credit Karma.

~~~
sgerenser
This is a common misconception. There is no such thing as _a_ credit score.
There's FICO 2008, FICO 2015, FICO for car loans, FICO for revolving credit,
VantageScore, and so on and so forth. And then each of these can be based on
data from any of the three different agencies, which may be significantly
different. So having two different credit scores that are off by 100 points
isn't even remotely unusual. Many times even the scale itself is different
(max of 850 vs 950 on another model).

------
brandonmenc
Credit card company extends known rich guy tons of credit - news at 11!

Every credit card company probably maintains a list of "big fish" - i.e.
famous people - and grants them all a huge credit line.

It's a non-story.

------
judge2020
There are two arguments here: A. "is the credit limit algorithm sexist" and B.
"we should be able to know how an algorithm decides things and/or we should be
able to see the algorithm".

You can't determine A, simple as that. The only way would be if Apple/GS comes
out and says something like "he capped out the household credit limit", which
Apple/GS would probably only tell him anyway.

B is a different issue, one with lots of room to actually converse over. But
it's one which the author seems to pivot to after some reasonable arguments
are presented as to why her credit limit was literally $57, making his
argument for A very weak.

------
shuckles
Credit lines are given based on stated income, not on credit score. A common
problem in this industry is that non-working partners state their personal
income instead of household income. Even if an applicant is approved due to
strength of credit, the bank won’t underwrite a large line for low stated
income.

------
kchoudhu
What I don't see _anywhere_ in that thread is how much his wife put in as her
income when applying.

------
esotericn
It strikes me that longer term, we're just going to end up playing a cat and
mouse game with people continually inventing different categories that
businesses aren't permitted to 'discriminate' on.

Today it might be gender and race. That makes a lot of sense, because the
alternative is to further entrench what are basically inheritances.

But aren't we just going to rattle on through and have the algorithms discover
(whether we actually realise it or not) that, say, someone diagnosed with X is
less creditworthy than someone diagnosed with Y, or that someone bullied in
school is less creditworthy, or whatever else?

The whole point of ML is to extract this sort of information from a dataset.

Is it even possible or meaningful to create an unbiased model? Doesn't a
model's profit imply bias, whether we currently consider it morally correct or
not?

I'd be interested in an argument to convince me otherwise. My view at the
moment is basically 'we spend all of this time building models, and then we
have to stop using them because they're socially negative/immoral, but for a
brief period shareholder value was maximised'?

~~~
matthewmacleod
That’s not a great argument though - essentially “we can’t prevent an ML model
from being biased so we should embrace it.”

Western society has mostly accepted the idea that—outside of some specific
cases—we should avoid building systems and processes that systematically
discriminate against people on the basis of a selection of characteristics
which have historically attracted it. The exact application of this concept,
the interpretation of it, and the boundaries of discrimination or protected
characteristics will continue to be subject to gray areas and refinement. The
rules are not perfect.

But I do think a better solution to the problem (“we have implemented a whizz-
bang new technology which is inherently subject to bias”) is to either fix or
discard that technology, rather than discarding the concepts of equalities
regulation and civil rights.

~~~
esotericn
I think you've mistaken me.

My argument is that if we take the standpoint that we don't just want the
metric to be whatever is short-term economically optimal for the designer of
the model, we should just stop/ban it now, because we already know that we're
going to have to kill it once we actually understand it properly.

It's only allowed now because we haven't figured out the bad things that are
happening.

Inventing more and more categories that businesses are not permitted to
discriminate upon is precisely the wrong approach - if we're talking about
huge companies and not the bakery down the road, we need something that's more
like 'you need a very good reason to exclude someone', rather than 'you can
exclude someone for whatever reason they like, unless they're a member of the
set of continually extending list of protected categories'.

------
Simulacra
I think he’s making a lot out of nothing; there’s no way really to prove this,
and we only have him and his wife as a sample.

------
themgt
I would take DHH's interpretation of this with a large grain of salt. He seems
to assume that living in a "community property state" will be taken into
account (presumably then considering his wife to have parity with his
income/score) by the credit limit algorithm, but we have no way to know that
that's the case.

He also adds "It gets even worse. Even when she pays off her ridiculously low
limit in full, the card won’t approve any spending until the next billing
period. Women apparently aren’t good credit risks even when they pay off the
fucking balance in advance and in full."

Are we really to imagine the Apple/Goldman algorithm has some
"if(gender.female){ cc_payment_terms = :discrimination }" sort of code in it?

FWIW I'm a male and have a mid-700s credit score and was denied Apple card
approval. I am fairly certain there was no sex discrimination involved in the
denial.

~~~
jacquesm
It won't have that line in their code. What it very likely will have is some
Bayesian algo or maybe if they're really high tech some ML black box that will
have gender as one of its inputs. And it probably shouldn't have that input.

The whole idea behind decisions like these is that they have to be
explainable, and ML especially does not lend itself well to that.

~~~
matthewmacleod
Note that an ML box doesn’t even need gender as an input. Name alone probably
gets you 90% of the way there.
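As a toy sketch of that point (the name-frequency table below is invented for illustration, not real data, and any real system would use far richer signals):

```python
# Hypothetical name -> P(female) lookup. The numbers are made up for
# this example; they stand in for the kind of frequency table one could
# build from public name statistics.
NAME_P_FEMALE = {
    "james": 0.01, "mary": 0.99, "david": 0.02, "jennifer": 0.98,
    "michael": 0.01, "linda": 0.97, "sarah": 0.99, "robert": 0.01,
}

def inferred_gender(first_name: str) -> str:
    """Guess gender from first name; unknown names return 'unknown'."""
    p = NAME_P_FEMALE.get(first_name.lower())
    if p is None:
        return "unknown"
    return "F" if p >= 0.5 else "M"

# A model that is fed names effectively sees gender, whether or not
# gender is an explicit input column.
print(inferred_gender("Mary"))    # F
print(inferred_gender("Robert"))  # M
```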

~~~
xtacy
That points to the core of the issue. "Fairness" in ML algorithms can be hard
to define and assess.

It's easy to say "omit gender from the model", but the real issue here has to
do with the _causal_ pathways between your input variables and the output
variable.

Since ML mostly works by exploiting correlations between the input and output
variables, omitting gender doesn't mean gender's influence is removed. You'll
have to omit all the causal pathways from gender -> the output, effectively
"d-separating" [1] gender from the output. Whether that's practical or not
depends on how well we understand the data generating process.

[1]
[http://bayes.cs.ucla.edu/BOOK-2K/d-sep.html](http://bayes.cs.ucla.edu/BOOK-2K/d-sep.html)
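A minimal synthetic illustration of that leakage (the data-generating process here is invented for the example and is nothing like any bank's actual model): gender is omitted from the inputs, but a feature causally downstream of gender carries its signal into the predictions anyway.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Assumed toy data-generating process: gender drives a proxy feature
# (say, a spending-category mix), and historical limits were biased.
gender = rng.integers(0, 2, n)               # 0 = male, 1 = female
proxy = gender + rng.normal(0, 0.5, n)       # feature downstream of gender
income = rng.normal(50, 10, n)               # independent of gender here
limit = 10 * income - 5 * gender + rng.normal(0, 1, n)

# Fit ordinary least squares on (income, proxy) only -- gender "omitted".
X = np.column_stack([np.ones(n), income, proxy])
beta, *_ = np.linalg.lstsq(X, limit, rcond=None)
pred = X @ beta

# The fitted model still assigns women lower limits on average, because
# the proxy transmits gender's influence.
gap = pred[gender == 0].mean() - pred[gender == 1].mean()
print(f"average predicted limit gap (M - F): {gap:.1f}")
```

The point of the sketch is exactly the d-separation issue above: dropping the gender column does nothing as long as an open causal path from gender to the output remains.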

~~~
rightbyte
You can simulate it with artificial queries with the same data except gender
and name and see if women get less or more.
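A quick sketch of such a paired-query audit (the `score` function is a made-up stand-in for the real black box, biased on purpose so the audit has something to find):

```python
import random

# Hypothetical stand-in for the black-box scorer; it sneaks in a gender
# effect so the audit below can detect it.
def score(applicant):
    base = applicant["income"] * 0.1 + applicant["credit_score"] * 0.05
    return base * (0.5 if applicant["gender"] == "F" else 1.0)

def audit(scorer, applicants):
    """Re-score each applicant with only the gender field flipped and
    return the mean male-minus-female score gap."""
    diffs = []
    for a in applicants:
        flipped = dict(a, gender="F" if a["gender"] == "M" else "M")
        male, female = (a, flipped) if a["gender"] == "M" else (flipped, a)
        diffs.append(scorer(male) - scorer(female))
    return sum(diffs) / len(diffs)

random.seed(1)
applicants = [
    {"gender": random.choice("MF"),
     "income": random.uniform(30_000, 150_000),
     "credit_score": random.uniform(600, 850)}
    for _ in range(1_000)
]
print(f"mean score gap (M - F): {audit(score, applicants):.0f}")
```

Note this only tests an explicit gender input; it would miss a model that infers gender from a proxy, which is the harder case discussed above.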

------
bobobob420
Highly doubt Goldman Sachs included gender discrimination in their risk
model...do regulators get to see the risk models credit companies use to
determine how much credit one can use?

~~~
tempsy
It's well known there are proxies you can't use when building a credit risk
model, e.g. zip codes are off limits because they can be used as a proxy for
ethnicity.

Intentional or not, it's possible they could have used something that proxied
for gender.

------
dghughes
Hopefully anything linked to gender is looked into, for example car insurance.
Men typically pay significantly more than women for car insurance.

~~~
zrail
Car insurers can point at actuarial data to back up their rating decisions.

~~~
xenihn
Are you saying that there are insurers who don't base their rating decisions
on actuarial data? Isn't that a requirement?

~~~
zrail
Parent comment said car insurer.

------
donjp
Ok, let's try to put it all into context. Denmark: one of the most egalitarian
countries in the world, the 1% of the 1%. I live here and I'm not a Dane;
their vision of the world can be quite bubble-like and skewed. I'm almost sure
that he's jumping to conclusions.

------
tareqak
Same story from the Associated Press: “NY regulator vows to investigate Apple
Card for sex bias”
[https://apnews.com/8754cf30526b4b94a3ba6e1cfc1d5054](https://apnews.com/8754cf30526b4b94a3ba6e1cfc1d5054)

------
szczepano
Can you get insurance against algorithm damage? I see a niche here.

------
neonate
[http://archive.is/fo7My](http://archive.is/fo7My)

------
r99g7
And my wife's Apple Card limit is 20x higher than mine. Are they
discriminating against men too‽

If I was as rich as DHH, who makes millions per year, I would cancel my credit
cards and refuse to do business with companies like Goldman Sachs entirely.

I wouldn't give a crap about 2% cashback and I don't know why he does.

And despite the fact that he's probably wrong about the entire complaint, the
least he could do is cancel his cards in protest. Instead, he seems to have
accepted the "bribe" (his word) quite willingly.

------
paggle
There could be a manual process here - someone knows that DHH has a >$1
million car and manually edits his credit limit to super high.

------
jacquesm
Good for him. Of course if he had been a nobody this would never have gotten
traction but typically DHH is found on the right side of issues like these and
I commend him for speaking out. This is likely one of very many such instances
and let's hope that insurance companies, banks and other financial service
providers take note.

~~~
rhexs
Speaking out about what? The guy appears to be claiming that a company is
discriminating against women with no evidence aside from a single anecdote.
Seems like he's some sort of rich 1%er founder, probably not approved under
any sort of standard model the typical applicant will fit into, and thus has a
crazy high limit.

Is there anything more to this than typical twitter mock indignation and
outrage?

Now, if his wife was enormously wealthy before they met and this is the
outcome, I suppose that would raise some eyebrows!

~~~
heartbreak
As explained in the very first tweet in the thread, DHH’s wife is exactly as
wealthy as DHH. That’s part of the frustration expressed.

~~~
rhexs
So, if that's the concern, Europeans are free to pass regulation that requires
credit companies to extend the same credit line to husband and wife.
Naturally, that will potentially have unforeseen consequences with regard to
modeling risk, buyer beware.

Seems like my initial comment was on point then.

------
function_seven
Let's assume that there _is_ some valid reason for the difference in credit
limits. (I'm not assuming one way or the other, but let's just grant it here.)

This is still bad for Apple and for consumers. The "ALGORITHM" that can't be
questioned, inspected, explained, or overruled is a massive failure. Whether
it's criminal justice, credit scores, or behavioral predictions, ceding
authority to some "AI" overlord can't end in just or fair outcomes. (Scare
quotes around "AI", because the black box may just be a chain of if/then
statements that conveniently proxy in the worst of our institutional biases.
Or it could be sophisticated ML... proxying those same things.)

I don't think we're done hearing about this. I'll be very interested in what
Apple has to say about it, or what—if anything—is discovered. And it'll be
doubleplusungood if gender is an explicit input into the "ALGORITHM".

DHH is absolutely right to push back against all the respondents that offer
plausible explanations. They're all missing the point. The point is that Apple
should be providing a concrete explanation for their credit decisions. All
credit providers should.

~~~
matthewmacleod
Yep. This is part of why GDPR is so important - the “right to explanation” for
automated decision-making processes.

------
im3w1l
This is a problem that needs to be solved, no question about it.

------
mc32
Maybe these CC issuers should just take gender out of the equation and solely
base their credit extension on economic and behavioral data points.

~~~
matthewmacleod
That would be the obvious first step and an absolute minimal requirement. The
problem is that as these systems get more black-box-y there is little to stop
them making predictions that are affected by inferred hidden characteristics.
Like, “we don’t take race or ethnicity into account, but our insurance
algorithm says that if you have black hair and brown eyes and a name commonly
found in black communities and you live in an area with a large black
population then you are more of a risk sorry but it’s nothing to do with race
honest”.

~~~
MBCook
The chances there is a line in the algorithm that says ‘women get less’ is
basically 0%.

It’s got to be inferences like you said. And those are so much harder to spot.

