Viral Tweet About Apple Card Leads to Goldman Sachs Probe (bloomberg.com)
144 points by gyc 26 days ago | 163 comments



I guess it worked for him to throw a hissy fit, but there's absolutely no reason to actually believe they're conditioning on gender.

It could be "it's a new product, we randomly assign credit limits to see how it affects behavior".

It could be "it's a community property state and we're overexposed to this household if we give the second card the same limit as the first".

It could, realistically, be almost anything except evil bankers deciding to underwrite with illegal criteria that have the side effect of limiting the amount that can be charged to the account each month (you know, how they actually make money).

Oh, random CSRs don't get a pithy explanation of a multivariate nonlinear underwriting decision to poorly convey to customers? That's kind of precedented!


I mean, don't you think it's kind of a strawman to assume anyone thinks machines are explicitly sexist? It seems much more likely that whatever method they use to do feature selection on this obviously high-dimensional data just happens to end up picking something that was a proxy for gender.

You can be implicitly biased without being aware of it. This is true for both humans and algorithms.


> It seems much more likely that whatever method they use to do feature selection on this obviously high-dimensional data just happens to end up picking something that was a proxy for gender.

Yes, this is exactly what everyone thinks happened.

Nobody thinks people at Goldman-Sachs wrote

    if (applicant.gender == "F")
        limit /= 20
somewhere in their algorithm.

But regardless of how the thing happened, if millions of people are treated significantly differently for no reason other than their plumbing, that's a major problem. People have been talking about this "accidental proxy for gender" for years now; there's absolutely no excuse for not doing a basic sanity check to make sure that this kind of thing isn't happening.
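A basic version of that sanity check is just to aggregate outcomes by gender within comparable credit bands. A minimal sketch, assuming a hypothetical export of decisions (file and column names invented for illustration):

    import pandas as pd

    # hypothetical dump of decisions: gender, credit_score, income, limit
    apps = pd.read_csv("decisions.csv")

    # compare assigned limits across genders within similar score bands
    apps["band"] = pd.cut(apps["credit_score"], [300, 600, 700, 750, 800, 850])
    print(apps.groupby(["band", "gender"], observed=True)["limit"].median())

A large, consistent gap within the same band is exactly the kind of red flag this check is meant to surface.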

edit: typo


"for no other reason than their plumbing" is exactly what you just excluded in the prior sentence.


It is absolutely not a strawman. It is incredibly easy to find people granted authority by the state that will claim that, eg, mathematics per se is an instance of exclusionist masculine thinking. It's even easier to find instances where people granted authority by the state will claim that any discrepancy in outcomes is ipso facto intentional.


I would love a customer-company relationship in which the customer gets meaningful information by privately contacting the company with a polite message, but in my experience this is rare. When an answer comes, it tends to misdirect rather than tell the whole truth.

Here you have a bank and a company known for secrecy.

But you acknowledge this:

> random CSRs don't get a pithy explanation

What the throwers of hissy fits are pointing to is that, by building blackboxes, you can get whatever result you want (which doesn't mean all results are expected) with plausible deniability.


Perhaps because this exact approach of a candid private question can be abused to support litigation or corporate espionage.


I've heard anecdotally a number of times that actuaries peg men with higher car insurance rates. I'm only just learning that there's anything like this in the credit system to prevent gender bias. Does something like that protection exist for auto insurance?

As a side note, I don't really understand the thread's appeal to credit scores here, considering that the TransUnion rating system is supposed to be the inferior one that Apple is looking to replace.

But most of all, I'm shocked by the number of people here outraged by Apple's behavior who will not even consider switching away from their iPhone (Twitter OP included; hell, he even posted a screenshot of his recurring TransUnion payment still being served via Apple Pay). I actually happen to agree with OP, but this blasé attitude towards real customer complaints has kept me from using Apple products for years.


For car insurance? This is incredibly well known and not at all anecdotal. There was even a popular car insurance firm in the UK that only insured women, called “Sheilas Wheels”, which touted lower fees than “other insurance companies that have to also cater to men”.

https://youtu.be/GzNJh1o84-E


It's also now illegal for insurers to do this in the EU since the ECJ reasoned that discriminating on gender, even if as a result of such a correlation, was prohibited.

Anecdotally I've been told that (British car) insurers don't care very much about your real actuarial risk, they're focused more on whether you'll actually pay their premiums. Specifically I was told that work to integrate with credit checking services was a priority whereas an integration with the UK Government's service which gives them access to driving offences and other records related to a driving license was back-burnered.

The reason I was given was that in practice they'd found that requiring drivers to give their license details causes a big drop in purchases; if you make the license details _optional_, lots of people fill them in, and you can just give all those people a better price even though you don't use the details to actually check anything automatically.


Even though it wasn't a protected class, GEICO (Government Employees Insurance Company) originally started as an auto insurance company specifically for federal government employees, a pool perceived to be less risky than the general driving population.


> I guess it worked for him to throw a hissy fit, but there's absolutely no reason to actually believe they're conditioning on gender.

There certainly is. Ignoring the anecdotal data of people replying to him who saw the same outcome, credit scores for women skew lower than for men.

One of many sources: https://www.federalreserve.gov/econres/notes/feds-notes/gend...


> credit scores for women skew lower than for men.

Which is crazy, because for most of my guy friends (myself included), saying that their significant others/women make significantly better financial decisions is putting it far too lightly. I basically wasn't making decisions at all (besides savings) until my s/o set me right.

This discrimination is tragic and disgusting.


"Good financial decisions" are only at a very specific margin associated with credit score. If you have an underwater car loan you can't really afford comfortably that you nonetheless make the minimums on whilst eating rice and beans, you'll have great credit after a couple years. If you strategically defaulted immediately after buying your ridiculously underwater condo in 2008 and it took them 3 years to foreclose on you, you made a great financial decision and your credit is trashed for the next six years.

Credit scores specifically correlate with default risk, not some abstracted measure of financial health.


Women tend to have lower incomes and lower net worths. Here is a recent article squeeing about women finally being willing to buy homes on their own:

https://realestate.usnews.com/real-estate/articles/the-rise-...

One of the statistics in it: Single women can only afford about 39% of homes. Single men can afford more than half.

Historically, couples with traditional marriages (primary breadwinner husband, wife whose primary responsibilities were women's work) often did not bother to put her name on a real estate transaction or car purchase. This substantially impairs a woman's ability to establish a credit record and reduces her legal claim to assets.

If you get divorced, the person with a real career and resume to match will likely continue having a good income. Someone who was a full-time wife and mom will face serious barriers to establishing a real career at all and may well remain poor for years to come.

I'm quite financially savvy. Financial savvy only goes so far. Ability to pay still matters and men are more likely to have that piece covered.


> If you get divorced, the person with a real career and resume to match will likely continue having a good income

Since I believe that gender roles are about splitting benefits and responsibilities, I asked myself what the outcome of a divorce should be in a gender-stereotypical situation.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5992251/

A man who focuses on the traditional gender role of breadwinner and a woman who focuses on the traditional gender role of social management will see decreased income for the woman and decreased social support for the man after divorce. This puts the woman at increased risk of poverty, while the man has an increased risk of loneliness.

There is also some implied finding that women who remarry see on average a decrease in household income, while men do not suffer a persistent increase in loneliness. I do not think there is a major mystery why that is, considering the differences in inter-sex competition for men and women.


Yeah. It wasn't that long ago the mortgage industry was rife with blatant gender discrimination, which contributed to this issue. I've learned this from the compliance training at the mortgage lender where I currently work.


I don't doubt it, but I think it's far more complicated than that. I think a great deal of potentially low hanging fruit is actively overlooked by the tendency the world has to go looking for ill intent.

In cases where the explanation is something like "No, seriously, she simply cannot afford it." efforts to root out presumed bias are not only unhelpful, they are actively counterproductive.

I need more income. Period. If you were to approve me for a mortgage this minute for the $550k building I desire, I simply cannot make the payments. It's not going to fix anything.

First, get me the money. That's the one thing no one on this planet seems seriously willing to help me with and there is always some BS excuse.


Totally agree. When I taught high school in an impoverished area, someone somewhere said that poverty is an acute lack of money, period. Everything else is a symptom (or feedback cause). So we spend all this time creating convoluted strategies to address poverty, except actually giving people money. Obviously, doing so wouldn't fix all problems on Day One for all people, but I now suspect it would be a lot more efficient than a lot of our efforts, and it would make the remaining problems more tractable.


> Women tend to have lower incomes and lower net worths.

Very true, because they aren't as smart and they don't work as hard.

No, wait, sorry, it's actually because there are stereotypes about women that aren't even remotely true that are reinforced decades after they were found to be offensive through institutional sexism such as the gender pay gap and the glass ceiling.

So, maybe the lower credit scores simply reflect the deck stacked against women in the workplace.


I'm plenty frustrated with actual sexism of the sort you and others here keep insisting is the entire explanation.

But, no, that's not remotely the entire story. Other factors that help suppress female income:

1. They tend to spend far more time on unpaid activities ("women's work") than men.

2. When moving to a new city, a man is very likely moving to a new job. This usually entails a pay hike. A woman is much more likely to be following her man's career to a new town. This typically involves taking a pay cut and derailing her career aspirations.

3. Men are more likely to pick a college that supports their career goals. Women are much more likely to attend whatever happens to be locally available and affordable.

4. Men who start businesses are typically making a career move. Women who start businesses are more likely to be starting a "lifestyle business" to accommodate other demands placed on their time and energy, such as special needs children.

I'm sure I could go on. I've read research on the topic for literally decades to try to understand where my life went wrong, but it's ultimately pointless: I can't cite specific sources, most of the members of HN aren't familiar with such stats, and, no, just taking my word for it as an SME in the sorts of things that have negatively impacted my life is simply not something anyone here will do.


Except for

1) according to him, the bureau they were using for underwriting data indicated she had a higher credit score

2) "conditioning on credit score which is correlated with gender" is not conditioning on gender. The algorithms that derive credit score from a set of credit data do not condition on gender.


> 2) "conditioning on credit score which is correlated with gender" is not conditioning on gender. The algorithms that derive credit score from a set of credit data do not condition on gender.

It does if the algorithm is based on flawed assumptions, or worse, fed data based on institutionalized sexism.


I dare you to claim Goldman is using gender as an input variable for credit underwriting.

Since no one is actually doing this, we're back to something like "maybe (altho there's no evidence for this either) people indistinguishable from Mrs. DHH actually do look more likely to default, which is society's fault and thus Goldman should just take the hit".


I mean you know the system is sexist when one gender out-lives the other by a considerable number of years yet the retirement age for both is the same.


> It could, realistically, be almost anything except evil bankers deciding to underwrite with illegal criteria

FTA:

“My belief isn’t there was some nefarious person wanting to discriminate. But that doesn’t matter. How do you know there isn’t an issue with the machine-learning algo when no one can explain how this decision was made?”


They can explain; they most likely don't want to. Also, how does this guy know this is a "machine-learning algo" and not just a series of ifs and elses?


It’s much more likely a series of ifs and elses, not a fancy ML model. Companies have been doing automatic credit decisions for decades and there’s no evidence that GS bank is doing anything different from all the others.


> but there's absolutely no reason to actually believe they're conditioning on gender.

The anecdotal evidence, that many women are claiming that they received much lower credit limits than their partners, despite similar or superior credit conditions, would seem to indicate otherwise.

I don't believe anyone is saying that they're directly doing it. Just that their algorithms have that outcome.


It could also very well be "We trained our neural net on historical allocations of credit limits by professional analysts, who had a subconscious gender bias". The point is there's no way to know.
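To make that failure mode concrete, here's a toy illustration (entirely synthetic data, no claim about the real system): the historical labels carry the analysts' bias, and a model trained on them reproduces it through the gender column, or through any proxy for it.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(1)
    n = 5_000
    income = rng.normal(80, 20, n)
    gender = rng.integers(0, 2, n)  # 1 = F; stands in for gender or any proxy
    # biased historical decisions: same income, women got 30% lower limits
    past_limit = income * 100 * np.where(gender == 1, 0.7, 1.0)

    model = LinearRegression().fit(np.column_stack([income, gender]), past_limit)
    # two applicants identical except the gender/proxy column:
    print(model.predict([[80, 0], [80, 1]]))  # the learned gap persists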


> I guess it worked for him to throw a hissy fit, but there's absolutely no reason to actually believe they're conditioning on gender.

Of course not, but why spoil the outrage mobs victimhood narrative du jour?


Other than him being very outraged and claiming gender-based discrimination again and again, is there actually anything to suggest the algorithm is biased on the gender axis, as opposed to any other axis?

Just to be clear: I am not dismissing the possibility that the algorithm (meaning the training data, really) is gender biased, it just isn't clear from what I have seen in the tweet storm that this is necessarily the case.

E.g. I had an ex-girlfriend who had a slightly lower income and slightly worse credit score (well, German Schufa score) than me, and yet she got offered like twice as much credit when she applied for a card. I am guessing (just guessing) that this was due to her having paid off e.g. a car loan in the past and being generally more consumerist than me, while I had never taken out any major loans.


The whole thread is less about his wife having a lower credit limit, and more about Apple reps not understanding the fucking algorithm.

The axis doesn’t really matter. Maybe it’s gender, maybe it’s not. In any case, it should be about money, credit score, trustworthiness. Apparently, it’s not. Being a woman could be one indicator. It might be "being black" for other people.

There seems to be something fishy going on and Apple doesn’t even know why. Hence, there is a bias.


>There seems to be something fishy going on and Apple doesn’t even know why

That seems a little silly, though, since Goldman Sachs is the bank in charge of the card and approval process. Just like Amazon doesn't know anything about the processes that are used to determine credit worthiness for their cards (that falls to Chase and Synchrony), Apple is merely licensing services from GS.

On top of that, GS representatives that work in a call center aren't going to know anything about the details or inner workings of the process either. With the amount of potential abuse for such information, it feels like it would be irresponsible for those people to have access to that info.

The bigger issue is that this couple had a valid complaint and had to turn to Twitter to resolve it instead of GS having some formal method of requesting the info.


After this, though, I would think this is a good opportunity for Apple to work with Goldman to better understand the algorithm and strive to make the process more fair and transparent. Just like how Apple deals with 3rd parties making its devices, where it can help enforce labor and environmental rules.


Definitely. And it sounds like this is exactly what's happening.


DHH posted about this exact thing today. Basically: who cares? It has the Apple logo on it. Apple says: "It was created by Apple, not a bank."

They are responsible and nobody else is to blame if the parts they outsource fail.

https://mobile.twitter.com/dhh/status/1193716881351294976


>who cares?

Apple said they weren't aware of the details surrounding the decision and had the customer (DHH) work with Goldman Sachs to determine the details of the decision. I don't think that's unreasonable, and it's not like Apple just washed their hands of it and said "We don't know, not our problem."

My response was directly responding to the person that said that Apple employees didn't know why the decision was made. Apple was attempting to get an answer for their customer. Especially with credit and finance information, this isn't something that would be acceptable for a low-ranking Apple employee to discuss with the customer and, if DHH is claiming that it should be because it's "Apple's card", then I think he's full of crap.


> is there actually anything to suggest the algorithm is biased on the gender axis, as opposed to any other axis?

None of us are allowed to know that. That's the big problem here. They delegate the decision to a black box that cannot be questioned.

When the process is set up like that, I think it's fair to assume the worst case scenarios and put the burden on the company to prove otherwise.

Adverse inference is a sensible way to combat secret decisions like this.


It’s notable that a lot of mid-20th-century fear surrounding the increased use of computers centered on this exact scenario: a black box, the judgement of which cannot be questioned, deciding your fate, with no recourse.

We all scoffed at this for a long time. ML makes it real, apparently.


Black-box algorithms that affect your livelihood are bad, and I don't understand how anyone can be opposed to full transparency.

I don't care what the context is. Whether it's credit ratings, job applications, or college admissions.


>None of us are allowed to know that. That's the big problem here. They delegate the decision to a black box that cannot be questioned.

The entire article is about questioning the black box, with regulatory force. It's a good thing that it is going to be investigated.

>When the process is set up like that, I think it's fair to assume the worst case scenarios

Disagree. Consider it yes, but not assume it as a foregone conclusion.

> and put the burden on the company to prove otherwise.

Agree.


I haven't seen any actual evidence other than his single data point of him vs. his wife's experience. Of course it's possible that the algorithm is designed to give lower credit limits to women, but I find it unlikely. My wife and I have applied for dozens of cards over the past 10+ years and I've consistently found that once your score is above some reasonable threshold, the biggest predictor of what credit limit you will be offered is the credit limits on the OTHER credit cards reporting to your report. Maybe he already has much higher limits on other cards?

I started using credit cards a few years before my wife, and despite us both having excellent credit the first time she started applying for cards she was getting limits around $1K or $2K while at that point I was around 10x that. But after a few years of getting more cards and requesting limit increases, both of us now have roughly the same (fairly ridiculous) limits across all of our cards. Admittedly I don't have any cards issued by Goldman Sachs bank, but I can't imagine their algorithm would be much different than Amex, Citi, Chase, etc.


No, but neither is there anything that would justify giving a spouse such a very different limit especially considering that they file joint tax returns and her credit score is better than his (see article).

It would seem that his conclusion is warranted absent evidence to the contrary; the difference is too large to explain in ways that make any sense.


Maybe I am missing something, but wouldn't it make perfect sense if they have dramatically different income?

I think even if two people have their property in common (and if the algorithm even knows about that), it is still not unreasonable to believe that there is a higher probability of the one with higher income paying off his or her loans.


> wouldn't it make perfect sense if they have dramatically different income?

The implication in citing their joint filing status is they submitted identical incomes. They have the same address, assets, and she had a better credit score. She is also a woman.


If she applied after his application was approved, this might very well explain her tiny line of credit: the algorithm didn't look at him and her in isolation, but at both at the same time. It saw a certain combined income, saw he already had a large line of credit issued, and concluded "I am not giving them another 3000 bucks or whatever, they already got that, so let it be 50 bucks or something, the minimum we can give to a customer, because together they already used up all the credit we're willing to extend when his card was approved."

If she applied first, she might have gotten the big chunk, and he might have ended up with the tiny chunk.

Or it might be that the algorithm training data was just biased against women, which is entirely possible as well.

The black-box-ness, even to employees, of course, is a huge problem.


My wife and I fit that scenario. I was on the card beta. She applied (well, I applied for her) as soon as it went wide release. Exact same credit limit, and she got a five-point lower interest rate. Applications were filled out identically; I did them both.


Income is generally supplied by the applicant at the time of application, and they presumably both used the same household income number.


I don't think that income is better than historical data of you paying on time.

I have 3x the pay of my wife, but she always pays on time and I get overdue notices because I forget or don't care.


Credit card limits are usually based on household income.


DHH said his wife actually had a higher credit score than him.


Doesn’t that concede the point that married couples can have different credit profiles? How much higher is her score? What is the max amount of credit GS will issue to a household? DHH has a history of acting righteously angry without having all the relevant facts.


> DHH has a history of acting righteously angry without having all the relevant facts.

Well, if you know a better way to get the relevant facts without throwing a fit on social media until hopefully a state AG takes note, then maybe we’ll all do that next time instead.


So what? Credit score and income are two separate things. Credit applications look at both.


I don't think DHH or his wife are a risk with respect to being able to pay off their loans. Any reasonably adept credit scoring algorithm should be able to pick up on that.


Well, it seems like both you and the twitter thread have anecdotal evidence. I'd say that the fact that an internal investigation is being launched lends credence to the belief that the discrimination is at least possible.


Saying you're going to launch an internal investigation is often a way of calming a PR fire. For a question like this, people can often get an answer with a phone call or email.


In this case, it’s a state government investigation.


In the thread, it was pointed out that his wife actually has a better credit score than he does, and judging from the photo, it looks like (I may be wrong) she only got about $50 of credit limit.

I had the same thought as you did when I first read it. For those of us not in the US, it seems strange, and we assume those credit scores and algorithms work as intended, as in your example of past loans and payments. And from experience they tend to be consistent and easily explained.

In this case however it seems something is very wrong.

I am still thinking it over and not sure Apple is to blame. It is easy to just point a finger at Apple, but in reality our financial and insurance systems work pretty much the same way, and changing these algorithms will require lots of work. Luckily GS is new to all this consumer business, so changes are much easier compared to other banks.


>it looks like ( I may be wrong ) that she only gets like $50 of credit limit.

She actually got a higher limit than that but had spent the majority of it. The complaint was that she had already paid off the entire balance she had spent, but the limit wasn't restored upon payment and wouldn't be until the end of the billing cycle, so she only had a $50 limit until that point.


The original Twitter thread is still being updated, and it's a doozy.

This is not an isolated incident.

https://twitter.com/dhh/status/1193240508845510656

https://twitter.com/dhh/status/1193242909111398401


I haven't read the whole thread, but I am seeing him bitch about her credit score being higher -- from a single agency. There are other agencies which may have different info.

I know nothing about this particular credit card. I'm not as up on credit stuff as I once was (and the world has changed a lot since then). But when I was a homemaker, I had a credit card in my name with a much higher limit than my husband had on any of his cards. That's not exactly the norm.

There are various factors that go into this. He's not wrong to suggest that there is a very big problem with employees having no idea what went wrong. I'm less confident that it is reasonable to infer gender is the entire explanation.


> I haven't read the whole thread, but I am seeing him bitch about her credit score being higher -- from a single agency. There are other agencies which may have different info.

If you read the thread, he was specifically told the Apple Card uses Transunion, which is why they checked with Transunion, and found her score was higher on that report: https://twitter.com/dhh/status/1192945415538106369?s=21


The score you get from any of the credit reporting agencies is not necessarily the score that will be used to determine your credit by any lender. That is a representative score based on one metric (I think TransUnion's is Vantage 3.0), but lenders can use any scoring system they want; the credit report will be the same but the score may be different. It really does show a complete lack of understanding of the credit scoring system on his part, see: https://en.wikipedia.org/wiki/Credit_score_in_the_United_Sta...


He was also repeatedly told "It's the algorithm, man! And I have no clue what's in it!"

I worked for a Fortune 500 company at one time. Lots of entry level employees were not exactly reliable sources of info about how decisions got made there.

You may or may not be talking with an entry level employee in their call center, but you probably aren't talking to a departmental head or member of the C suite.


Let’s assume that the tweet author is reasonably well-versed in business operations, since he is himself a very successful businessman.


Sure.

Let's assume I am very well versed in other pertinent domains of knowledge, like social psychology and the tendency for people to get mad as hell about social justice issues and leap to ugly conclusions that fit their SJW narrative about evil in the world that can be conveniently lumped under a one word heading, like sexism.

Let's further assume that this is actually actively counterproductive, so it's reasonable to point out that correlation is not causation and it's unhelpful to insist on a particular conclusion you cannot prove.

I already noted he's right to be outraged at the situation and critical of the black box nature of the decision. I'm just not comfortable with him ranting that it's clearly and obviously due to sexism.


There are many people chiming in that they have had the same experience, and nobody saying they had the opposite. If the credit limit difference were unrelated to sex, there would be a random distribution of couples with the inverse experience.
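For what it's worth, even a small tally of paired reports can be checked with a simple sign test. A sketch (the 19-of-20 count is hypothetical, not an actual tally from the thread):

    from scipy.stats import binomtest

    # suppose 19 of 20 reporting couples with comparable finances saw the
    # man get the higher limit; a sex-blind algorithm makes each a coin flip
    print(binomtest(19, n=20, p=0.5, alternative="greater").pvalue)  # ~2e-5

The obvious caveat is self-selection: couples with the inverse experience may simply not be replying.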

So you sayyyy that he flew off the handle because he's an SJW or whatever, but because his initial assumption continues to be proven correct as more data is accumulated... it seems like you are wrong that it was an overreaction.


> So you sayyyy that he flew off the handle because he's an SJW or whatever, but because his initial assumption continues to be proven correct as more data is accumulated... it seems like you are wrong that it was an overreaction.

That's basically a dismissive personal attack.

One of the most frustrating and crazy-making aspects of participating on HN as openly female is the frequency with which one must politely endure phenomenal open disrespect from people trying to position themselves as pro women's lib while violating the guidelines here concerning how to engage respectfully with other members. The only thing more crazy making is that there tends to be hell to pay should a woman dare to point it out or otherwise try to defend herself.

For me, it is made more bearable by the quiet support of the many people who upvote my comments and posts, flag the worst replies and comment thoughtfully on pieces I submit.

Yes, sexism is very much alive and well. I get to experience it on a daily basis.

It still does little to no good for powerful men to engage in public white knighting and level accusations they cannot back up.

I will note we are reading his tweets on HN, not his wife's. We are discussing the opinions of a powerful man, not a woman. We are reading them largely because he is a powerful man, not because he can back up his assertions.

Discussions of this sort are sometimes a case of "two steps forward, one step back." But as a woman participating in them, they all too often feel like a dystopian bit of theater in which men get to claim virtues they don't have and treat a woman badly while loudly proclaiming themselves against this evil thing called sexism.


Not all attempts by men to advocate for equality or call out bullshit are “white knight”-ing. White knighting implies that it is unwanted: that I am charging in to “rescue” someone when they do not want to be rescued. It is legitimate for a man to call out a bias when he sees it. To ignore it and leave it for women to point out is... frankly, helping maintain the status quo.

I’m sorry you found my comment dismissive and a personal attack. I found your use of the term “sjw” dismissive of the issue, since it’s such a loaded term. So my tone was a bit... glib... in reaction to that.

I also definitely didn’t read your username, so don’t take anything I said to be a reaction to your female handle. I definitely assumed a male writer (which is its own problem).


> I also definitely didn’t read your username, so don’t take anything I said to be a reaction to your female handle. I definitely assumed a male writer (which is its own problem).

It is, in fact, a much larger problem.

To my mind, white knighting is about men playing hero in order to enhance their ego and public reputation as the primary or sole goal such that actually addressing sexism is not only incidental, it's actually counter to their goal.

Being chewed out by you and lectured about how I'm wrong to find any of this offensive amounts to mansplaining.

At every turn, no matter how much men theoretically decry the existence of sexism in the world and pretend to fight against it, when push comes to shove, they expect to be treated with respect by women while not themselves being respectful to women. That expectation amounts to demanding deference from women.

Start by working on treating actual women you are actually interacting with in the here and now with actual respect instead.

That includes not assuming everyone you speak with on HN is male. If you don't know, don't assume. That assumption based on the odds is a fundamental part of sexism, racism, etc. It's a really huge issue.

If you really want to see change in the world, get with the man in the mirror and work on his bad habits. He's the person you have the most control over.

If every man who ever beat his chest about how sexism is a bad thing spent more time working on his bad habits, things would change.

Instead, what happens is every time I comment, multiple people treat me like shit and then come up with justifications for their behavior and reasons why the problem is me and then fail to see the irony in decrying sexism while basically telling me "Shut up, woman." in the same breath.


> If every man who ever beat his chest about how sexism is a bad thing spent more time working on his bad habits, things would change.

You can say that again. This place and many others would be unrecognizable.


Sorry, I still don’t agree that I “treated you like shit” by arguing that the facts don’t agree with your dismissal of the original story as an “sjw” overreaction.


I don’t see how what he said to you is a personal attack, nor is it different from how men talk to each other here on a daily basis. Saying someone is wrong is routine.


I'm not sure how them possibly being a 'SJW' has any relevance to this argument, as that's essentially a dismissive personal attack and a rather shitty one at that.

Given my experience with ML algorithms having bias against minorities, I think it's fair to assume the worst when it's a black box algorithm. You feed it bad data, and you get bad results out.


Also in compliance training, as mentioned in a different reply to you, we talk about disparate impact. As I'm sure you can guess, this principle implies that sexism can be thought of as an outcome, not just the result of explicitly "evil" attitudes.

In other words, I think he's right to say that Apple should be accountable for the issue he has pointed out, regardless of how it happened, which you rightly point out that he can't possibly know for sure.


I desperately wish that I knew a different word for the negative impacts that harm the lives of so many women. Both allies and others seem incapable of hearing that I would like to see better outcomes when the only word available is sexism. To far too many people, that words means "It's done on purpose by evil people with a heart of darkness." This actively interferes with effective communication because "good guy" ally types don't ever want to hear that some behavior of theirs is part of the problem and needs to change.

It helps make the problem insanely intractable.


As a black person, I know what you mean. I suppose I'd rather fight the battle of expanding people's notions of what these isms are. They're not just horrifically bigoted beliefs with harmful intentions. They're also beliefs people hold with the best of intentions that are nonetheless harmful to some people (e.g. paternalism towards marginalized people). And they're also impacts we have that don't involve an explicit belief at all. I just personally believe that personal responsibility entails doing the work to uncover the ways in which we ignorantly step on other people.

I recently wrote something along these lines: https://acjay.com/2019/10/07/ableism-the-sneakiest-ism/


In some cases, they are beliefs people hold and don't even realize they hold.

My ex was career military and we arrived at a new duty station and he became fast friends with a coworker. He talked all the time about his coworker: "John" this and "John" that (John is not his real name). He rather got on my nerves with how much he blathered on about John.

In all those months, he never once mentioned that John was black. It wasn't anything that made his radar at all as worthy of noting.

When I met John and his family, I was very surprised that he (and his family) was black. I had assumed he was white. Seeing him, I realized in an instant that I had made this assumption because I grew up in the Deep South, and if you didn't mention skin color, the signal there was that they had to be white. If they weren't white, you were expected to give other white people the heads-up.

I realized in that instant that this was a racist assumption and I wasn't as immune to the racism around me as I had thought I had been. I still had been inculcated with practices I was oblivious to as being a problem in that regard.

The surprise showed on my face and I was not able to figure out how to explain that I didn't care that he was black, I was just shocked and appalled to realize that I had made this assumption and was having a come-to-jesus moment with myself. It made for a very awkward meeting.

After that, I tried to just let my kids model race stuff from their father and did my best to butt out. It's an uncomfortable incident that I thought of quite often for some years afterwards.

In part because of that incident, I can be pretty thick-skinned about a lot of low-level, run-of-the-mill sexism because I'm aware that a lot of people are doing pretty much that same thing without realizing it. In most cases, it is easier to combat if I don't try to point fingers, make them feel guilty, publicly embarrass them, etc.

I do sometimes make pointed remarks to try to educate people, but I spend a lot of time trying to simply be the change I would like to see and letting other people react to that as they see fit.

Thank you for engaging me. I did read your piece (and the piece it links to at the start as background).


Social media anecdotes! Oh my!


Yikes. I appreciate DHH for his open source contributions, but this is straight up embarrassing. The sample size for his claims here is exactly 1. There are probably hundreds of data points that go into the algo that determines the initial credit limit. It could be literally anything, from the number of existing CCs in her name, to her credit history, to whatever.

Stop crying, stop bitching, grow up.


Exactly. Since it's a community property state, maybe they took into consideration the existing credit limit they already extended to the household.


The response to codinghorror's "then don't use the Apple Card? Solution seems obvious" is on point:

> This is such a shallow, disappointing take. If we relegate all responsibility for discrimination to the individuals discriminated against, nothing is going to change! Individual action against structural problems is INSUFFICIENT.


The difference between a solution and a workaround. "Not using the product" is a workaround. A solution addresses the root of the problem, and the root of the product's problem is seldom "The consumer bought the product".


In theory it corrects the problem too, as Apple would lose customers. In practice, Apple has customers no matter how badly they screw up.


Since we are slinging anecdotes around, starting with TFA: my wife and I file jointly and have what I believe to be a good credit score. I filled out what little form there is for both of our cards, put down the same income, etc. I think our credit limits are the same (need her phone to verify), but she got 13% and I got 18% APR. Now how the hell is there a five-point difference? Not that it matters, because they both get paid off, but WTF? (And 18%; seriously, GS?)

But on topic, she got the considerably better interest rate.


>“Any algorithm, that intentionally or not results in discriminatory treatment of women or any other protected class of people violates New York [and federal] law.”

I worry about this a lot with the growing importance of algorithms and machine learning. You can't just not actively program the thing to discriminate and assume that is enough. You have to specifically program it to not discriminate.
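The standard first-pass test here is disparate impact, e.g. the four-fifths rule borrowed from US employment law. A toy sketch (numbers invented):

    def disparate_impact_ratio(group_a, group_b):
        # group_a, group_b: lists of 1/0 favorable-outcome flags
        return (sum(group_a) / len(group_a)) / (sum(group_b) / len(group_b))

    # e.g. approval flags for women vs. men; below ~0.8 is the conventional
    # red flag for disparate impact
    print(disparate_impact_ratio([1, 0, 1, 1, 0, 1, 0, 1],
                                 [1, 1, 1, 1, 0, 1, 1, 1]))  # ~0.71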


So, "credit realism", to match "IQ realism"?

I see this a lot, and the background assumption seems to be "in the real world, minority group A is actually riskier, dumber, or objectively worse in some other way, so in order to comply with anti-discrimination, we have to introduce special cases."

Maybe instead we should start with the assumption that women are NOT riskier, dumber, or objectively worse, and fix the likely bug, instead?


Because the goal was to be realistic, not to assuage white guilt.


All uneven distributions must be weighted accordingly to emulate a uniform distribution across classes. That's apparently what some people want.


Implementation details of law and enforcement matter a lot, but in general I agree with the spirit of it.

As the world depends more and more on algorithms, we need to have more security around them. Especially as algorithms are used a lot for cost cutting, reaching a competent human to appeal an error is becoming harder and harder.

If we don't put more scrutiny around tech, we may end up living in a kafkaesque world.


One data point really isn't enough to draw a conclusion and I actually doubt it's true. GS is extremely scrupulous when it comes to avoiding liability and they probably put a ton of diligence into their risk model. It's possible there's some emergent gender bias but we'd need a better show of proof.


> One data point really isn't enough to draw a conclusion

My wife's credit history is longer and historically her score higher because I used no revolving credit until recently, and not using credit cards counts against you. Also married filing jointly...

For the Apple Card, I was given her limit several times over.

I did notice that the expected limit is shown before the hard pull on credit. This means they've got a pretty good idea before getting the latest credit report.

> GS ... probably put a ton of diligence into their risk model.

To your point about the risk model, I generally think the credit score the bureaus give us is wrong, while I think the decision GS made on the card limits is probably plausible ... if there's some chance or probability we might split up.

For instance, we work in different states, so data patterns might look like we are already separated? I also don't know if she put her income or household income. My income has been several times hers since long before we got together.

Rather than gender bias, I would imagine that given the probability of divorces at a certain age, executive level, and income bracket, that would put a thumb on the scale for ... what if you were not married filing jointly? Who makes more? What cash payment can they carry without going broke? Weighted that way, we should have different limits, and it's not a gender thing. This is the kind of correlation humans might not come up with, but ML probably would.

If this is purely actuarial, the decision might be correct, while not feeling moral.


Do you remember which one of you applied for it first?

To me it seems odd that married couples, filing jointly, get separate credit limits from each other at all. Aren't you one economic entity?


> To your point about the risk model, I generally think the credit score the bureaus give us is wrong, while I think the decision GS made on the card limits is probably plausible

I find this likely too; the first time I applied I got denied, and the credit score in the email was considerably lower (-100 points) than what is shown on Credit Karma.


This is a common misconception. There is no such thing as a single credit score. There's FICO 2008, FICO 2015, FICO for car loans, FICO for revolving credit, VantageScore, and so on and so forth. And each of these can be based on data from any of the three different agencies, which may be significantly different. So having two different credit scores that are off by 100 points isn't even remotely unusual. Many times even the scale itself is different (max of 850 vs. 950 on another model).


Did you and your wife put different incomes in the application? Do you have identical debt loads on the credit report?


What we really need is the ability to force organizations to expose their models.


To regulators, sure. Entirely reasonable. To your average Joe? No. This is already done to prevent redlining when originating mortgages.

Disclaimer: Work in financial services in risk management, interface with regulators. Opinions are my own.


[flagged]


The model is not your property. Do you plan on releasing all of your company’s code on Github gratis? Unlikely.

As long as the model doesn’t violate the law, you have no right to it, unless you’re a regulator. Regulators and oversight bodies get full (audited and logged) access.

And while it’s none of your business, I don’t need my job, or a job at all. I work for projects I enjoy.


In the modern economy it’s impossible to not be constantly and pervasively impacted by these data models. I think it’s reasonable to assert that it is in society’s best interest to make sure that the effects and impacts of these models are communicated as perfectly as possible to those who are forced to live under their purview. What way is this done besides viewing the model? (And given the state of regulatory capture, I’m not sure you can rely on regulators as proxies, no?)

It’s totally possible that those models could then be “gamed”, sure—but it’s more important that people are able to live than for lenders and the like to make yet another buck, isn’t it?


Lenders can still realize a reasonable profit while ensuring people are not improperly discriminated against by machine intelligence.

Effective regulation is possible. I would even be willing to serve a tour of duty standing up such a regulatory body, perhaps as a division of the CFPB. Throwing up your hands that government is ineffective entirely is unreasonable, just as asking for all models to be public is.


Effective regulation is possible, sure. Is effective regulation likely? And is it made more or less likely by transparency?

Heck, as I think about this further--wouldn't transparent models help realize those reasonable profits? If everybody's cards are on the table, doesn't that encourage the market to move towards perfect competition?


If you’re open to a conversation about what legislation would look like, I am receptive to collaboration.

The best people for public policy are the ones who don’t want the job, but do it because it’s necessary.


If you're really financially independent, that's even more incentive for you to support exposure and transparency. People are being harmed by these models. You would lose nothing, and others would benefit.

>As long as the model doesn’t violate the law, you have no right to it, unless you’re a regulator.

If you have no realistic alternative to being judged by a model, you absolutely should have a right to it. Maybe not the complete implementation, but a verified summary that could be understood by the average person.


Problem is, once a model is exposed it can be easily gamed.


You can't really fake some of the inputs. Income, number of loans, stuff like that.

You can manipulate them of course, by taking out a cheap loan that you don't necessarily need and things like that. Plenty of people use a credit card for a few expenses (say fuel) just to establish a history.


You don't need to expose the model to figure out how to game it. It's going to be gamed anyways.


Credit card company extends known rich guy tons of credit - news at 11!

Every credit card company probably maintains a list of "big fish" - i.e. famous people - and grants them all a huge credit line.

It's a non-story.


There are two arguments here: A. "is the credit limit algorithm sexist" and B. "we should be able to know how an algorithm decides things and/or we should be able to see the algorithm".

You can't determine A, simple as that. The only way would be if Apple/GS comes out and says something like "he capped out the household credit limit", which Apple/GS would probably only tell him anyways.

B is a different issue, one with lots of room to actually converse over. But it's one which the author seems to pivot to after some reasonable arguments are presented as to why her credit limit was literally $57, making his argument for A very weak.


Credit lines are given based on stated income, not on credit score. A common problem in this industry is that non-working partners state their personal income instead of household income. Even if an applicant is approved due to strength of credit, the bank won’t underwrite a large line for low stated income.


What I don't see anywhere in that thread is how much his wife put in as her income when applying.


It strikes me that longer term, we're just going to end up playing a cat and mouse game with people continually inventing different categories that businesses aren't permitted to 'discriminate' on.

Today it might be gender and race. That makes a lot of sense, because the alternative is to further entrench what are basically inheritances.

But aren't we just going to rattle on through and have the algorithms discover (whether we actually realise it or not) that, say, someone diagnosed with X is less creditworthy than someone diagnosed with Y, or that someone bullied in school is less creditworthy, or whatever else?

The whole point of ML is to extract this sort of information from a dataset.

Is it even possible or meaningful to create an unbiased model? Doesn't a model's profit imply bias, whether we currently consider it morally correct or not?

I'd be interested in an argument to convince me otherwise. My view at the moment is basically 'we spend all of this time building models, and then we have to stop using them because they're socially negative/immoral, but for a brief period shareholder value was maximised'?


That’s not a great argument though - essentially “we can’t prevent an ML model from being biased so we should embrace it.”

Western society has mostly accepted the idea that—outside of some specific cases—we should avoid building systems and processes that systematically discriminate against people on the basis of a selection of characteristics which have historically attracted it. The exact application of this concept, the interpretation of it, and the boundaries of discrimination or protected characteristics will continue to be subject to gray areas and refinement. The rules are not perfect.

But I do think a better solution to the problem (“we have implemented a whizz-bang new technology which is inherently subject to bias”) is to either fix or discard that technology, rather than discarding the concepts of equalities regulation and civil rights.


I think you've mistaken me.

My argument is that if we take the standpoint that we don't just want the metric to be whatever is short-term economically optimal for the designer of the model, we should just stop/ban it now, because we already know that we're going to have to kill it once we actually understand it properly.

It's only allowed now because we haven't figured out the bad things that are happening.

Inventing more and more categories that businesses are not permitted to discriminate upon is precisely the wrong approach. If we're talking about huge companies and not the bakery down the road, we need something that's more like 'you need a very good reason to exclude someone', rather than 'you can exclude someone for whatever reason you like, unless they're a member of a continually extending list of protected categories'.


I think he’s making a lot out of nothing; there’s no way really to prove this, and we only have him and his wife as a sample.


I would take DHH's interpretation of this with a large grain of salt. He seems to assume that living in a "community property state" will be taken into account (presumably then considering his wife to have parity with his income/score) by the credit limit algorithm, but we have no way to know that that's the case.

He also adds "It gets even worse. Even when she pays off her ridiculously low limit in full, the card won’t approve any spending until the next billing period. Women apparently aren’t good credit risks even when they pay off the fucking balance in advance and in full."

Are we really to imagine the Apple/Goldman algorithm has some "if(gender.female){ cc_payment_terms = :discrimination }" sort of code in it?

FWIW I'm a male and have a mid-700s credit score and was denied Apple card approval. I am fairly certain there was no sex discrimination involved in the denial.


It won't have that line in its code. What it very likely will have is some Bayesian algo, or maybe, if they're really high tech, some ML black box that has gender as one of its inputs. And it probably shouldn't have that input.

The whole idea behind decisions like these is that they have to be explainable and ML especially does not lend itself well for that.


Here's a thread on Apple support[1], with many reporting similar experience and an anecdote going the opposite way:

I’ve been having the same issue since Monday 10/14. I paid my balance in full, money was taken from my Chase account right away and cleared next day, but my Apple Card available balance hasn’t updated. I’ve also gotten the same response from support. It could take several days. Funny thing I paid my wife’s card off a few weeks ago and it updated within seconds, same bank account and everything......

[1] https://discussions.apple.com/thread/250676909?page=2


Note that an ML box doesn’t even need gender as an input. Name alone probably gets you 90% of the way there.


That points to the core of the issue. "Fairness" in ML algorithms can be hard to define and assess.

It's easy to say "omit gender from the model", but the real issue here has to do with the _causal_ pathways between your input variables and the output variable.

Since ML mostly works by exploiting correlations between the input and output variables, omitting gender doesn't mean gender's influence is removed. You'll have to omit all the causal pathways from gender -> the output, effectively "d-separating" [1] gender from the output. Whether that's practical or not depends on how well we understand the data generating process.

[1] http://bayes.cs.ucla.edu/BOOK-2K/d-sep.html
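To make the point concrete, here's a toy demonstration (entirely synthetic, not anyone's real model): the model never sees gender, yet scores the groups differently through a single correlated input.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000
    gender = rng.integers(0, 2, n)  # 0 = M, 1 = F; never shown to the model
    # hypothetical proxy: years of continuous employment, with a gender gap
    tenure = rng.normal(15 - 4 * gender, 3, n)
    good_risk = (tenure + rng.normal(0, 3, n) > 13).astype(int)

    model = LogisticRegression().fit(tenure.reshape(-1, 1), good_risk)
    scores = model.predict_proba(tenure.reshape(-1, 1))[:, 1]
    print(scores[gender == 0].mean())  # noticeably higher for men
    print(scores[gender == 1].mean())  # lower for women, gender never an input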


You can simulate it with artificial queries, using the same data except gender and name, and see if women get less or more.
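Something like this, with a stand-in black box (everything here is hypothetical, including the scoring function):

    def probe(model, applicant):
        # counterfactual query: identical data except the gendered field
        flipped = dict(applicant, gender="M" if applicant["gender"] == "F" else "F")
        return model(applicant), model(flipped)

    # stand-in for the real scorer, for illustration only
    black_box = lambda a: 10_000 if a["gender"] == "M" else 500

    print(probe(black_box, {"gender": "F", "income": 90_000, "credit_score": 780}))

One caveat: this only catches dependence on the explicitly gendered fields; pure proxy effects need the group-level comparisons discussed elsewhere in the thread.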


True, you could have an internal table that maps name to gender with a very high probability of being right.
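A toy version of such a table (names and probabilities made up; real systems lean on census or SSA name statistics):

    # hypothetical first-name lookup; value = (likely gender, confidence)
    NAME_TO_GENDER = {
        "james": ("M", 0.99),
        "mary": ("F", 0.99),
        "jamie": ("F", 0.62),  # ambiguous names get low confidence
    }

    def infer_gender(first_name):
        return NAME_TO_GENDER.get(first_name.lower(), ("unknown", 0.5))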


> Are we really to imagine the Apple/Goldman algorithm has some "if(gender.female){ cc_payment_terms = :discrimination }" sort of code in it?

Not intentionally, but it's possible for black box AI algorithms to engineer such a model feature.


We don’t know, and that’s the problem. Machine learning models are entirely dependent on their training data. If the data is biased the model will be too.


If they're using machine learning, then yes it may literally have code like that in it.


Highly doubt Goldman Sachs included gender discrimination in their risk model...do regulators get to see the risk models credit companies use to determine how much credit one can use?


It's well known that there are proxies you can't use when building a credit risk model, e.g. zip codes are off limits because they can be used as a proxy for ethnicity.

Intentional or not, it's possible they could have used something that proxied for gender.


Hopefully anything linked to gender is looked into, for example car insurance? Men typically pay significantly more than women for car insurance.


Charging different rates on car insurance based on gender is illegal in some places, including California.


But there is still a discrepancy between genders in CA.


Car insurers can point at actuarial data to back up their rating decisions.


Are you saying that there are insurers who don't base their rating decisions on actuarial data? Isn't that a requirement?


Parent comment said car insurer.


Maybe GS can too?


Maybe! Nobody knows!


Ok, let's try to put it all into context. Denmark: one of the most egalitarian countries in the world, the 1% of the 1%. I live here and am not a Dane; Danes' vision of the world can be quite bubble-like and skewed. I'm almost sure that he's jumping to conclusions.


Same story from the Associated Press: “NY regulator vows to investigate Apple Card for sex bias” https://apnews.com/8754cf30526b4b94a3ba6e1cfc1d5054


Can you get insurance against algorithm damage? I see a niche here.



And my wife's Apple Card limit is 20x higher than mine. Are they discriminating against men too‽

If I was as rich as DHH, who makes millions per year, I would cancel my credit cards and refuse to do business with companies like Goldman Sachs entirely.

I wouldn't give a crap about 2% cashback and I don't know why he does.

And despite the fact that he's probably wrong about the entire complaint, the least he could do is cancel his cards in protest. Instead, he seems to have accepted the "bribe" (his word) quite willingly.


There could be a manual process here - someone knows that DHH has a >$1 million car and manually edits his credit limit to super high.


Good for him. Of course if he had been a nobody this would never have gotten traction but typically DHH is found on the right side of issues like these and I commend him for speaking out. This is likely one of very many such instances and let's hope that insurance companies, banks and other financial service providers take note.


Speaking out about what? The guy appears to be claiming that a company is discriminating against women with no evidence aside from a single anecdote. Seems like he's some sort of rich 1%er founder, probably not approved under any sort of standard model the typical applicant will fit into, and thus has a crazy high limit.

Is there anything more to this than typical twitter mock indignation and outrage?

Now, if his wife was enormously wealthy before they met and this is the outcome, I suppose that would raise some eyebrows!


As explained in the very first tweet in the thread, DHH’s wife is exactly as wealthy as DHH. That’s part of the frustration expressed.


So, if that's the concern, Europeans are free to pass regulation that requires credit companies to extend the same credit line to husband and wife. Naturally, that will potentially have unforeseen consequences with regard to modeling risk, buyer beware.

Seems like my initial comment was on point then.


Let's assume that there is some valid reason for the difference in credit limits. (I'm not assuming one way or the other, but let's just grant it here.)

This is still bad for Apple and for consumers. The "ALGORITHM" that can't be questioned, inspected, explained, or overruled is a massive failure. Whether it's criminal justice, credit scores, or behavioral predictions, ceding authority to some "AI" overlord can't end in just or fair outcomes. (Scare quotes around "AI", because the black box may just be a chain of if/then statements that conveniently proxy in the worst of our institutional biases. Or it could be sophisticated ML... proxying those same things.)

I don't think we're done hearing about this. I'll be very interested in what Apple has to say about it, or what—if anything—is discovered. And it'll be doubleplusungood if gender is an explicit input into the "ALGORITHM".

DHH is absolutely right to push back against all the respondents that offer plausible explanations. They're all missing the point. The point is that Apple should be providing a concrete explanation for their credit decisions. All credit providers should.


Yep. This is part of why GDPR is so important - the “right to explanation” for automated decision-making processes.


[flagged]


The whining and expectation that big brother swoop in and make everything right.


It's probably DHH jumping to the conclusion that gender bias (whether intentional or not) is the only possible explanation for this credit discrepancy. Or it could be the fact that he expects customer support representatives to understand the inner workings of GS's risk calculations.


[flagged]


Heinemeier Hansson is his two-word last name. It's customary to include a person's first name and full last name in an article.

See also: Eddie van Halen


Thanks. I'm very familiar with van and von used like this, but Heinemeier seemed like a middle name.


Nope. Google his family; they're all HH's.


It's a Scandinavian name. His last name has two parts. David is his first name, Heinemeier Hansson is his last name. That's just how it works.


Thanks, that's helpful.


Because he chooses to go by his full name professionally?

Edit: or that his last name is two names.


This is a problem that needs to be solved, no question about it.


Maybe these CC issuers should just take gender out of the equation and solely base their credit extension on economic and behavioral data points.


That would be the obvious first step and an absolute minimal requirement. The problem is that as these systems get more black-box-y there is little to stop them making predictions that are affected by inferred hidden characteristics. Like, “we don’t take race or ethnicity into account, but our insurance algorithm says that if you have black hair and brown eyes and a name commonly found in black communities and you live in an area with a large black population then you are more of a risk sorry but it’s nothing to do with race honest”.


The chance that there is a line in the algorithm that says ‘women get less’ is basically 0%.

It’s got to be inferences like you said. And those are so much harder to spot.


To the extent there exists a correlation between some characteristic and the outcome, models that exhibit the correlation will be more accurate than models that don't, irrespective whether the characteristic is used as a model feature or not. In the context of risk assessment, "more accurate" means "more likely to survive in the market place".


Implying they don't.



