
Consider your drug conviction example, and let's say the algorithm is linear regression so I can give a very simple explanation.

The model will then be FICO = stuff + (-1) x (drug conviction).

I.e., your FICO score is lowered by 1 point if you've been convicted of drug use.

If you are correct, some enterprising quant can take your hypothesis and rerun the regression with (drug conviction, black) and (drug conviction, white) as variables. If your hypothesis is correct, the result will be:

FICO = stuff + (-0.5) x (drug conviction, black) + (-1.5) x (drug conviction, white).

This is because the bias in drug convictions makes it less predictive for black people.

If you are wrong the coefficients will be equal [1]. If you are right he just made millions of dollars for the bank he works for and probably added $100k to his bonus.
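
For concreteness, here's a minimal sketch of that test in Python on simulated data. Everything in it (variable names, effect sizes, the FICO baseline) is made up for illustration; the point is only that splitting the conviction dummy by group and comparing coefficients is a routine regression:

    # Simulated data only; names and effect sizes are illustrative.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 100_000
    black = rng.integers(0, 2, n)       # group indicator
    conviction = rng.integers(0, 2, n)  # drug conviction indicator

    # Simulate the hypothesized world: a conviction is less informative
    # (smaller true coefficient) for the over-policed group.
    true_effect = np.where(black == 1, -0.5, -1.5)
    fico = 700 + true_effect * conviction + rng.normal(0, 10, n)

    # Split the conviction dummy by group and fit one regression.
    X = sm.add_constant(np.column_stack([
        conviction * black,        # (drug conviction, black)
        conviction * (1 - black),  # (drug conviction, white)
    ]))
    print(sm.OLS(fico, X).fit().params)  # approx. [700, -0.5, -1.5]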

What you are discussing is strictly a statistics problem.

[1] For examples of this type of analysis on social indicators (I don't know of any on credit allocation), see these papers:
https://www.rand.org/content/dam/rand/www/external/labor/sem...
http://egov.ufsc.br/portal/sites/default/files/anexos/33027-...
http://ftp.iza.org/dp8733.pdf




You're essentially making an argument from the efficient market hypothesis:

https://en.wikipedia.org/wiki/Efficient-market_hypothesis

In other words, the algorithm won't be biased because there is a buck to be made by removing the bias.

But the efficient market hypothesis is weird. It's kind of like Schrödinger's cat: if you find some specific instance where it's wrong and publish it, the market corrects itself, so if you observe a bias then it ceases to exist. So on one hand, we know that biases can exist because people have discovered them in the past; it's just that those biases have already been published and are being taken into account. We can speculate that more exist, but we don't know what they are. On the other hand, maybe now we've finally found them all (or the remaining ones are negligible) and the efficient market hypothesis becomes true.

But notice that if you believe the efficient market hypothesis is wrong in a particular case, the thing to do is clear: prove it, because you'll be able to make a lot of money by making the world more just.

Which is why we need transparency. Market participants can't find the bias if they don't have the information, and then the efficient market hypothesis will certainly be false.


I made no claims about the EMH. My only claim is that if bias like what you describe exists, it's something that statisticians rather than politicians need to identify and fix, and that the statisticians in the position to fix it have perfect incentives to do so.

Politicians have neither the ability nor incentive.

Similarly, if someone were claiming that we need political oversight of UI/UX choices, and that American web pages need more red/white/blue to improve conversion rates, I'd suggest that this is a job for web designers rather than politicians. I'd also say that if you just want red/white/blue designs for patriotic purposes, you should openly advocate for that and not talk about conversion rate.


The point is that assigning a FICO score -- any score -- is itself a strong predictor of future scores, both for the person and for those who depend on them. If the current score is unfairly distributed in society due to other historical effects, the computation will actively work to maintain that distribution.

A quant doesn't care how they make their money, and they certainly have no desire to leave their Nash equilibrium.

Again, if you say you want to enact a conservative policy, say so. It's perfectly OK. But don't program conservatism into an algorithm and then claim "it's only math". It isn't.

Of course, being in a Nash equilibrium makes conservative algorithms much easier, as they don't require bargaining with other players, but that doesn't make them any less conservative.


Suppose we have some algorithms which are trying to predict which team will win a game. People are going to bet money based on what the algorithms say, and teams benefit when more money is bet on them.

The red team plays dirty.

First they spread false rumours about the blue team. Most of the algorithms respond by reducing the blue team's chances of winning against the red team, but algorithm #5 correctly identifies the rumours as false, and people who bet using algorithm #5 make a lot of money. Now lots of people use variants of algorithm #5, and thereafter the red team has no luck getting anyone to believe its false rumours.

In other words, if the algorithms are incorrect then we're all better off to fix them.

Next the red team successfully lobbies the government to raise the blue team's taxes (even though the red team makes more money). Now the blue team has less money for training and equipment, which has actually reduced their chances of winning the game. Most of the algorithms successfully predict this, and then the blue team loses the next game.

Your argument seems to be that the red team should be punished for this by having your algorithm stop preferring them. But now you're screwing over everyone else relying on your algorithm, who weren't responsible for the behaviour you're trying to punish. And thereafter no one trusts you to provide true information.

If the red team is playing dirty and the math accurately predicts the result, the problem is not the math.


Your analogy is incorrect because when it comes to social systems, how people bet actually changes the outcome, at least in the long run. So the dynamics are more like this: the team that wins is the one most people bet would win.

In general, the problem is not the math but our interpretation of the result. All the result says is that if we change nothing, this is the likely outcome. By interpreting the results to mean merely "this is the likely outcome", we are depriving people of the opportunity to get out of local optima. Deciding to coordinate and deciding to have society maintain its Nash equilibria are both political choices -- not mathematical outcomes.


> Your analogy is incorrect because when it comes to social systems, how people bet actually changes the outcome, at least in the long run.

That stipulation doesn't make the analogy incorrect. It's analogously an argument that we should bet on the team likely to lose tomorrow because more people betting on them will make them more likely to win next year. That is not a successful betting strategy even if it's true. It's just charity. And there is nothing wrong with charity, but then why use such a convoluted mechanism? If charity is what you want then just give your money to the blue team.

> In general, the problem is not the math but our interpretation of the result. All the result says is that if we change nothing, this is the likely outcome.

Which is what we need to know unless we're specifically trying to change something. And if we can find some particular unjustified bias in the algorithm then we are, and then we know what to do because we can adjust the algorithm. But if we can't find any such thing to adjust, if the algorithm is unbiased and accurate so far as we can tell, then what is it we're supposed to be trying to change?

Just looking at e.g. the racial composition of the output doesn't actually tell you if anything is wrong. It could be (and often will be) that the algorithm is measuring other things that are correlated with race.


> That is not a successful betting strategy even if it's true.

If we can determine the outcome, why is it not a successful betting strategy?

> It's just charity.

For something to be charity, you need to determine that the receiver does not rightfully deserve it and the giver does. That distinction relies on values, and different people may label the same thing as charity or not. I studied history and my views are largely shaped by that. I believe that the current distribution of power and resources in society is largely a result of what you may call "charity" to the people who are now rich (and that's putting it very kindly).

> Which is what we need to know unless we're specifically trying to change something.

As what we do or don't do determines the outcome, any decision is a political choice. Saying that we're specifically trying to change something is no different from saying that we're specifically trying to keep something the same. Today's bet is tomorrow's outcome, and you must place a bet.

> Just looking at e.g. the racial composition of the output doesn't actually tell you if anything is wrong.

We liberals have an assumption. It is no more arbitrary than conservative assumptions. The assumption is that -- unless proven otherwise -- no group of people wishes to yield power over themselves to others, and that different groups have similar capacity for "power-gaining" achievement. Therefore, if we look at the racial or sexual makeup of a certain source of social power and we find a gross disparity, we want to balance it.

> It could be (and often will be) that the algorithm is measuring other things that are correlated with race.

Absolutely. The problem is not what the algorithm tells you, but what you do with the information. Because what determines the future outcome is not the output of the algorithm, but your decision on how to act.


> If we can determine the outcome, why is it not a successful betting strategy?

Because the bet is for what happens today but what the bet changes is what happens the next year or the next generation. It's giving someone a car loan when you know they're likely to default and then pay pennies on the dollar in collections. They will then have paid half the market price for their car and you will have paid the other half. That is clearly not profitable for you so doing it on purpose is charity.
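
To put rough, made-up numbers on that:

    # Illustrative numbers only: a loan made despite an expected default.
    car_price = 20_000                 # financed amount
    payments_before_default = 8_000    # borrower pays a while, then defaults
    recovery = 0.10 * (car_price - payments_before_default)  # pennies on the dollar

    lender_loss = car_price - payments_before_default - recovery
    borrower_paid = payments_before_default + recovery
    print(lender_loss, borrower_paid)  # 10800.0 9200.0

The borrower ends up paying roughly half the price of the car and the lender eats the other half, which is why doing this knowingly is charity.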

> For something to be charity, you need to determine that the receiver does not rightfully deserve it, and the giver does.

Gibberish. Charity means giving to the less fortunate. The robber barons were terrible but you can't arbitrarily redefine words in order to claim they didn't give to charity.

And you're missing the point. I don't care what you call it, if your goal is to have the government take from the rich and give to the poor then just do that. Collect simple taxes and then give the money away. Don't come up with weird complicated economically distorting contortions with large and hard to predict negative externalities.

> As what we do or don't do determine the outcome, any decision is a political choice. Saying that we're specifically trying to change something is no different from saying that we're specifically trying to keep something the same.

Of course it's a political choice, but that doesn't tell you anything about what should be done or not done. And saying you're specifically trying to change something tells people what you're specifically trying to change.

> Therefore, if we look at the racial or sexual makeup of a certain source of social power and we find a gross disparity, we want to balance it.

But your categories are arbitrary. Race is a thing idiots made up. It isn't a real thing, it's a social construct. What justifies groups to be defined as they are? Why don't we also care about the economic disparities (which actually exist) between Irish Americans and English Americans? Or short and tall? Ugly and pretty? Consider whether the reason is because those lines don't map with existing political coalitions.

Everyone who is not born rich is so because of some historical misfortune not shared by their rich-born contemporaries. That's the line that sums up all the other lines. If you want to help the poor, help the poor. If you find a specific instance of racial discrimination (as in causation not correlation) then stamp it out. But Mercedes is not doing something wrong or obligated to do something different just because there is a racial disparity in who can afford their cars.


> Don't come up with weird complicated economically distorting contortions with large and hard to predict negative externalities.

I am not trying to. I am simply pointing out that treating past data as a future predictor and acting accordingly on the assumption that the social structure doesn't change as a result of your actions is a conservative political action; not a neutral one, and certainly not an objective one justified by "math".

> It isn't a real thing, it's a social construct.

A social construct is often as real as it gets.

> What justifies groups to be defined as they are?

You're looking at it the other way around. If someone arbitrarily makes up a group (say, race), and based on that arbitrary bias creates a social structure where that group is discriminated against and marginalized from power, fixing this bias is correcting an unfairness. We're not trying to make up groups in order to fix the situation; the groups had already been made up in the process of doing the wrong.

> But Mercedes is not doing something wrong or obligated to do something different just because there is a racial disparity in who can afford their cars.

I never said that what they're doing is wrong. But a society that is not telling Mercedes to treat data differently is carrying out a conservative policy. I am not saying that changing that particular behavior is where we should best direct our efforts at change, but I am saying that we shouldn't pretend there's anything neutral about acting in this way.

I am also not saying that people (or companies) have a responsibility to individually act against their self interest; that's not how Nash equilibria are escaped anyway, so that's just not going to work. Social action is best done through consensus and compromise. Just as political systems now cooperate on not changing things, they can cooperate on changing them.


> I am simply pointing out that treating past data as a future predictor and acting accordingly on the assumption that the social structure doesn't change as a result of your actions is a conservative political action; not a neutral one, and certainly not an objective one justified by "math".

That doesn't actually mean anything. All action and inaction has consequences. There is no neutral.

An algorithm is objective because it's falsifiable. If you're trying to predict e.g. whether someone will pay back a loan then every time the algorithm tells us to make the loan we can see if it was right. We can also take a random statistical sample from the times it says not to make the loan and do it anyway to find out what happens then. And if the algorithm made bad predictions then we can improve it.
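
As a sketch of that falsification loop (the model interface and threshold here are hypothetical stand-ins, not any real lending system; the key is the random exploration slice among rejections):

    import random

    def decide(model, applicant, explore_rate=0.01):
        """Approve per the model, but also approve a random slice of
        rejections so the 'do not lend' predictions stay testable."""
        p_repay = model.predict(applicant)  # hypothetical model interface
        approved = p_repay >= 0.95 or random.random() < explore_rate
        return approved, p_repay

    def calibration(outcomes):
        """outcomes: list of (predicted_p_repay, actually_repaid) pairs.
        A large gap between the two returned rates falsifies the model."""
        n = len(outcomes)
        predicted = sum(p for p, _ in outcomes) / n
        observed = sum(1 for _, repaid in outcomes if repaid) / n
        return predicted, observed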

Now suppose we have a nice algorithm. Best available information. Prediction accuracy very good. It says the risk of John not paying back the loan is higher than the value of any reasonable interest the lender could charge.

There is now an obvious objectively correct answer to the question of whether the lender should typically loan John the money, if the lender wants to stay in business. It isn't "neutral" because nothing is. But it is unaffected by John's race.

> A social construct is often as real as it gets.

We made it, we can unmake it.

The low income housing people proved that concentrating poverty is dangerous. You put all the low income housing together and it becomes a slum. You put one or two low income units in each of several middle class neighborhoods and they don't.

Grouping a vulnerable population together has the same danger. It draws an imaginary line around them, separating them from everyone else, concentrating their shared troubles and isolating people who need help from people who could provide it.

Grouping humans by "race" is poison, even if you're trying to help.

> You're looking at it the other way around. If someone arbitrarily makes up a group (say, race), and based on that arbitrary bias creates a social structure where that group is discriminated against and marginalized from power, fixing this bias is correcting an unfairness.

You didn't create the groups but you're choosing which ones to care about. There are so many different groups that meet those criteria that you'll never meet someone who isn't in at least one of them. And they correlate and half-correlate and complement each other. You can't balance that.

But you also can't balance it because humans are not a fungible commodity. They're not equivalent. If you have fifty white coal miners, ten black doctors and three white CEOs, and you just average their salaries and say "white people have an advantage" because the CEOs each make a billion dollars, the coal miners are going to disagree with you and have a legitimate point.

Let's even pretend we can balance humans to see how quickly absurdity follows. So the problem we want to solve is that there are a higher proportion of low income black Americans than white Americans. OK, so we either need fewer low income black Americans or more low income white Americans. Possibility: Eject some low income black Americans. Nope, violates our principles and the US constitution. (But the fact that it would otherwise be effective should make you suspicious.) Next possibility: Get more low income white Americans. There are millions of low income people in eastern Europe who would like to be US citizens, so we let them. Hurray! Racial disparity in America solved!

If what we really care about is "balance" then that is an actual solution, but it's also so obviously ridiculous that it demonstrates how that can't possibly be the real problem.

And if it is to be a real problem then it also demonstrates why you can't solve it. Because the opposite of that is what has been happening. A large proportion of existing white American families immigrated here after the abolition of slavery. They, at a minimum, had enough wealth and education to afford passage (in the early days) and gain American citizenship (in modern times). So they've been bringing up the average the whole time. You're trying to balance an open system.


> An algorithm is objective because it's falsifiable. If you're trying to predict e.g. whether someone will pay back a loan then every time the algorithm tells us to make the loan we can see if it was right.

I'm sorry, but that's just mathematically wrong in the presence of feedback: https://news.ycombinator.com/item?id=10874683 When you have a dynamical system with feedback, you can have a perfectly predictive model that is both wrong and unobjective.
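
Here's a deliberately crude toy of that feedback loop (not a model of real credit, just the dynamics): the score gates access to credit, access to credit drives repayment, and the outcome feeds the next score, so the model's accept/reject predictions come out "right" every round even though the initial score difference was arbitrary.

    def simulate(score, rounds=10, threshold=600):
        for _ in range(rounds):
            gets_loan = score >= threshold  # the prediction is acted upon
            repays = gets_loan              # outcome driven by access itself
            score += 10 if repays else -10  # outcome feeds the next score
        return score

    print(simulate(610))  # 710: "good risk", confirmed by the loop
    print(simulate(590))  # 490: "bad risk", confirmed by the loop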

> Grouping humans by "race" is poison, even if you're trying to help.

I agree. But I am not trying to group them. I am trying to undo the damage of the grouping.

> You didn't create the groups but you're choosing which ones to care about. There are so many different groups that meet those criteria that you'll never meet someone who isn't in at least one of them. And they correlate and half-correlate and complement each other. You can't balance that.

That's true in theory. Which is why you go to the historians and anthropologists who seriously study this stuff and ask them. It turns out race and gender are pretty much the big ones, not just in the West but in most societies (though not all).

> If what we really care about is "balance" then that is an actual solution, but it's also so obviously ridiculous that it demonstrates how that can't possibly be the real problem.

Let's say you have a headache. One solution is for me to kill you. Hurray! Problem solved! Headache gone! This shows that your headache couldn't have possibly been a real problem. WAT?

> And if it is to be a real problem then it also demonstrates why you can't solve it.

I studied history in grad school. And one of the constant things in history is that people will always explain why the problem can't be solved. You can go online and read some of the extremely elaborate, pseudo-intellectual explanations why letting women vote would be disastrous, why slavery is good for blacks, and why women should never be allowed to practice law or medicine. Yet social action solved all three. I could try to explain why your reasoning is wrong but it would take too long. In short, you think that the solution must necessarily solve the proximate cause without studying the ultimate cause, and you're wrong about what constitutes the main effect. The "open system" accounts for a very small portion of the problem.

One of the advantages to studying history is the perspective you get about things that today seem natural and immutable to us; you learn that things haven't always been like that. The social structure of society is constantly changing. The only question is whether we want to help direct the change as much as we can, or act like incurious beings that just let things happen to them without understanding them. I always find it curious (though not really; it is a known and familiar phenomenon, which happens over and over) how the people who are otherwise the most curious about nature and technology become primitive and unquestioning when it comes to human society. All of a sudden the model is either too simple or too complex, and we certainly cannot change it. Which is funny, because people had never flown (on wings) before the invention of the airplane, yet people have completely changed the social structure over and over through political activism.


do you mean red team / blue team as in attack / defense or as in red tribe / blue tribe?

or just names for teams?



