Alexandria Ocasio-Cortez Is Absolutely Right About Racist Algorithms (breakermag.com)
245 points by StellarTabi 23 days ago | 367 comments



There is no design of AI that can be simultaneously utilitarian, procedurally fair, and representatively fair.

I'm going to repeat this, since many people struggle with it.

It is _literally impossible_ to achieve the best utilitarian outcome, the most procedurally fair outcome, and a representatively fair outcome.

Anyone designing algorithms will have to make tradeoffs along this frontier.

For a far better argument for the above than I can make in a Hacker News thread, please see these slides from Chris Stucchio:

https://www.chrisstucchio.com/pubs/slides/crunchconf_2018/sl...
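To make that concrete, here's a toy numeric sketch (Python; my framing via Chouldechova's base-rate identity, not necessarily the framing in the slides). For any classifier, FPR = p/(1-p) * (1-PPV)/PPV * TPR, where p is the group's base rate:

    # If the same decision procedure (same precision PPV, same recall TPR)
    # is applied to two groups with different base rates, the implied
    # false-positive rates cannot be equal.
    def fpr(base_rate, ppv, tpr):
        return base_rate / (1 - base_rate) * (1 - ppv) / ppv * tpr

    ppv, tpr = 0.8, 0.7          # numbers assumed for illustration
    for group, p in [("A", 0.3), ("B", 0.6)]:
        print(group, fpr(p, ppv, tpr))   # A: 0.075, B: ~0.26

Equalize the false-positive rates instead and you are forced to give up equal PPV or equal TPR. With unequal base rates you can't have all three at once, which is exactly the tradeoff frontier described above.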


More to the point, “procedural fairness” or “equality of opportunity” (if I understand it correctly) is directly opposed to “representative fairness” or “equality of outcome”. In many situations, prioritising one will compromise the other. As an example, consider Harvard’s admission of Asian students.

Perspectives differ on which one is more important. Personally, I’m absolutely for the former, but it increasingly seems I’m a part of the minority (or the silent majority).


I think the most useful way to think about this is that unequal outcomes almost always lead us back to unequal opportunity upstream.

If, for example, you made admission to elite colleges (which is a whole other can of worms) ignore ethnic backgrounds, you'd get a result where far fewer applicants of certain ethnicities would be accepted. In an arbitrary sense, this could represent "equal opportunity" if you believed that these people materialized from the ether the day they submitted their application. But if you take into account their lives leading up to submitting their application, "equal opportunity" requires counteracting powerful structural factors that make it harder for people of some backgrounds to get into college.

The reason Harvard admission of Asian students is cited to such a tiresome degree is that it's a rare case where there isn't a clear upstream justification. That's why there's a big lawsuit and Harvard is getting tons of awful press about it. But it also lines up with the fundamental problem we're trying to fix with affirmative action. Harvard and its admissions systems were invented by white people, and they have proven to relentlessly favor white people, who then have easier lives and more money they can use to pay SAT tutors to help the next generation of white kids get into Harvard. It's not surprising that Harvard would warp the affirmative action system away from its original purpose, to correct for historical oppression, in order to... help white people get into Harvard. The fact that Harvard is discriminating against Asians is not a reason to implement a fake-fair system that has the end result of discriminating against all other non-white people.

It's theoretically possible that there are some other cases somewhere that show large inequality of outcome but have no unfairness upstream. But I'd rather have the task of discovering and fixing those rare cases than the present system, in which being born with a certain background gives you radical advantages or disadvantages on the basis of, basically, ancestral violence.


Can you please explain how Obama’s daughters are disadvantaged based on ancestral violence?

The point is, racist discrimination, even if “positive”, produces unfair outcomes. Obama’s daughters don’t need even more advantages. If you want to help kids from disadvantaged backgrounds (and I think that’s a worthy cause, even if you ignore “fairness”, simply because it increases the odds of finding the next Ramanujan and improves the well-being of the society), you should help people from disadvantaged backgrounds. Admitting them to degrees they don’t deserve isn’t actually helping them - it’s mainly just fixing superficial statistics and perpetuating harmful stereotypes (e.g. people not wanting black doctors because they suspect they weren’t subject to standards as strict as those for white, let alone Asian, doctors). To ensure actual equality of opportunity, you need to help them when they’re young, improve their school system, help their families, etc. Everything else is just a bandaid, at best masking the underlying problem.


Oh wow!

Yes, that's very easy! Let me count the ways. First of all, Obama's daughters are much more likely to be murdered, abused or falsely accused by police than they would be if they were white. Even iconically famous black people are sometimes mistreated by police officers who don't recognize them. I'd say being at a high risk of murder counts as a disadvantage! I don't even need to draw a line back to ancestral violence for that one—that's just regular old present day violence.

Second, although the Obama sisters enjoy substantial family wealth due to their parents' incredible achievements, the fact that they were born black means that, statistically speaking, they would have been far wealthier if they'd been born to a comparably wealthy white family. For example, if the Obamas are the Xth richest African American family, they'd be far richer if they were the Xth richest white American family. Bingo! Disadvantage! Why does that wealth gap exist? There are many recent crimes, many tied to violence, that exacerbated the situation. But we can start by looking at the fact that Black people, including Obama ancestors, were violently forced to build wealth for white people without any compensation.

There are many more ways in which contemporary American society puts extra pressure and psychological burden on African Americans, yes even rich African Americans, all of which can be traced back to violence in our history, and all of which clearly affect the outcomes we associate with "success."

Whether you admit it or not, you're arguing that the person who "deserves" to win in our economy is the person whose ancestors perpetrated horrific crimes against his competitor's ancestors and who doesn't have to face the constant daily effects of ongoing racism in our society.


> First of all, Obama's daughters are much more likely to be murdered, abused or falsely accused by police than they would be if they were white.

I don't think this is true. If you compare Obama's daughters against Bush's sons, probably. But against a random white person with no media exposure, unlikely.

> they would have been far wealthier if they'd been born to a comparably wealthy white family.

Again, why are we comparing against comparably wealthy white families? The reason Obama's daughters were brought up in the first place is that we recognize that they are incredibly privileged compared to the general population, and they don't need more advantages. We don't care whether they are privileged compared to Bill Gates's children or someone comparatively more privileged.


If you want to understand how anti-black bias affects even wealthy African Americans, you have to use methods like this. The GP was saying, basically, "if the Obamas succeeded, it doesn't matter that their competitors were cheating." You can be disadvantaged and still be successful.

Why would it be useful to compare the most famous and rich African Americans against "a random white person?" Is that the way we prove that African Americans are not disadvantaged, by comparing an anomaly from one group to the average of another? We are comparing comparably wealthy people in order to eliminate other forms of difference so we can see just the effects of ethnicity...


The responses to this, articulated and expressed as downvotes, are extremely telling.


What if Obama's daughters don't find being more likely to be murdered to be a worthwhile exchange for whatever advantages you're offering them? It doesn't seem like a fair negotiation to not include them.


We are talking about large, diverse groups and their median or average.

That said, usually the correct approach is not discriminating based on race, but discriminating on the actual "disadvantagedness" of the path taken by the applicant.

Incidentally, that's how it's done around here for university (you get points for being disabled, poor, living far from the campus, and so on).


It's great to account for what structurally led to the moment of a decision. What's tricky is that it's hard to get the right level of granularity. If you don't have data on how historically disadvantaged a given person's path was, you're going to get false positives and false negatives, and you can't make the system fair across both.

Changing the discussion to be structurally fair, not just transactionally fair, you still have the same impossibility issues arising if you don't have an oracle model with omniscient features. It's the same issues, but with a larger scope of features and outcomes.


With regard to Harvard et al., they have a harder time doing positive discrimination since some court decisions in recent years. All in all, they look at extracurricular stuff to try to give extra points, because they can't discriminate - even positively - based on race, as far as I know. (The legal history of affirmative action is pretty long: https://en.wikipedia.org/wiki/Affirmative_action_in_the_Unit... )


Equality of outcome seems like a mechanistic concept of the industrial age of standardized mass production.

Equality of opportunity seems much more organic/humanistic because it allows for the natural variation in people's values and personality.


The problem with equality of opportunity is that it's invariably applied in a way that ignores some very racist and uneven starting conditions and the effects of the past. Consider as a small example how schools are funded in the US, which is largely by local property taxes, so poorer places have poorer schools. These areas are heavily skewed towards minority populations because of some extremely racist issues in our past like redlining, which kept blacks and other minorities out of the housing boom that drove a lot of middle-class wealth growth through the 60s to 00s. So we build on this cycle generation after generation where kids in poor areas go to poorly funded schools and experience poor educational outcomes.

Even if we solve the funding problem it's still not actually equal opportunity, because the poorer kids don't get the same support outside of school due to parents working longer hours at lower-paying jobs because they got a bad education, or living in a crummy house with lead paint (because that's all their parents could afford), or living in a town with contaminated water (Flint is just the one to grab headlines; cities all over the country have abysmally high lead levels). Even if you want to blame community culture for some of these issues the question becomes where do you think that culture comes from, perhaps just maybe it comes from decades of lack of opportunity and systemic abuse?

How do you possibly hope to close the gap in our system where wealth almost invariably builds on wealth?

tl;dr: Equality of outcome doesn't mean everyone has to have the exact same life, it's just acknowledging our opportunities are directly tied to the outcomes of the generation before us.


> Consider as a small example how schools are funded in the US, which is largely by local property taxes, so poorer places have poorer schools.

Varies from state to state. In CA, only about 25% of K-12 funding comes from property taxes: https://ed100.org/lessons/whopays

I know of several school districts nearby that contain nothing but relatively new $1mm+ houses and abysmal school ratings. I would believe that there's some kind of complex nonlinear correlation between property values and quality of school, but it is very clearly not an easy "more expensive houses == better schools" calculation.

> perhaps just maybe it comes from decades of lack of opportunity and systemic abuse?

A fine point, and certainly not limited to any one race. I'm all in favor of helping those with fewer opportunities growing up, and I don't really care what race they are. If they happen to be mostly $RACE, it doesn't matter to me.


By changing the way schools are funded, and helping poor students. No racial discrimination necessary.


My point was principally about the issues with 'equality of opportunity' in the US because of the massive inequality already present.

On your note though, yes, with unlimited money we could do that; given a limited budget, though, targeting the assistance to those most affected makes sense. A program like that also does little to nothing for people who aren't very young when it's started. And ultimately that's an equality-of-outcome-style solution, because it's giving additional resources to some.

(this is more about things like the Harvard affirmative action program) I think there's also a meaningful distinction to draw between inclusive discrimination like affirmative action and exclusive discrimination of groups. I know it's an extremely tough line to define at a government level but they feel like very distinct categories of activities.


> meaningful distinction to draw between inclusive discrimination like affirmative action and exclusive discrimination of groups.

The only distinction I see is in the rhetorical spin that people make. A preferential treatment of a set of people is exactly equal to a discriminatory treatment of the complement of that set of people (e.g. positive discrimination of black people equals negative discrimination of non-black people).


"The problem is equality of opportunity is it's invariably applied in a way that ignores some very racist and uneven starting conditions and the effects of the past."

Then that's where the energy ought to go. Adjusting the end result (e.g. by adjusting the college admission criteria based on race/ethnicity) doesn't really fix the root cause; it's a band-aid on America's broken leg.

If the starting conditions are unequal, then that is by definition not equal opportunity, so correcting those conditions should be Priority 1.


>our opportunities are directly tied to the outcomes of the generation before us

They are related, but "directly tied to" seems hyperbolic and deterministic.

Frederick Douglass is an obvious exceptional example of a human being transcending the outcomes of previous generations.


Sure, he did great, but individual examples like Frederick Douglass don't negate the fact that intergenerational incomes are pretty inelastic in America. It's a statistical problem across all slices of the population.

https://en.wikipedia.org/wiki/Socioeconomic_mobility_in_the_...


I am also interested in equality of opportunity - but how do I measure that everyone has an equal opportunity to achieve something? How do I measure the opportunity to consume nutritious food? The opportunity for access to the same healthcare?


It goes deeper than that if you go into cultural biases prevalent in poor (regardless of race) communities. Cigarettes, scratch-offs, crappy food... what counts as opportunity?

Is being raised a certain way tantamount to having 'less opportunity' to make better micro-decisions?


In order to create a situation where people have equality of opportunity, it's often necessary to ensure that the individual had an equal outcome that creates the necessary starting state.


Conflating those two inherently different concepts for the sake of convenience will not net good systems. Equality of outcome is NOT a substitute for equality of opportunity. Even if attempting to create as equal as possible of a starting state has some surface similarities with equality of outcome, it's important to maintain the distinction.


Doesn't this mean Harvard should accept everyone? If equality of outcome is important - and I'd argue that it is to a much greater extent than most people would consider acceptable - doesn't it stand to reason that we should care about it at the level of individuals, not merely populations? What's strange to me is that people seem to think that equality is only important when talking about small differences across arbitrary social constructs, while entirely ignoring much larger gaps in individuals.


In theory I agree but in practice that often leads to racism. For example, some people support discrimination for blacks (& against whites) at e.g. college admissions, purportedly to offset their (statistically) worse family background (in terms of education and wealth). Clearly, that's a terrible and racist solution, with the better and obviously correct solution being... to support people from worse family backgrounds! Instead of targeting the proxy for the problem, you solve the problem itself.


The flaw with this argument is that if you recognize that, for example, black people are generally worse off in family background than white people, then you end up back at the point where you on average offer more aid to black families than white ones, which by your argument would be racist.

You're not solving the problem at all by attempting to sweep racism under the rug; quite the opposite in fact because it fails to address how race affects family background and discrimination in the first place.


> which by your argument would be racist.

You seem to have misunderstood my argument; maybe I wasn’t really clear in that part. I think a policy that helps lots of poor black people, some poor white people, and doesn’t help Obama’s daughters (because they don’t need help), is a good, fair, and most importantly effective policy. A policy that doesn’t help poor white people is racist. A policy that helps Obama’s daughters is ineffective (and I don’t want my tax money to go to that cause).


A policy that affects one group of people more than others by racial lines sounds like a racist policy, don't you think? Which was my entire point. You're in effect, arguing for a policy that is racist by your own prior argument.

We have affirmative action policies and the like because we admit that some groups of people are disadvantaged due to a multitude of circumstances and we attempt to do things in such a way as to ensure that they can mitigate those circumstances.


I think you're conflating the two different definitions of racism here. In one, something is racist if it has different impacts on people of different races - not due to their race, but due to some correlated variable. In another definition of racism, something is racist if it uses race as a determining factor in behavior or treatment.

Providing aid to low-income families is racist by the first definition, but not by the second. But (damn near) everything is racist by the first definition.


Why on Earth would supporting poor people regardless of their race be racist by anyone's criteria?

It would seem you are the racist here, thinking a person needs support because he is black, not because he is poor.


Think of it like this. If you wanted to support all individual poor people in America equally, you'll end up helping a disproportionate number of black people, because black people are disproportionately poor. So it's actually reasonable to say "black people need a disproportionate amount of help" if you're trying to help poverty in an unbiased way.
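A back-of-envelope version, with population sizes and poverty rates that are purely assumed for illustration:

    # Race-blind aid to every poor person still reaches groups at very
    # different per-capita rates when poverty rates differ.
    pop          = {"group_a": 40e6, "group_b": 200e6}
    poverty_rate = {"group_a": 0.20, "group_b": 0.08}

    for g in pop:
        aided = pop[g] * poverty_rate[g]
        print(g, f"{aided / 1e6:.0f}M aided ({poverty_rate[g]:.0%} of the group)")
    # group_a: 8M aided (20% of the group)
    # group_b: 16M aided (8% of the group)

The policy never mentions group membership, yet group_a members receive aid at 2.5x the rate of group_b members.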


That is imprecise to the point of being unreasonable.

In your example help isn't given to all black people, only to the poor ones.

If you consider help given to poor black people vs. help given to poor white people, it becomes proportional.


You need to reread my argument if that's what you came out of it with.


[flagged]


It's clear you're not here to actually argue in good faith, which is a shame.


Why? There are many examples, perhaps even a majority, of mobility across outcomes. People don’t have to be born rich to become rich, nor poor to remain poor. So many millions of people have taken advantage of equal or even unequal opportunities to create better outcomes for themselves and their descendants; why now are those successes all invalid, and only by shoveling equality of outcome onto the “disadvantaged” are we able to make things fair?


Do you believe getting rid of accessible parking spots makes things more fair...?


Are you saying that black people are disabled in a way just because they are black?


...what? No. I'm asking about equality of opportunity vs. outcome. Parent said he's for equality of opportunity rather than outcome, which accessible parking most definitely is not. I'm suggesting we need more nuance than just categorizing one as desirable and one as undesirable.


I mostly disagree with what “gdy” said but he might have a kind of a point. We, the society, choose to unfairly advantage disabled people, recognising that even with all their advantages, their lives are still pretty shitty. On the other hand, there’s plenty of black people who can, and do, rise to the top of whatever success hierarchy you want, so they don’t need to be treated better; they just need to be treated the same. (As a thought experiment, would you rather be black like Obama or Kanye, or some disabled guy like Hawking or Roosevelt?)


I don't think this contradicts what I wrote. Like I wrote in the previous comment, all I'm saying is we need more nuance on whether equal opportunity or equal outcome is better, because it's situation-dependent, and often some combination of both. On this point you guys are agreeing and then analyzing the situation which is... exactly what I've been saying.


You got my point correctly and I'm interested to know what you disagree with.


That's a fair point I agree with.


Take heart, you're in the silent majority.

https://www.theatlantic.com/ideas/archive/2018/10/large-majo...


Contrary to what you read online, very few people should discuss those concepts in contrast with each other. Almost all of the public debate is contained in the former. Therefore, being for the former doesn't really say much at all, and any encyclopedia will tell you why.

Edit: I will make it easier for you. If you do not understand most of this page[0] you are ignorant of one of the most fundamental debates in society. If you did, you would understand my position. The Wikipedia version might do as well [1]. It is ironic how people who don't "believe in outcome" are often ignorant.

[0] https://plato.stanford.edu/entries/equal-opportunity/
[1] https://en.wikipedia.org/wiki/Equal_opportunity#Theory


I was interested in the slide deck and wanted to hear the author expand on some of his thoughts, so I found a video of his presentation, if anyone else is interested.

https://www.youtube.com/watch?v=Zn7oWIhFffs


Wow, this is really good. I hadn't thought of some of these things, and he raises some points that should have been obvious to me. For example: you aren't guaranteed to be able to maximize two functions at the same time.
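That point is easy to see with two made-up objectives over a single decision threshold:

    import numpy as np

    t = np.linspace(0, 1, 101)      # candidate thresholds
    utility  = -(t - 0.3) ** 2      # peaks at t = 0.3
    fairness = -(t - 0.7) ** 2      # peaks at t = 0.7

    print(t[np.argmax(utility)], t[np.argmax(fairness)])  # 0.3 vs 0.7
    # No single t maximizes both; every choice trades one off for the other.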


This sounds an awful lot like voting impossibility theorems (Arrow's and its generalizations)... are they related?


I don't think so. This is simpler, it's just stating that an optimization problem on a constrained information set must be weakly dominated by the full information set.

Arrow's theorem is a more domain specific and elegant proof regarding political properties that can be simultaneously held.


More like the orthogonality thesis, I think. These different value axes are totally unrelated, and building an algorithm that maximizes one does not necessarily mean that it maximizes the others as well, and this mismatch can have disastrous consequences, a la "The Paperclip Machine"


But it is possible to make one that is representatively fair first and foremost, with procedural fairness and utilitarianism as 2nd and 3rd priorities.


representatively fair first and foremost, with procedural fairness and utilitarianism as 2nd and 3rd priorities

This sounds like a description of Harvard's admission policies with regards to Asians.


Are Asians underrepresented at Harvard? Or if you’re referring to something else, can you elobarate?


As I understand it, Asians are "underrepresented" at Harvard relative to what they would be if Harvard judged candidates based on SAT scores, scholastic achievement, and other seemingly-objective-and-reasonably-sensible metrics. They're "underrepresented" relative to the pool of candidates who are qualified to be at Harvard. But they are "overrepresented" compared to the general population of the United States. Which comparison is the right one?


Surely their high SAT scores and over-representation at Harvard compared to the general population are both caused by some heretofore unidentified analogue of white-privilege - what other explanation is there?


'Surely'? Why must it 'surely' fit into a preconceived ideological straitjacket? (One whose adherents are like 90% white, FWIW).

The answer is that asian immigrants are a self-selected subset of 1.5B asians in asia.

The social justice mental toolkit doesn't have an answer to this anomaly. American census numbers and the assumption of a normal distribution within them don't square with the highly-motivated subset of Asians who immigrated.


Given that:

    1) Cultural factors often raise people from poor to middle class
    2) The above often happens with little monetary input
    3) Culture is transmissible

and that, in contrast, giving people money and resources has a very poor track record of increasing wealth long term: why aren't social justice people all about the transmission of wealth-building cultural knowledge? If anything, on the whole, they seem ideologically in favor of policies that erode the transmission of wealth-building cultural knowledge.


I hope you're not asking me, because I don't understand the social justice people at all.

My guess is a combination of social media obsession and limited life experience cause them to optimize for 'sounding woke' over 'things that help real people'.


I think deogeo was being sarcastic, though see also Poe's law.


Aren't all immigrants a highly-motivated subset of their home countries/continents?


> if Harvard judged candidates based on SAT scores, scholastic achievement, and other seemingly-objective-and-reasonably-sensible metrics

I think this type of statement represents the main problem with folks who run down the “Asians at Harvard” rabbit hole.

People assume that Harvard admissions is all just based on who is the biggest brainiac based on grades and SATs, and that’s simply not the case at Harvard or any other elite school.

Decent article on the topic:

https://www.washingtonpost.com/education/2018/10/21/dockets-...

High SAT scores and high grades get you merely a 2 (1 being best) in one of four categories (academic, athletic, extracurricular, and personal/leadership). If you get a 2 in three of four categories, it still only gets you a 40% chance of admission.

That said, getting a 1 in any category gives the applicant a bump to anywhere between 48% and 88% chance of admission. Note that only about 100 people per year get a 1 on academic, and that still only gives them a 68% chance of admission if that’s their only 1. Standards are sky high.

To anyone who has high scores and great grades in a rigorous curriculum and wants to apply to an elite school, I advise them to differentiate themselves in ways other than academics — it’s a much easier path to acceptance. In my opinion, these non-academic selection criteria are one of the things that make top schools amazing places.

Specifically:

- Great grades and great SAT scores (750+ in Math and Verbal) are a baseline for being considered.

- The more of the following list below an applicant can check off, the more likely they are to get admitted. If they have several high quality checks, the requirements for grades and scores can actually decrease to a surprisingly low level:

(rough order of importance... non-exhaustive list)

1. Recruited athlete.

2. Diversity candidate (race).

3. Notable academic achievement (e.g., publish a paper).

4. Demonstrate leadership via some verifiable and substantial project.

5. Be a skilled-but-not-recruited athlete (esp. in non-varsity sports), entertainer, or person of interest (e.g., children of famous or powerful people). Accolades or championships help.

6. Create a substantial and successful business or non-profit.

7. Diversity candidate (geographical).

8. Have an incredible personal narrative (e.g., Malala) — this might be more important... it’s hard to tell.

9. Be the child of an alum.

10. Be the child of a substantial donor (also hard to rate in importance... it really only comes into play if they aren’t admitted normally).

11. School/department applied to (depends on university).

It boggles my mind when someone (Asian or otherwise) points to great grades and great SAT scores and gripes about not getting into an elite school. Elite schools explicitly state that they are looking for well-rounded applicants or “multidimensional excellence”. Why is that person/applicant surprised when they only focused on one dimension and did that in a non-exceptional way (compared to other applicants)?

Anyway, this is a topic I know quite a bit about. Please feel free to ask follow-up questions.


Then why do their admissions numbers line up neatly into racial buckets that match exactly what a social justice person would like to see?

I'm not saying you're wrong, but when their internal paperwork seems to indicate an ASTOUNDING difference in "personality" for blacks vs Asians, at a large-numbers level.. I mean do we really buy that? Can we even just consider for a moment how racist that sentiment is? Against both races?


1. Asians are over-represented by ~4x versus their percent of the population (5.6 in US and 22.9 at Harvard)... not that this is a useful metric. Is this a stat that SJWs are happy about? I genuinely wonder.

2. Affirmative action is a thing. If you stop the so-called “Asian quota” (e.g., by going race blind), then any slots gained will most likely result in losses for slots from other ethnic minorities. Some people believe that shaping the student body a certain way is good for the university. I personally agree with that general statement, but the specifics are not easy to hash out — there are many trade offs, and it’s not easy to determine where the magic cutoff is for too few or too many.

3. If Harvard goes race blind, don’t believe for a second that it will turn out like Stuyvesant or Berkeley. IIRC, when the change happened, Stuyvesant did not focus on the “multidimensional excellence” that most other elite schools do. Berkeley just lets in too many people to need to review to that level of detail. Harvard and other elite schools might see a very small Asian bump, but my guess is that it won’t be what people expect.

Note that I strongly advocate for fairness in the admissions process for all people, and I definitely think that some Asians get stereotyped poorly. That said, I also think that a lot of people, Asians as well as non-Asians, simply don’t make efforts to shape themselves in a way that optimizes their chance for elite school admission.


Thanks for your response.

I can't tell if you're disputing my assertion that they first decided what their racial buckets "should" look like, then stack ranked within each bucket and fudged the numbers to make it look less discriminatory.

It's that last part that I have a real problem with. If Harvard wants to say, "we want x% black and y% Asian, etc".. ok, I guess, but if they're going to say "black people have lots of personality to make up for their lack of science skills", and vice versa to Asians, that's just racist as hell to everybody involved. Own your decisions.


Since it's a university, I'd assume lots of students applied from their home countries and migrated later. That makes the 4x argument pointless.


While there are foreign nationals who come in as undergrads, it doesn’t sway the percentage that much (x domestic and 3x international). Most undergrad Asians I’ve met at elite schools are US residents/citizens who have been in the country for a while (i.e., not just someone who came for high school).

That can be quite different for grad schools, though.


Thanks for weighing in with the detailed information.

I actually had extracurricular activities in mind as one of the "seemingly-objective-and-reasonably-sensible metrics". (It can be harder to judge—e.g. "president of the choir club" could mean anything from "four people got together twice a month to sing" to "organized concerts and musicals in collaboration with the theatre department"—but maybe candidates describe more precisely what they did.) Legacy and athlete stuff make some sense in terms of the university's financial self-interest... most of the other things can be interpreted as demonstrating ability in some way (academic and leadership ability), or motivation/"Big 5 trait Conscientiousness". The "incredible personal narrative" seems like a wildcard that could point to lots of different things, but in Malala's case... in general, public relations could be a reason to take famous people. Department applied to, I assume that's because some departments have lots more applicants or don't need as many people than others. It's the racial and geographical diversity things that seem out of place.

Geographical diversity question: I've heard that, specifically, colleges like to have at least one person from all 50 states. Why is that? Just because it sounds cool? Is it really for some kind of viewpoint or background diversity? (A counterargument to that, at least for elite schools, is the same as for racial diversity: if you think the people from e.g. South Dakota who make it into an elite school are at all typical of South Dakotans, you've got another think coming.)

Is it so they'll have alumni everywhere and therefore recruiting help and alumni interviewers everywhere, which wouldn't be efficient if all their alumni were in one place? (How many South Dakotans who go to an elite school actually stay in South Dakota after graduation? That doesn't seem to make sense.)

A quick google merely turns up the quote "“Every school likes to be able to stick that little pin on the map and say they have kids from all 50 states,” said Andrea van Niekerk, a college admissions consultant and former associate director of admissions at Brown University", which is basically "because it sounds cool".

(I also note that "a student from every state" could mean, at most, accepting 49 students you wouldn't have otherwise—probably less—whereas the extra 19% of Asians that Harvard didn't admit (I mention 42% vs 23% figures later), of a class of 1600, means accepting 300 students they otherwise wouldn't have—so the every-state policy has a smaller impact than the racial-diversity policy, and maybe the impact is small enough that indulging that whim is ok.) Do you know the rationale, for this and/or for geographical diversity in general?

Historical question: I've heard that the reason extracurriculars were originally introduced, as a criterion for candidates, was as an excuse to accept fewer Jewish students. Here's an article asserting this for Harvard specifically: http://web.archive.org/web/20190109180814/https://www.econom... Do you know if this is true?

SAT ceiling question: The maximum score you can get on the SAT is low enough that there are plenty of people that max it out, and plenty of others who would have maxed it out except they made one or two silly mistakes. I suspect that's why you say 750+, in that differences between anything above that are just noise; and that this still leaves a large pool to sort through. (As a data point, would the set of perfect-score SAT applicants still be too large to fit in a Harvard undergraduate class?) If the SAT had a higher ceiling, say going up to 900 or 1000 somehow, where the difference in actual ability between the average 850 and the average 750 was the same as between the average 750 and the average 650, then do you think they'd weigh SAT scores more heavily?

Olympiad question: are academic contests (math olympiad, physics olympiad, etc.) counted heavily in the "notable academic achievement" category? If so, would you say they substitute for a higher-ceilinged SAT—that if the latter existed, then they'd put less effort into researching and evaluating the olympiad contests?

Caltech question: I've heard that Caltech, at least as of some years ago, basically just selected for academic scores, and they had 0.6% blacks. Looking at this year, they have 40% Asians, compared to Harvard's 23%. Come to think of it, I've also heard that the University of California system is not allowed to implement affirmative action, and this year ... they break "Asian" down into subgroups, but if I add together "Chinese, Filipino, Japanese, Korean, Other Asian, South Asian, and Vietnamese", I get 42%. Anyway, any comments about Caltech or UC?


Are Asians underrepresented at Harvard? Or if you’re referring to something else, can you elobarate? (sic)

If you re-read my comment, it's clear that I'm not talking about rates of representation. You are trying to bring that in. We're talking about the process, be it a policy implementation or an algorithm.

It's the age-old "equality of outcome" vs. "equality of opportunity."

https://archive.org/stream/HarrisonBergeron/Harrison%20Berge...


I wasn’t trying to bring anything in, simply curious exactly what you meant. You’re right though: if I had read your comment more closely I would have realized my question was irrelevant. Thanks for the link.


Yes, it is. But if you're designing an algorithm that assesses credit risk and you do this, you will lose money compared to a company that does not do this.
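For anyone who wants to see that in miniature, here's a toy simulation (every number assumed; real credit models are nothing like this simple):

    import numpy as np

    rng = np.random.default_rng(0)

    n = 1000
    group = rng.integers(0, 2, n)                  # two applicant groups
    p_default = np.where(group == 0, 0.05, 0.15)   # assumed true default rates
    defaulted = rng.random(n) < p_default          # realized outcomes

    def profit(idx):
        # +1 on a repaid loan, -5 on a default (assumed economics)
        return np.where(defaulted[idx], -5.0, 1.0).sum()

    budget = 100
    # Unconstrained lender: fund the 100 lowest-risk applicants.
    unconstrained = np.argsort(p_default)[:budget]
    # Representation-constrained lender: fund 50 from each group
    # (risk is uniform within a group here, so any 50 will do).
    quota = np.concatenate([np.flatnonzero(group == g)[:budget // 2]
                            for g in (0, 1)])

    print("unconstrained profit:", profit(unconstrained))
    print("quota profit:        ", profit(quota))
    # Expect roughly 70 vs 40: the constrained book is more representative
    # and systematically less profitable in this toy world.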


You might even cause a financial crisis along the lines of the 2008 disaster.


As long as you're too big to fail and get bailed out, it's someone else's problem.

EDIT: I didn't say this was just or right, just the current state.


Frankly, private industry has been lucky up until this point that anything other than your financial history has been allowed to factor into your credit score or credit worthiness __at all__. I wouldn't be surprised if it gets limited to solely financial factors by regulation within the next 10-20 years.

In terms of regulation, I think they shouldn't be allowed to give the algorithms anything but loan amounts, payment schedules, and payment history. Don't give them locations. Don't give them age, name, race, or gender. Don't even give them the names of banks, since this too can be used to imply geographic location and thus race, etc. Yes, people with no credit history will get terrible scores, but that's the point of a credit system. It should be based solely off of your merits and what you've actually done, not off of what people in your area/situation tend to do based on statistics. Then it's just statistics creating statistics, and socioeconomic mobility freezes.
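A sketch of what that restriction could look like (field names hypothetical; the real mechanism would be regulation, not a filter function):

    # Only explicitly permitted repayment fields ever reach the model.
    ALLOWED_FEATURES = {"loan_amount", "payment_schedule", "payment_history"}

    def restrict(application: dict) -> dict:
        """Drop every field except the whitelisted repayment data."""
        return {k: v for k, v in application.items() if k in ALLOWED_FEATURES}

    # restrict() would silently discard zip code, bank name, age, and any
    # other field that could act as a proxy for a protected class.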


I'm not sure what you mean; are you saying an otherwise-creditworthy person could get denied a loan based on other factors?


Right now, yes: companies are starting to use things like your Facebook profile as a basis for denial.


That’s madness, do you have a link?


The cost-to-benefit analysis of wheelchair ramps doesn't look great either (compared to not building one). Focusing on profit alone leads to some pretty terrible outcomes for people.


Taking on credit when you can't pay it back doesn't just hurt the bank; it hurts you too.

Getting loaned money is not a right, and if you are irresponsible, it's harmful.

So if lending more to a certain segment of the population hurts the bank's profit, I'd also expect it to cause more problems for that population segment too.


Which is why we need regulation forcing you to do it


Why is representatively fair the gold standard? What if in 10 or 20 years we view one of the other methods of optimizing as a more "fair" way (i.e. what if our current consensus on what is best is not absolute truth, but a social and cultural bubble)? Then all our algorithms will be sub-optimal (at that time).

Just as different cultures value these differently, it makes sense that one culture that changes over time might as well.


But you have to be sure the groups you are equalizing across are fair and valid and comprehensive groups.


[flagged]


What are you trying to accomplish here?


I think it's a demonstration of how identity politics can result in ever more fractionated interest groups. If you keep going with the finer and finer divisions, one might suppose that one must stop at the level of individual people. (1 Kings 3:16-28 notwithstanding.) Then one needs to come up with a way of adjudicating between the conflicting interests of different individuals, based on individual rights.


We've banned this account for trolling. That's what flamebait on inflammatory topics is, intentionally or not.


He has a point, though.


It seems like a shallow and obvious one to me, but if I'm mistaken and there's more there there, it deserves to be expressed substantively and respectfully. Otherwise the only effect is to rile up agreers with positive charge and disagreers with negative charge—just what we don't need.


I consider that argument to be a case of "Reductio ad absurdum"[0] and don't see anything disrespectful in it.

[0] https://en.wikipedia.org/wiki/Reductio_ad_absurdum


It seems like a shallow and obvious one to me

That's really unfortunate.

I could go on quite a bit about how, as a man of color, it gets fatiguing having certain conversations or trying to express certain points of view in these types of socio-political debates lately (to my dismay, sometimes right here on HN, a forum that holds itself to a certain standard of debate but seems quite eager to throw it out, convenience permitting), but looking around at a lot of other comments in this very thread that seem to run contrary to whatever personally defined "truths" are out there, it'd probably just get downvoted and flagged as well.

I'll just leave it like this: if it seems "shallow" to you, dang, then probably it's time for some deep personal introspection, because like gdy, I think shadowbained's point isn't wrong and I don't fault him for the fatigue that's probably in his soul, which results in a delivery style that doesn't fit a mold of pretend decorum. A mold manifested and curated by political-egotists, IMO. (This last point made as broadly as possible)


I don't think you're taking into account how frequently these things get repeated. Anything so frequently repeated becomes tiresome.


The irony.

I'll try not to let it be too much of a burden on me as I sleep tonight..


Many more things are repeated just as frequently, but somehow they don't result in banned accounts.


Sure, but it's not hard to understand why. Race war is one topic, array indexing is another. Which is the greater risk to this site? If the answer isn't obvious, please read https://news.ycombinator.com/newsguidelines.html.


I don't really think that comment presents any risk to HN.



new CAP theorem? PUR? RUP?


yes, and yet "perfect is the enemy of good" ...

I think the article encompasses your objections; to paraphrase: "algorithms can never be perfectly 'fair', because logic itself can never be perfectly fair"

Can we even define 'fair'? Perhaps we just know unfair when we see it. We seem to be seeing more of it: poverty, homelessness, racism, fact/science-denial, gerrymandering, police violence, gun violence, etc.

Yet, we should still strive for fairness.

There is a lot of ground between where we are now and 'perfect' - and this is AOC's point, essentially - our system of [self?]-rule is palpably unfair for most people, yet we are busily encoding more of that unfairness into algorithms with little oversight.

AOC [and Bernie, and others] want us to seek a better implementation of society, and encode that in our algorithms and laws - fairer voting laws, fairer district maps, fairer pay, fairer penalties. Or the converse - encode more fairness, as a means to realize it.

For example, it may be a hard math problem to design optimally fair district boundaries, but it's noncontroversial to suggest we can reduce gerrymandering significantly.

One problem seems to be that most people equate socialism with communism ['evil'], when in fact you are always picking from a mix of Democracy, Capitalism, Socialism, Oligarchy etc. After Bernie's campaign, and a couple years into the Trump experiment, people are now starting to listen to these old but useful ideas.

Ultimately if we don't share any wealth with the poor we will have a bloody revolution, and if we don't reduce Carbon emissions then no wall will keep out the displaced millions of climate refugees.

A bit more Socialism in our Capitalist/Democratic/Oligarchic/Socialist mix, is sorely needed right about now.


Thank you very much for sharing this perspective. I think it probably does a good job standing in for how a lot of people think about these things. This is a difficult area of discourse, and I know people feel strongly about it. Obviously, the lecture you linked to makes many more distinctions. I'm going to try to simplify one of the distinctions to make it easier to talk about: that there is a tension between "procedurally fair" outcomes and "representatively fair" outcomes.

In my opinion, this idea is wrong, at least in the context of designing AI within contemporary American society. To frame "procedurally fair" and "representatively fair" as opposing value systems is a misunderstanding of the best arguments for a "representatively fair" system.

Let's use a common analogy: a race. This distinction imagines one race that puts a bunch of runners at the starting line, tells them to run, and declares a winner based on who crosses the finish line first. That race is "procedurally fair" because all the participants raced under the same conditions. Then, there's a "representationally fair" race, where runners from "protected groups" are allowed to start the race from the middle of the track.

I think there are two ways that this isn't right, both coming from a common misperception: that the only relevant time to think about in the context of these decisions is the present. (I also want to mention that there's a lot of heavy-handed language use in this presentation that reveals the author's perspective. Algorithms tilted against black people "reveal" disparities...)

First, it's important to have a more accurate perspective of the past. In most cases where "representationally fair"-type solutions are used, there is a historic reason why the "protected group" can't run as fast as the other group. If people in Hyderabad are more likely to defraud a micro-lender, that's not a spontaneous result of inherent differences between Hyderabad-type humans and other humans. A quick google search tells me that as of 2017 Hyderabad has the second largest poor populace in India. So, in the race to a successful economic outcome from birth, the so-called "procedurally fair" solution actually means taking the Hyderabad runners as infants very far back from the starting line. All else being procedurally equal, when the starting gun of "applying for a micro-loan" goes off, the Mumbaikars and Delhiites are already far ahead of the Hyderabadis. Once we have a more accurate perspective that includes the past, in order to make the race procedurally fair we have to move everyone into place at a common starting line, which will inevitably mean helping the Hyderabadis forward.

Second, it's important to acknowledge the effect of biased algorithms on the future. Not only will the unfair, pseudo-"procedurally fair" approach unjustly disadvantage some runners in this race, future races are calibrated according to achievement in past races. If you win one race, you get a head start in the next one. Micro-lenders who refuse to lend to Hyderabadis will exacerbate the relative poverty situation that the algorithm is picking up on. It's easier to pretend to yourself that you're making an algorithm that peeks into the world, makes an objective judgment, and then pops back out of the world. But in fact, people designing algorithms have a responsibility for the outcomes of their algorithms. If an unfair situation exists (for example, that just by being born in Hyderabad and not Delhi, any given person will start life with less economic power), your algorithm's consequences will either be helping to fix that unfair situation, or making it worse.

So, if your algorithm punishes Hyderabadi applicants for being poor, it is both unfair in the simplest sense once you account for where the applicants started and will increase the unfairness of any future round of applications. Put another way, what we're talking about at a high level is values. In one libertarian version of society, the purpose of the algorithm is to maximize profit for Simpl. That society values maximizing profit, and enshrines "shareholder value" as the centerpiece of its ethics. In AOC's socialist version of society, justice, equality, and eliminating poverty are valued highest. This is the broader point she's making about algorithms, that they reflect the values of the people who make them. Given the power that these algorithms have, in our example to lift people out of poverty or to deepen economic inequality, society and not just technologists (not the most diverse group in countless ways) should lead the decisions about what should be valued.

We're seeing more and more the consequences of a techno-libertarian approach, and I'm curious what we'll see as consequences if a more socialist approach wins for a while. I value fairness over shareholder value, so the prospect excites me. But I'm sure there will be lots of unintended consequences in such a system too (the over-cited "asians applying to elite colleges" example being a good case of an outcome that's not easily justified on its own. I think elite college admissions are more fundamentally broken, but that's another thread!)


The problem here is, we don't even know what "fair" means. What do we mean by fair? Say we want to base fairness on group outcomes; in which ways should we divide people into groups in order to perform some sort of balancing? Is hair color or texture a relevant way of subdividing people for instance? Are we being "fair" to red haired people? What if red haired people for some reason seem to be less likely to pay back loans? Will we need to adjust our credit rating model to bump the credit scores of red haired people slightly? Will we need to agonize over other features fed to our model to ensure that they can't be used as a proxy for red hair?

Maybe???? But it seems very dishonest.

I think, if anything, the models are probably too true for our tastes as social creatures who evolved to avoid social conflict (being 100% honest with your unruly neighbors can lead to a loss of harmony that is much more damaging than just putting up with some disturbances). Our idea of "unbiasing" the models is more likely actually making them biased in a way that pleases our monkey brains.


> Is hair color or texture a relevant way of subdividing people for instance? Are we being "fair" to red haired people? What if red haired people for some reason seem to be less likely to pay back loans?

Machine learning is going to be very good at identifying proxies for features we don't like, even if we don't directly provide those features. Weighting red hair color could be a proxy for the Irish, targeting hair texture could be a proxy for african americans, targeting zip codes has always been a proxy for race, etc.

If we want machine learning to ignore these kinds of things, it's probably going to have to be a separately weighted training goal: that racial groups and so on need to have equal outcomes in the final results.
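A minimal sketch of what such a separately weighted training goal could look like: a toy logistic model that never sees the protected attribute, trained with a demographic-parity penalty added to the loss (all data and the penalty weight are assumptions, not a production recipe):

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic data: the model only sees a proxy feature (think zip code
    # or hair texture) that correlates with the protected attribute a.
    n = 2000
    a = rng.integers(0, 2, n)
    proxy = a + rng.normal(0, 0.5, n)
    X = np.column_stack([proxy, rng.normal(size=n)])
    y = (rng.random(n) < 0.3 + 0.4 * a).astype(float)

    w = np.zeros(X.shape[1])
    lam = 2.0                       # weight on the parity penalty (assumed)

    for _ in range(500):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ (p - y) / n                    # logistic-loss gradient
        gap = p[a == 1].mean() - p[a == 0].mean()   # demographic-parity gap
        s = p * (1 - p)                             # sigmoid derivative
        grad_gap = (X[a == 1].T @ s[a == 1] / (a == 1).sum()
                    - X[a == 0].T @ s[a == 0] / (a == 0).sum())
        grad += lam * 2 * gap * grad_gap            # gradient of lam * gap^2
        w -= 0.5 * grad

    p = 1.0 / (1.0 + np.exp(-X @ w))
    print("remaining score gap:", p[a == 1].mean() - p[a == 0].mean())
    # With lam = 0 the gap is large; raising lam shrinks it, at some cost
    # in raw accuracy. That cost is the trade-off discussed upthread.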

That's really what AOC is going for here. The black box will happily learn racist ways to weight things all on its own, we can't just throw up our hands and say oh well, if the algorithm is denying black people for home loans then we just have to accept it.

Because, in the short term, being racist is an effective greedy strategy. It's not your business's problem that, say, black people have higher rates of default (or whatever, example). But it also leads to bad social outcomes in the aggregate, and we've agreed that shouldn't be a factor in home loans. And we have to find a way to make sure that The Algorithm (tm) doesn't just find a way to machine-learn redlining or some other proxy for race or other protected characteristics.


Weighting red hair color could be a proxy for the Irish, targeting hair texture could be a proxy for african americans, targeting zip codes has always been a proxy for race, etc.

Off-topic-ish, but only about 10% of Irish people have red hair.


P(A|B) != P(B|A)
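In numbers (all assumed except the ~10% figure above):

    # P(red hair | Irish) ~ 0.10, yet P(Irish | red hair) can still be
    # large, because red hair is rarer still outside Ireland.
    p_irish = 0.01                  # share of some mixed population (assumed)
    p_red_given_irish = 0.10
    p_red_given_other = 0.002       # assumed
    p_red = (p_irish * p_red_given_irish
             + (1 - p_irish) * p_red_given_other)
    print(p_irish * p_red_given_irish / p_red)   # ~0.34

So a feature can be a weak predictor in one direction and a strong proxy in the other.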


"And we have to find a way to make sure that The Algorithm (tm) doesn't just find a way to machine-learn redlining or some other proxy for race or other protected characteristics."

This is not wrong, but it boils down to "don't make mistakes when optimizing functions". It's offensive because it's coming from someone who has nothing to contribute except "do your job better". It's like saying we have a problem with planes crashing, so engineers have a moral duty to build better planes. As though they weren't trying to build planes that stay in the air already. You can make serious sounding statements, or you can impose fines, or you can randomly pick people to publicly shame, but none of those things are positive contributions to hard problems.


The consequences of a poorly designed algorithm are far less visible than those of a poorly designed airplane. Oftentimes they are deliberately kept secret. How many banks do you know that publish their loan qualification source code?

Mechanical engineers are taught from the beginning of their careers to think carefully about the consequences of their decisions.

Computer engineers, not so much. We're taught how to optimize, and make machines that optimize, and make optimizers that optimize our optimizing machines, but it's pretty rare to encounter a data ethics course, and they're almost never program requirements.

Facebook got in trouble for algorithmic redlining in 2013, serving loan-sharky ads to black people (which its algorithms learned via proxy features, like zip code). Does Facebook hire third-rate engineers who would mount the engine on a plane the wrong way? No! Facebook has some of the best people in the industry! But even the best people in the industry were, and in many cases still are, not fully considering the consequences of our designs, because we weren't trained to.


"they're almost never program requirements"

Wrong. We had two courses at my school. From the ABET Criteria for Accrediting Engineering Technology Programs: "(e)Include topics related to professional and ethical responsibilities".

The issue here is proxy features, and that it isn't as obvious if your statistical model is doing something wrong. You identified zip code and race. What about number of cats? Would you have a particular gender and marital status in mind if a given data sample had a large value in the numberOfCats column?

So you ban making decisions based on number of cats. But amountOfKittyLitterPurchased, numberOfCatPicturesViewed, numberOfDislikesOnDogPictures, etc could be used as a proxy for numberOfCats. Try to come up with an exhaustive list of all features which could be used as a proxy for some protected class. Starting with politically incorrect jokes will get you part way there, assuming you possess an exhaustive list of all politically incorrect jokes. But you will need to be pretty creative to come up with more.

https://www.abet.org/accreditation/accreditation-criteria/cr...
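To see how hopeless that whack-a-mole is, here's a toy demo (all relationships invented) where the protected attribute is never given to the model yet is almost perfectly recoverable from the innocuous-looking columns:

    import numpy as np

    rng = np.random.default_rng(1)

    n = 5000
    protected = rng.integers(0, 2, n)                   # never shown to the model
    litter       = 2 * protected + rng.poisson(1, n)    # kitty litter bought
    cat_pics     = 30 * protected + rng.poisson(10, n)  # cat pictures viewed
    dog_dislikes = protected + rng.poisson(1, n)        # dislikes on dog pics
    X = np.column_stack([np.ones(n), litter, cat_pics,
                         dog_dislikes]).astype(float)

    # Plain least squares on the "harmless" columns recovers the hidden
    # attribute almost perfectly.
    w, *_ = np.linalg.lstsq(X, protected.astype(float), rcond=None)
    accuracy = ((X @ w > 0.5) == protected).mean()
    print("protected attribute recovered with accuracy:", accuracy)

Ban any one column and the remaining ones still carry the signal.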


If planes were built as shoddily as software is, even software that has massive material impact on people's lives, we'd have a lot more crashing planes, and a lot more people angry about it. And we have a huge group (a majority even) of software devs that are completely fine with the status quo of how crap software is.

As a software person, I have absolutely no issue with telling other software people to do their job better, and I see no shortage of people that aren't even trying on that front.

It's your comparison between a process that takes safety, reliability and a reduction of harm as a core, first value, and a process that cares first about money, and second about novelty, and somewhere way down the line people, maybe, that is offensive, frankly.


The problem is the self-reinforcing, external factors that these algorithms don't take into account.

If you feed an ML algorithm crime stats, it will conclude crime is directly correlated with being black, or with being a former criminal. But this correlation only exists because of how we've chosen to conduct the policing of black communities and the treatment of ex-cons upon release, which produced the training data in the first place.

Most people here are male, were once (or are) under 25 and likely drive. How'd you like paying more for insurance than your parents because some system decided men are bad drivers, and doubly so when they're young? Triply so when they happen to be driving a red car? Quadruply so if there are prior citations/accidents?

Shit, with a spotless driving record, my own insurance went up when someone t-boned me. Other driver was at-fault and I subrogated their insurance; didn't claim with mine. But I'm being lumped into a higher-risk pool with my own insurer for factors completely beyond my control-- because to some algorithm, being in an accident means I'm more likely to be in accidents, and all drivers who are involved in collisions are higher risks than drivers who aren't. Note how the fact that I wasn't at fault isn't factored in. This makes sense to you?

The social training data in much of Europe is skewed towards hating Gypsies and anyone perceived to be one. But it's the same as the ex-con problem-- deny them jobs, beat them, chase them away, and of course an ex-con or young Romani will commit future crimes. You're shaping behavior to validate your training data instead of making unbiased ("fair") predictions. It's the equivalent of betting on a horse, injuring its competition, then patting yourself on the back for being so goddamn good at this game.

You can't look at the world in terms of sheer numbers. The numbers themselves are dishonest.


> How'd you like paying more for insurance than your parents because some system decided men are bad drivers, and doubly so when they're young? Triply so when they happen to be driving a red car?

isn't this how it already works? young males cause a disproportionate amount of insurance claims so they get assessed higher rates. they'll even bump your premiums for driving the performance version of a given car. being a young male, I certainly don't like it, but I'm not sure I would call it unfair.


It does. That's the problem.

Take an 18-year-old with a red Mustang and a 58-year-old with a midlife crisis and a 3-series. Both drive like assholes. Both will inevitably cause an accident.

The latter is more likely to be wealthy enough to pay cash for repairs and keep the entire incident off the radar. No insurance claim, no citations, no evidence. The 18-year-old has no choice but to make an insurance claim, go to court, and take points on their license.

Can we truly conclude that 18-year-olds with red Mustangs are worse drivers than their elder counterparts?

(And yes, I realize that in the case of insurance, it's in their interest to minimize claims so in that context this is a valid conclusion. But this same skewed data gets shared outside of the industry as well.)


> Take an 18-year-old with a red Mustang and a 58-year-old with a midlife crisis and a 3-series. Both drive like assholes. Both will inevitably cause an accident.

this is not a fair example; by assumption we have much more information about the 58yo than the 18yo. remove the bit about the midlife crisis and there's no reason to think the 58yo is anywhere near the same risk to insure. as a group, young people are less experienced drivers, have less facility for judgement, and the males probably drive more aggressively.

unless you disagree that the goal of insurance is to spread risk among a bucket of people who have relatively similar risk factors, I'm not really sure where you're going with this.

and by the way, I'm a young male who drives a fast car.


> unless you disagree that the goal of insurance is to spread risk among a bucket of people who have relatively similar risk factors, I'm not really sure where you're going with this.

I don't disagree. In this context, if older men make fewer insurance claims, they're better customers. On paper, the older driver is less of a risk (to the insurer). In reality, they're both risks to other drivers. But only the 18-year-old will be held accountable.

But the assumption that the under-25-and-male crowd are the worst drivers on the road, based on the number of insurance claims, is not necessarily true, yet it's the basis for this oft-repeated assertion. That's my point. What isn't the insurance industry's data telling us when we use their data to make these conclusions? To get more accurate numbers, you'd need to look at the demographics of all body shop customers and the nature of their repairs. What demographics get the most front-end work done that isn't covered by an insurance check?

This new age of computing seems to be about making machines understand how the world works by putting blind faith in statistics, without ever stopping to question where the stats came from or how accurate they really are. The numbers aren't handed to us on golden tablets carved by God himself-- we get them from imperfect systems designed by imperfect humans. And then we teach dynamic algorithms how reality works based on this data and sell them as solutions, which we trust absolutely.

> and by the way, I'm a young male who drives a fast car.

No offense intended. I once was, too.


I think I misunderstood the point of your argument. at first I thought you were specifically singling out car insurance as being unfair, which seemed odd, since it seems pretty close to WAD (working as designed) to me.

if you are just using it as an example of a misleading dataset, we are probably in agreement.


are you assuming the event to be inevitable? if so, you're really opening up a can of worms here, bc insurance doesn't really function with p(event) = 1.0

a better argument would be something like: suppose we detect a gene that indicates probability 1.0 of disease that causes the person's left arm to fall off.

it's then not a matter of "insurance" but a matter of "are we as the pool willing and able to pay for this?" recall the company needs some profit as well as the cost of coverage to maintain overhead, so it's a different argument now.


Interestingly, though, in many insurance industries it's illegal to offer different rates based on gender. Health insurance is one example.


> The problem is the self-reinforcing, external factors that these algorithms don't take into account.

And changing the algorithms isn't the right way to fix those external factors. If you try, their self-reinforcing nature is going to cause exactly the opposite of what you want.

To stay with the crime vs. race example, assume that the ratio of black to white criminals is greater than you'd expect from the ratio of blacks to whites in the overall population, and that is due to self-reinforcing external factors. Your algorithm for deciding whether to parole a prisoner picks up on that statistical regularity and rates blacks riskier than whites. Because you want to achieve representational fairness, you make the algorithm race-blind to equalize the rates.

What happens? The black prison population shrinks. But your algorithm wasn't wrong when it predicted higher recidivism risk, so the black crime rate rises. Your algorithm now requires an even stronger correction to remain race-blind. Black criminals realize that they have an easier time getting paroled, so the black crime rate rises some more. Debiasing the algorithm has made the situation worse.

In general, when you find out that your machine-learning model has some undesirable bias, the correct response is not to blind the algorithm. Instead, you should fix the data-generation process (the real world) until your model no longer picks up on the bias.

Edit: For a mathematically precise description of how attempts to make an algorithm more fair cause worse results when you consider the feedback loop that it is embedded in, see "Delayed Impact of Fair Machine Learning" https://arxiv.org/abs/1803.04383
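
For a flavor of the dynamic (not the model from the linked paper; every coefficient below is invented purely for illustration), a toy simulation where easier parole nudges the underlying reoffense rate upward, as argued above:

    def final_risk(parole_rate: float, rounds: int = 5) -> float:
        """Measured risk after several rounds of the feedback loop."""
        risk = 0.45  # assumed initial measured risk (made up)
        for _ in range(rounds):
            # Incentive effect: parole above the 40% baseline raises
            # the underlying reoffense rate a little each round.
            risk = min(1.0, risk + 0.05 * (parole_rate - 0.4))
        return risk

    print(f"strict parole (30%):    {final_risk(0.3):.2f}")  # drifts down
    print(f"equalized parole (70%): {final_risk(0.7):.2f}")  # drifts up
    # The "debiased" policy ends with a higher measured risk, so an even
    # stronger correction is needed in the next round.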


Your higher rate makes a bit of sense. The worst problem is the small sample size. If everybody crashed a minimum of 100 times, then the risk estimates would be much better.

The fact that you were hit is influenced by your risk factors. For example, maybe that is a bad intersection, and you always drive through it. Maybe it really is just the other driver being bad... but he and you happen to leave for work every day at a reliable time, making it likely that you will encounter him again. Maybe your car is hard to see. Determining "not at fault" is a fuzzy thing; perhaps a better driver would have somehow avoided the crash. The number of miles you drive is an influence on the number of crashes you get, and perhaps you drive more than is typical.
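
The sample-size point can be made with a back-of-the-envelope confidence interval (normal approximation; the numbers are purely illustrative):

    import math

    def ci_halfwidth(crashes: int, exposure_years: int) -> float:
        """95% CI half-width on a per-year crash rate."""
        p = crashes / exposure_years
        return 1.96 * math.sqrt(p * (1 - p) / exposure_years)

    print(ci_halfwidth(1, 10))      # ~0.19: one crash tells you almost nothing
    print(ci_halfwidth(100, 1000))  # ~0.02: same rate, ten times tighter

With one observed crash, the insurer's estimate of your personal risk is mostly noise, which is exactly why they fall back on pooled demographic factors.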


"Note how the fact that I wasn't at fault isn't factored in. This makes sense to you?"

Actually, yes. In Russia we have a principle called DDD which can be translated as Give Way to a Moron ("Дай дорогу дураку").

People who avoid dangerous drivers, even when it hurts their ego, are less likely to get involved in a collision.


Maybe don't split groups on genetics/biology/birth.

Also make sure the data you use is not a proxy for the above criteria.


But what if, due to historical happenstance, the data is already a proxy for groups split on genetics, biology, or birth? Using that data in a way that might create a feedback loop that propagates that happenstance is entirely likely.

Part of the problem is that, when dealing with humans as the data set, the data is really messy and there's lots of noise correlated to historical circumstance.


Easier said than done.


I know; this is why all these algorithms should not be hidden/proprietary. At least then we have a chance to spot why/how the computer decided our fate.


The problem is that in many cases (especially with deep learning), "the algorithm" is essentially a black box. There often isn't a straightforward way to determine exactly what the algorithm is actually learning.


I know that too, but if I (or the community) have access to it, I can prove the algorithm is flawed. Say I change the input a bit and notice a huge change in the output: I change my non-English name to an English name and, as if by magic, my score triples. Then I've proven the algorithm is discriminating against me (same if I change my birthplace or address).

I don't think that opening the algorithms is a sufficient condition, but I think it is a necessary one.
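
Here's roughly what that audit would look like, sketched with a hypothetical score_model stand-in (no real system or API is referenced):

    def counterfactual_gap(score_model, applicant: dict, field: str, alt_value):
        """Score the same applicant twice, changing only one field."""
        baseline = score_model(applicant)
        flipped = score_model({**applicant, field: alt_value})
        return flipped - baseline

    applicant = {"name": "Amir", "birthplace": "Tehran", "income": 50_000}
    # gap = counterfactual_gap(model, applicant, "name", "Andrew")
    # A large gap on a field like `name` is direct evidence that the
    # opened-up algorithm discriminates on it.

This works even when the model is a black box internally, as long as you can query it.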


> The problem here is, we don't even know what "fair" means.

Maybe, but we know blatantly unfair algorithms when we see them. And we see them.

e.g. Remember the AI recruiting tool that automatically rejected women because that's what it learned from the input data?

https://www.theverge.com/2018/10/10/17958784/ai-recruiting-t...


I wouldn't say it's impossible.

They may happen to align; you can't force it, though.


If there exists a scenario where it fails then yes, it is impossible. Just because something works the way you intend it to in some specific case doesn't mean it works in the general case (and the general case is what matters here.)


If it works in a specific case it isn't impossible.

I agree that it's the general case that matters. Impossible is a very strong word, though.

How wide do you want to cast the 'general' net? You can still have limited scenarios that are useful. You don't have to model the entire universe.


The GP's claim is that yes, it is _literally_ impossible. He cites a source[1]. If you have a disagreement with that theorem then by all means, share. Right now you're just saying "Nah I don't think so", which isn't very convincing.

[1]: https://www.chrisstucchio.com/pubs/slides/crunchconf_2018/sl...


That article gives an example of the Scotsman and Englishman.

Each is equal: 25% get 1200 on the SAT, so there is no allocative harm.

It is therefore fair, no?

If it is _literally impossible_ I must be misunderstanding something. Because I don't see the things he's mentioning as being mutually exclusive.


> There is no design of AI that can be simultaneously utilitarian, procedurally fair, and representatively fair.

AI is just a tool of humans. Any system that can be achieved with humans can also be achieved with AI.

These AIs take us backwards from our existing human systems in addition to being much harder to correct later.

The supermarket theft example in the slides you linked to is a prime example. In what store in America would that kind of display be deemed acceptable? Yeah, you save a few dollars a day on theft, but what kind of goodwill are you spending? Now that AI is here, people think they can get away with things like that because numbers.


> Any system that can be achieved with humans can also be achieved with AI.

I hope you meant it the other way around.


What I'm trying to say is that inasmuch as you consider the current legal and cultural environment of America to be utilitarian, procedurally fair, and representatively fair, any system that could be built using AI can achieve the same. I don't believe that they will a priori deliver an improvement. But there's no excuse for AI systems to show any discriminatory biases that have already been minimized in human systems via legislation, training, etc.

Don't misunderstand me; I'm not saying that discrimination doesn't exist or that we live in a society with no biases. I'm saying there is no valid reason why AI should take us backward in that regard.

Computers are extensions of the will of man. Our AI researchers and developers should ask themselves what kind of society they want to create. Whether they choose to turn a blind eye to discrimination and bias or to actively combat it will determine the result. Civil rights didn't just happen; there were people actively fighting for them. The same has to occur in the digital world if the same result is desired.


I think it's a rejection of “AI can't be racist”—if humans can be racist, the AI systems they employ can also be racist.


The second half of the article that talks about Yudkowsky, rationalists, and Roko's Basilisk is absolute trash and a gross mischaracterization of the actual positions taken.

>Yudkowski has for more than a decade pursued the possibility of perfect human reasoning

His website is literally called "Less Wrong". It's fundamentally a quest for improvement, not perfection.

>His system of coldly logical reason, it turned out, was by many accounts completely undone by a logical paradox known as Roko’s Basilisk.

Roko's Basilisk is a thought experiment designed to import your "don't negotiate with terrorists" intuitions into a really weird corner of decision theory. It's possibly why previously held decision theories were rejected as flawed, but it's not a current issue with their logic. Roko's Basilisk has, unfortunately, gotten way more coverage and fame than it deserves in terms of actual importance. This is because of the regretful (but understandable) decision to censor discussion of it on Less Wrong, which backfired spectacularly.

>For super-nerd bonus points, it’s also arguably a spin on Godel’s incompleteness theorem, which argues that no purely rational algorithmic system can completely and consistently model reality, or prove its own rationality.

That's not at all what Gödel's incompleteness theorem argues. That's much more narrowly about formal logic.


> "This is because of the regretful (but understandable) decision to censor discussion of it on Less Wrong, which backfired spectacularly."

Can you explain the (understandable) rationale behind Less Wrong censoring discussion of Roko's Basilisk? I've been led to believe it was censored out of fear that discussing it could somehow inspire its actual creation.

Honestly from a distance Less Wrong has always struck me as vaguely cult-like, but I'm open to the possibility I've just gotten the wrong impression.


Sure thing. The basilisk can easily be misread as an argument to do everything in one's power to support AI research of some sort. It's more accurately an argument that you should ensure your decision theory doesn't coerce you into putting all your effort into supporting AI research. LessWrong has a bunch of really scrupulous people reading it, who take pride in taking ideas seriously, so censoring the discussion seemed worthwhile for avoiding the failure mode of having readers make the mistake mentioned previously.


I think the more charitable interpretation is that the discussion was taking over everything. If people are fixating on a narrow topic for a long time (without much progress) then sometimes it's helpful to say "alright people, let's move along now."

You're right about Less Wrong feeling like a cult, though. It gives me the heebie-jeebies.


>You're right about Less Wrong feeling like a cult, though. It gives me the heebie-jeebies.

They even have a succession plan similar to finding the next Dalai Lama:

https://www.youtube.com/watch?v=d9_2qhkWHN8


The entry on the less wrong wiki has some info on it:

https://wiki.lesswrong.com/wiki/Roko%27s_basilisk


Cults don't write about how to avoid being a cult, while the LW crowd has reams of texts and blog posts about how they are purposefully avoiding trying to become a cult.


Cults do talk quite a bit about why they're not a cult.


They even have an essay about _cultish countercultishness_: https://www.lesswrong.com/posts/gBma88LH3CLQsqyfS/cultish-co...

I think some of the feeling of cultishness might stem from a homogeneity of expression, but it kinda makes sense for a community of people dedicated to being rational -- after all, two rational agents with identical priors cannot "agree to disagree" in the mathematical sense. The community would tend to converge on common "correct" things and would have similar rebuttals for "incorrect" things, right?


Have they ever addressed the criticisms of rationality, technological rationality and instrumental rationality of the Frankfurt School in the 50s and 60s? Or, more generally, the critique of positivism and the collapse of rationality to facts connected with formal logic? These are fairly big topics in philosophy, so it would seem really naive if they haven't covered them.


I'm not personally familiar with those critiques and I haven't read anything close to all the content from LessWrong, but I can point out that Yudkowsky himself has a number of criticisms of Traditional Rationality. I think the most helpful thing I can add is that the paradigm embraced at LW is Bayesian rationality and not pure formal logic. Being massively reductionist, it seems to boil down to updating your expectations to be more in line with reality -- that being less wrong is about having an accurate account of your uncertainty of the beliefs that you hold, taking into account your priors, inherent bias, model complexity, etc.


This just seems like a very strange and restrictive way to model real-life problems, and it seems to export the meat of the problem (taking into account priors, what exactly bias is, and whether it's possible to be aware of it); I imagine that many people who subscribe to LW thought are probably uncritical of the ideological function of liberal democracy, for instance - or of rationality being used to justify the status quo. These sorts of things were taken up by critical theory, so there seems to be some overlap between critical theory and what it is to be rational.

I fear that a faith in mathematical reasoning for all problems can easily make you come out the other end thinking your opinions are mathematically correct, with minimal reflection on core issues such as the applicability of classical logical models or a mechanistic view of how people do (or ought to) behave - this kind of uncritical, historically divorced reflection has not fared well in the past, with even economists admitting their model of the economic man is as much a fiction as their worldwide Robinson Crusoe from a century earlier.


But I haven't seen cults talking about how to personally ensure you're not falling into the trap of cult-like thinking. LW is a rather self-conscious community.



Sorry for the tangent, but from the outside I've also felt very "outside" the LW community. I'm not exactly sure how to "join" it. I know that I'm supposed to read the various required readings first, but then what? How do I see what LW is "talking about" or "fighting about" at a point in time? I mean this in a serious, logistical sense. Thanks!


The LW community has decentralized and deformalized things to a huge extent. It's not just you - the logistics of participating are much less clear than it used to be, and the whole system is far less legible.

Honestly the best strategy is attend local meet-ups, come across as intelligent and interested to the people who hold gatekeeping power over the informal non-public groups, and get in the loop that way. This is significantly easier in Berkeley, where a lot of folks end up moving to. You can also skip some of these steps by attending a CFAR workshop, which will both introduce you to many members of the LW community and put you on an alumni-only mailing list which houses a chunk of the general discussion.


I'm out of the loop now, but it used to be like everywhere else (HN included) - you follow the posts and participate in comments, and after a while you suddenly find yourself to be an established community member.


I don't fuck with LW, but SSC definitely has that culty vibe.

Overall I think Scott has some interesting articles and the general idea of rationalism is... fine. But that's not at all what the communities around it are like. You just get tons of 'rational racism' with HBD (human biodiversity, for those not in the know, which is basically just race realism with a nerdy veneer), IQ worship, and a group that collectively abandons anything like the rationalist ideal and becomes a post-hoc clusterfuck.

The community can almost be thought of like these racist AIs: garbage in, garbage out. They cling to any bit of rational-sounding data that supports their preconceptions, and have to adopt severe anti-academic stances to reject the mountains of data that contradict their views.

I suppose the deeper rot is that these rationalists fancy themselves like scientists, able to interact with primary data and draw conclusions, but any actual expert (i.e. someone that has performed research and published it in a community of peers, not whacko journals) is just confused how anyone could be so wrong. You regularly see posts where someone comes in who actually knows what they're talking about, authoritatively shows that all the nonsense they've been talking about makes no sense, and at best they get a 'huh, interesting' before the nonsense picks back up.

The idea of a rationalist community is to organize around questioning your own assumptions, confronting diverse views, and engaging in good-faith debate. But the reality is that basically none of that happens for anything controversial to the community, so it's just a veneer of legitimacy for in-group circle-jerking. And it just so happens that the in-group tends to be pretty damn racist.


Read Eliezer himself, 4 years ago: https://www.reddit.com/r/xkcd/comments/2myg86/xkcd_1450_aibo...

The "meat" of interest here for those too lazy to click (I encourage people click through, though, really, and maybe even take a glance at https://arxiv.org/abs/1401.5577) is probably this long parenthetical:

> (But taking Roko's premises at face value, his idea would zap people as soon as they read it. Which - keeping in mind that at the time I had absolutely no idea this would all blow up the way it did - caused me to yell quite loudly at Roko for violating ethics given his own premises, I mean really, WTF? You're going to get everyone who reads your article tortured so that you can argue against an AI proposal? In the twisted alternate reality of RationalWiki, this became proof that I believed in Roko's Basilisk, since I yelled at the person who invented it without including twenty lines of disclaimers about what I didn't necessarily believe. And since I had no idea this would blow up that way at the time, I suppose you could even read the sentences I wrote that way, which I did not edit for hours first because I had no idea this was going to haunt me for years to come. And then, since Roko's Basilisk was a putatively a pure infohazard of no conceivable use or good to anyone, and since I didn't really want to deal with the argument, I deleted it from LessWrong which seemed to me like a perfectly good general procedure for dealing with putative pure infohazards that jerkwads were waving in people's faces. Which brought out the censorship!! trolls and was certainly, in retrospect, a mistake.)

Edit: this plus more is all covered by the LW wiki's page someone else linked, check that out too if you really care. https://wiki.lesswrong.com/wiki/Roko%27s_basilisk


My guess would have been that by discussing it, you leave a trace in the networks allowing an eventual AI to prove that you knew about it. Since you knew about it, it can hold you accountable, and take revenge on all of your descendants if you didn't devote sufficient energy on bringing the AI into existence.


Also the neoreactionary bit. I literally paused on that in total surprise. As a long-time LW lurker - yes, the rationalist community plays with plenty of non-mainstream ideas and tries to work them in both directions (up from first principles, and towards extreme conclusions). But characterizing rationalists as equal to neoreactionaries? That's a complete mischaracterization of the community.


If you don't see the NRs in the rationalist community, you're not looking.

Ultimately I agree that they're not equal; I think many people involved in LW or SSC are good-faith rationalists. But there's also a strong segment that has adopted - post hoc - a crazy set of priors to support NR views.

What strikes me is how antithetical this post-hoc "pick your priors, any priors!" is to the rationalist ideal, but it doesn't get called out. It's given equal treatment, and it means that the community is categorically incapable of discussing culture-war issues.


There are neoreactionaries, but they aren't anywhere near as important or central as the article implies. A huge chunk of this is the fault of the discourse norms, which allow for a lot of discussions that would be prohibited or highly discouraged elsewhere.


All: we're going to try turning off the flags on this story. If you're going to comment here, do it civilly and substantively. I can already see signs of thought-degrading flame-entropy appearing, and if it continues we'll restore the community defense mechanism.


The article implies that both algorithms and the data are at fault, which I don't think is true. It's really just the data; the algorithm reflects the 'truth' it finds in the data.

Interesting talk relating to the topic: https://www.youtube.com/watch?v=jIXIuYdnyyk

Many approaches in fair machine learning that try to 'de-bias' the algorithm basically just do stuff like reducing the accuracy in the advantaged group to make the algorithm seem more fair - that is hardly what you want and will just make you susceptible to charges of discriminating against the majority or employing affirmative action. Probably rightfully so, because that's what you do. It's absolutely fine if that's the intent, but then you should have a public discussion where you are open about the fact that you manually tinkered with the parameters to prefer fairness over accuracy (which can probably be a valid goal).
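
As a sketch of the kind of tinkering I mean (entirely synthetic scores; the thresholds are arbitrary), here's per-group thresholding that buys parity of outcome with accuracy:

    import numpy as np

    rng = np.random.default_rng(2)
    score_a = rng.normal(0.60, 0.15, 5_000)  # advantaged group's model scores
    score_b = rng.normal(0.50, 0.15, 5_000)

    cutoff = 0.55
    rate_a = (score_a > cutoff).mean()
    rate_b = (score_b > cutoff).mean()
    print(f"one threshold: positive rates {rate_a:.2f} vs {rate_b:.2f}")

    # Force equal positive rates by raising group A's threshold:
    cutoff_a = np.quantile(score_a, 1 - rate_b)
    print(f"per-group thresholds: {cutoff_a:.2f} for A, {cutoff} for B")
    # Members of A scoring between the two cutoffs are now rejected despite
    # having identical scores to accepted B members.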

I think finding the problems with the data is very important though. Everyone wins if the quality of your data increases: the algorithm can become both more accurate and also more fair. And it can also identify societal causes for this biased data, for instance police being more sensitive to crimes of minorities, which will then feed back to the innocent algorithm.

A related point is of course that we should be wary of putting too much power and trust into faceless algorithms in the first place.

Also some interesting collection of papers on the matter: https://fairmlclass.github.io/


I thought we got over "the computer is never wrong" fallacy by the end of the 90s. Anyone involved with computing should know GIGO - Garbage In Garbage Out.


Perhaps. But lots of developers have also heard that premature optimization is the root of all evil. Doesn't mean we don't fall prey to these traps, biases, misconceptions, etc.

Besides that, the rise of the algorithm, like everything else in the US, is subject to interpretation by camps who insist (at their extremes) that racism is real, modern, ubiquitous, and fixable; and those that insist it's a relic of the past, inevitable, imagined, rare.


Apparently if you shove in a lot of garbage, subject it to a decision making process nobody can explain in detail, out the other end comes magic reliable data that if you question you obviously just have an agenda.


People want easy answers and discard uncertainty as soon as they get an indication that's stronger than the noise floor.

See also: Anyone who (despite the fact most people understand a 1-in-4 chance perfectly well when handed a 4-sided die) looks at FiveThirtyEight's election predictions as anything more than a fun curiosity.


So you are saying any data for which any input feature is correlated with race or sex is garbage. That's going to make doing useful data science incredibly difficult with anything involving humans.


Strawman

Sex is easy to incorporate. Race is more difficult because vanishingly little work has been done with a concept of race based on anything resembling real population genetics, but it's not impossible in theory. Hint: if you really care about human biodiversity, you'll spend most of your time in Africa and some isolated islands, not looking at brown people in the west.

The issue is GxE, where we see loads of racists and sexists just patently forget about 'E' and conclude that women or brown people are genetically inferior. Some will try to gussy that up as 'different' rather than inferior, but the dog-whistles may as well be air-raid sirens.

'Data science' is fine if you're A/B testing websites. When you try to do real science, you'll find that the utter lack of research experience is... a bit of a problem. Caring about PhDs isn't credentialism; it's wanting a plumber who has worked with pipes before. You have to actually perform research to get decent at it. It usually takes about a decade before you can honestly do it independently. If you can look back at what you did a year ago without cringing, you're not making that progress.

Some schmuck with pandas and a Bio101 class 8 years ago isn't a scientist.


To some extent. More specifically, if those become confounding variables, the result may be partially useless. That's why you have to make sure you control for race and sex if they're relevant. https://en.m.wikipedia.org/wiki/Confounding


It makes it exactly as difficult as the problem actually is. If you start with garbage data, such as arrest records (black people are WAY more likely to get arrested than white people in the same circumstance), then the bias in your sample is going to result in bias in your model.


It’s clearer to me than ever that the left and the right do not use the word “racist” the same way.

The left thinks of racism in terms of outcome and the right thinks of racism in terms of intent.

We could benefit from better language around these concepts, and honest dialogue about them too.


> We could benefit from better language around these concepts

I believe this is a very underrated problem facing today's public. Not just these concepts, but many political terms are misused in today's climate, making for a conversationally ignorant population.


I don't see how it is being misused. This sort of distinction is pretty basic in, e.g., philosophy. People simply disagree. More and more people not understanding the basics in favor of whatever they read on social media might be a problem, though.


Political terms are misused all the time. For example, "liberals" are constantly conflated with "leftists". Half of the population currently believes that "fake news" means "news that doesn't confirm my political preconceptions".


I feel like a writer once discussed the power that exists in limiting and controlling the language used in public discourse... /s


jpmcglone and dgzl were proposing the opposite of that. Newspeak eliminated words so that some ideas simply could not be expressed. They propose adding words so that dissimilar things can be distinguished from each other - to expand the things that can be (clearly) expressed.


Yes, their concern is about the misuse and/or confusion of a limited lexicon. I understood that they propose adding words; and I suggested that there is power to be found in not doing so. It can be beneficial to some if we are unable to speak precisely and without misunderstanding. It can be beneficial to some to co-opt existing words for new uses.


Doublespeak is too real.


*Newspeak


What I get from people on the left is that an unequal outcome is indicative of racism, however subtle that racism is. The only other explanations for an unequal outcome are biological differences between races (I do not want to get into that) and some races having advantages over others from past history.


Unequal outcome is not indicative of racism; it is racism. That's the point: racism is an outcome, not an intent. An unknowing, unfeeling bureaucratic system can be racist, even if there isn't a speck of racial bias in the hearts of the practitioners and designers, simply due to historical quirks (as you point out) or design mistakes.


So your argument is that Harvard is racist if it doesn’t discriminate against Asian applicants? So that they are not overrepresented among the student population?!


No, my argument is that "it's just numbers! they don't have a soul!" is not a valid defense from accusations of racism, because the statistical systems can be racist in their design, and there is no "racist meter" you can apply to check. The only way to know if a system is fair or not is to check whether the results you're getting are the ones you want.


FWIW, your usage of the term 'racism' is not at all what most people mean when they use the term. The vast, overwhelming majority of people do not think in terms of systems. They think in terms of agency.


"vast, overwhelming majority of people"

You gonna back that up?


https://www.dictionary.com/browse/racism

https://www.dictionary.com/browse/racist

No mention of what the person I was responding to was talking about.


Right, and if you look at results, you see that there's too many Asian students, relative to their proportion in the general population. So how do you solve that? One way is to up the bar at the admissions, and require Asians to score better relative to other ethnicities, in order to be admitted. Do you think that makes the system fairer?


Other possible explanations include at least culture and background (parental wealth, connections and education), possibly others.


That is what I meant by past history.


There's a severe problem forming in today's outrage climate, and I'm not really sure how to even address it.


I think that implies a symmetry that doesn't exist. Vanishingly few people care about effects but not intent, because intent inevitably leads to effects. This creates a continuum, not a mirror image: consider neither, consider only intent, consider both. Those on the left mostly consider both, with a few considering only intent. Those on the right mostly consider only intent, with a few (racists by any definition) considering neither.

Your formulation also misses another very important nuance. I don't think most people on the right don't consider effects. They mostly know that such effects exist, and quite often feel bad about that. However, they also believe that addressing only intent (or "procedural fairness") is sufficient to make those effects go away, and that more assertive measures create "reverse discrimination" and/or infringe upon liberty. I'm not going to argue whether they're right or wrong, but it's not about consideration. "Strategy" might be closer to the mark. Most on the right (not counting the true racists) do want to end racism. They just reject the left/center prescription for doing so.


Broad statements about the left and right are divisive, ignoring the spectrum of beliefs, and this is not a good summary. Just as two examples, housing and education policies are frequently designed with the intent of segregation, though it is impossible to prove intent in most cases. And so, the actual outcomes need to be used in courts to demonstrate racist intent. No one will ever openly admit that they don't want the, for example (as this is common in these cases), poor, mostly black children in their school. But they will also actively ignore all of the research that shows a benefit to their own children with increased diversity. And so intent can be assumed. People will hurt their own families to defend their racist beliefs. Maybe we could use better language, but what is more important is using the language we have more clearly.


when I went to school a poor black kid stole a portion of my lunch every single day in 2nd grade (stupid policy that you had to leave your lunch on an unattended desk, he knew which bag was mine and took the granola bar.) my parents moved me to a catholic school which had basically no black kids. the bullying was still there but nobody ever stole from me or really anyone else. no fights basically ever.

we move, I go back to a more diverse school. fights, my graphing calculator stolen... what’s the possible benefit of having poor people in your school? until social programs make it so my lunch doesn’t need to be stolen to feed a kid, I fail to see how the non poor students are better off.


I am going to post this article from last year which covers a lot of the topics of re-segregation of schools through unique methods.

https://www.nytimes.com/2017/09/06/magazine/the-resegregatio...

To touch on your point specifically, I am sorry you had such a poor experience. You are right when you say that social programs need to exist so that children are not hungry at school. Education is supposed to be the great equalizer and yet America has provided education almost only for the rich and has consistently attacked the poor. But I also feel bad for the kid who had to steal from you every day just so that he wasn't hungry at school. Education and wealth are a combined, intractable problem in a capitalist country, but there are hundreds of places making the problem worse.


This problem exists across disciplines. I heard someone defend the DSM as simply a way to standardize how psychiatrists think and talk about mental health issues. Otherwise, the field would be chaos.

Perhaps we need a DSM for society level malfunctions, with strict definitions?


That’s not true.

It’s just that racist intent is almost always impossible to prove. Outcome therefore becomes a needed proxy, but only after excluding other factors, by, for example, normalizing for age and income.

Two landmark studies in this regard come to mind: (a) how the success rate at an orchestra doubled among women after auditions were changed to a “blind” format not allowing the decision-makers to see the applicants’ gender, and (b) how changing applicants’ names (and nothing else) could impact their chances of being invited to interviews.


You use the word racist to mean two different things.

The basketball team being all black isn't racism and it isn't due to racism.

If I saw a basketball team (in the NBA) of all white people, I would suspect racism, but it's important to point out that the outcome (an all white NBA team) is NOT racist in itself. Even if it is likely due to racism.

So, I think we should stop calling the outcomes 'racist' and say what we mean: "I suspect this outcome is due to racism"

I think that will make the whole conversation a lot easier to have.

I don't think its advantageous to certain political entities, however, if we have this conversation. There is one party in particular that I think relies on people to believe that their problems are outside of their control, so that maybe they'll outsource the problem solving to the government.

Maybe I have it pinned all wrong, but I will never know if we can't talk about racism and outcomes of what may or may not be racism as two separate things.


I fail to see a distinction between “racist” and “due to racism”. In any case, I feel large parts of society, including major media outlets, already tend towards caution, cf. the reluctance to call the President’s “both sides” comment “racist”, instead opting for “racially charged” or “insensitive” or similar.

> There is one party in particular that I think relies on people to believe that their problems are outside of their control, so that maybe they'll outsource the problem solving to the government.

That’s a rather unfair characterization of the Democratic Party. But I find it even more interesting to know why you feel the need to superficially obfuscate who you are talking about?

I’ve also provided two examples above that clearly prove that racism and sexism do exist. If gender-blind hiring doubles the chances of female classical musicians, aren’t they right in pointing the finger at that result and complaining about white men playing life on easy?

But apart from such narrow situations, most left-wing advocacy is decidedly altruistic: college students supporting an increase in the minimum wage aren’t doing so for their own benefit. Unless, that is, they are terribly pessimistic about their personal future. Neither are voters and politicians advocating for DREAMers, who by definition are neither. Nor are Bill Gates, Warren Buffett, Bloomberg, Lin-Manuel Miranda, or any number of billionaires or otherwise successful people advocating on behalf of the less fortunate.


I obfuscated it because I knew you would pick the right party without me elaborating.

I don't think the left-wing advocacy is altruistic. If it was altruistic, it would promote altruism. It, instead, promotes redistribution of wealth.

Do you think redistribution of wealth is altruistic? How so?


Given that AI has no "intent" for the foreseeable future, we can only judge AI in terms of outcomes.

Is a robot arm that kills anyone who stands near it a murderer?


Thinking of racism in terms of the intent is a way to take the issue less seriously, to force the issue into the realm of opinions and beliefs. If the intent of the perpetrator is the deciding factor, then it is always arguable that perpetrator's intent was not specifically racist; perhaps it was driven by a misunderstanding, etc. Since it was informed by misunderstanding, it isn't racist. And so on.

It's an old tactic, I don't think changing the terms will make much difference.


But racism is from the realm of opinions and beliefs.


A lot of people rationalize racist or discriminatory beliefs in such a way that they seem reasonable. Yet, when you judge their behavior as a whole, it tells a different story. The banker who consistently gives loans to people from [group x] but not from [group y] who have similar financial features. When we talk to the hypothetical banker, they may claim that "numbers don't tell the whole story", etc.


Racism is discrimination based on race, everything else is not racism.


How does that address the parent commenter’s assertion? Discrimination, whether intended or not, has the same outcome for the discriminated party.


Statistical differences in outcome cannot be automatically attributed to discrimination, though they frequently are.


Yes they can, because that's what "discrimination" means: differences in outcome due to uncontrollable irrelevant factors like race. Discrimination is an effect, not an intent. Racist intent is called "prejudice". Discrimination can be caused by prejudice, or it can be caused by something else, like a poorly considered algorithm.
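
One standard way to operationalize "discrimination is an effect" is the EEOC's four-fifths rule for selection rates; the numbers below are illustrative only:

    def impact_ratio(selected: dict, applicants: dict) -> float:
        """Selection-rate ratio between the worst- and best-treated groups."""
        rates = {g: selected[g] / applicants[g] for g in applicants}
        return min(rates.values()) / max(rates.values())

    ratio = impact_ratio(
        selected={"group_x": 48, "group_y": 24},
        applicants={"group_x": 100, "group_y": 100},
    )
    print(f"{ratio:.2f}")  # 0.50 < 0.80: prima facie disparate impact,
                           # with no intent measured anywhere in the process

Note that nothing in this test asks what anyone believed; it only looks at outcomes.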


No, that's pseudo-social-science because it falsely assumes outcomes are solely determined by external variables and is completely ignorant of internal variables, such as differing cultural values.

Some cultures value family life over money, others value money over family life, for example.


I'm sorry, this reply doesn't make sense to me. What does "different cultures are different" have to do with whether computer systems can be discriminatory or not?


The FICO example from another thread shows a different outcome based on race without any data in the computer system about race.

If the computer system doesn't have access to race, like in the FICO example, then can it be discriminating based on race (racist)?


If the data contains proxies for race and/or unexamined racialized bias, then yes, it can. Police presence has often been kept higher in zip codes with mostly non-white residents. This results in higher arrest rates in an area because of increased eyes, not necessarily because of increased rates of law-breaking. If an algorithm were designed to determine where to allocate police officers using this data, it could contain no racial data, but it would dictate that more officers be placed in the zip codes with more non-white residents, as those areas have more arrests. Perhaps you don't want to call this racist because you have a narrow definition of that word, but I don't know what other word to use.
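
A toy version of that allocation loop (true crime rates are equal by construction; only the historical patrol skew differs -- cf. Lum and Isaac's work on predictive policing):

    true_crime_rate = {"zip_a": 0.10, "zip_b": 0.10}  # identical by design
    patrols = {"zip_a": 30, "zip_b": 70}              # historical skew

    for year in range(5):
        # Arrests scale with eyes on the street, not with crime alone.
        arrests = {z: patrols[z] * true_crime_rate[z] for z in patrols}
        total = sum(arrests.values())
        # "Data-driven" reallocation: send officers where the arrests were.
        patrols = {z: round(100 * arrests[z] / total) for z in patrols}

    print(patrols)  # {'zip_a': 30, 'zip_b': 70}: the skew persists forever,
                    # laundered through race-free arrest counts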


My apologies for not linking to the other thread, but as far as I can tell, there are no proxies for race used to calculate FICO scores.


Without the data, I have a hard time believing this. Especially since I had to explain the concept.

Even if that were the case, you're still left with a small set of conclusions about race and its correlation to FICO score. 1) Non-white people are worse at maintaining credit scores because of something inherent to the condition of non-whiteness. This conclusion quickly points in the direction of racist pseudo-science. 2) Social and economic forces do not act equally on people of different skin colors. This is easily proven true by the example discussed by AOC and Ta-Nehisi Coates of people of color being explicitly excluded from provisions of the New Deal, an issue that has compounded the inequality between people of different racial backgrounds. This is directly applicable to FICO scores, as white people are more often able to access generational wealth to pay off debts and avoid low credit scores whereas people of color are not.

Sooo, in the case of FICO, even if the score doesn't contain subtle proxies for race (which I doubt), it is still predicated on preserving a system that contains intense racial bias.


Some people don't value FICO scores, and so theirs are likely to be lower. This would fall outside of the only two models that you proposed. It requires cultural difference, which some people value highly.

If you cannot accept the FICO as non-racist, then I do not know if you could create a system that others would not find racist. I cannot think of a way that your system would not end up with some form of explicit race-based corrections. I think that concept is less palatable in America due to the focus on freedom/individuality. Each person should stand on their own, not colored by the groups that you could fit them in.

Just curious if you have an opinion: how do you propose correcting for the past?

Lastly, you can look up the determinants of a FICO score for yourself as you can use sources that you trust.


Are you saying that even if certain groups of people are less likely to be able to repay debt (for reasons outside their control), it is still racist if their FICO scores are lower?

I think you want FICO scores to mean something that they don't mean.


I agree, of course computers extend and amplify our prejudices.

But as the article states in its conclusion:

>Just because something is expressed in numbers doesn’t make it right

Many statistical social differences are often automatically attributed to discrimination based solely on "numbers".


you should ignore that comment, it's an incredibly disingenuous claim intended to mask the racist sentiment that 'non-white people are over-represented in american prison populations because they don't value not being in prison as much as white people do'.


[flagged]


Please don't do that here. HN is a place for civil debate, not ad hominem attacks.


It is racist if the data is built on 100s of years of racism and the ML is trained on a dataset poisoned by racism.


I think it's important to note that there is no algorithm that can have an "intent". If we are to agree that racism is a feature of intent, then there will never be a racist algorithm. Yet the outcome will still be discriminatory.


> If the intent of the perpetrator is the deciding factor, then it is always arguable that perpetrator's intent was not specifically racist; perhaps it was driven by a misunderstanding, etc. Since it was informed by misunderstanding, it isn't racist.

Are you saying that this is racism? Most people would define what you outlined as not racist.


I'm saying that racism cannot be solely an issue of someone's intent, it needs to be evaluated by the outcome. An outcome is a fact that we can all observe. An intent is internal to a person, we can never get a clear view of a person's intent; at best we can make deductions as to a person's intent based on their behavior.

Claiming that racism can be evaluated only by the intent is simply moving the goal posts into an area where we can't clearly observe. It's a tactic.

In terms of this specific issue, algorithms by their very nature lack intent. Thus this particular argument has no validity; we can only judge the algorithm by its results: the outcome.


> Claiming that racism can be evaluated only by the intent is simply moving the goal posts into an area where we can't clearly observe. It's a tactic.

Claiming racism is anything but intent is changing the definition of racism. Which is:

"prejudice, discrimination, or antagonism directed against someone of a different race based on the belief that one's own race is superior."

If you start changing the definition of words to suit a political goal, only the people who already agree with you will listen.


I agree, but cmiles74 has a bit of a point, too. Everyone short of the KKK (and maybe even them) will claim that their intent is pure. Lacking a foolproof way to judge others' hearts, all we have to go on is actions and their effects.

As I said, I agree with you. But our position can lead to hiding some genuine racism under the "unintentional" disguise. It also leaves unintentional systematic biases unaddressed. While those may not exist as often as the left claims, they do at least sometimes exist, and do need to be addressed.


I don't disagree. But I don't think we can judge the intent of people who are discriminating in a racist way, aside from simply asking them what they think their intent might have been. At that point it's very likely we'll be dealing with a rationalization or a half-truth, because not only is there a stigma attached to racist behavior, but in many cases it is illegal as well, and there are other punishments to contend with.

If we need to accurately gauge their intent, that's not really possible. In the case of an algorithm we've divorced the process from the source of intent (the author of the algorithm), there is no intent to evaluate.


The left also thinks of racism in terms of intent. But they think that outcome implies intent. That is the real difference, IMO.


Intent and impact are not always the same.


and academics think of it as a power imbalance wielded by whichever group holds power, separate from the census majority and minorities

and colloquially nobody thinks of it the same way, with everyone considering themselves exempt from being racist until convinced that their 'normal behavior' is considered racist -- and even then this often does not change their view of that behavior: 'so be it'

this is a challenge. at this point the word itself is polluted.


Disparities in racial outcomes do not necessarily constitute racism.

The author flippantly violates this by claiming the credit system to be racist, but the Equal Credit Opportunity Act has been in force since 1974.

We know the factors that affect credit, some of them are income, payment history, loan balances, number of credit checks, etc.


> The author flippantly violates this by claiming the credit system to be racist, but the Equal Credit Opportunity Act has been in force since 1974.

Surely you're not arguing that the law instantly solved everything?

Countrywide - once the lender for 20% of mortgages - was dinged for violating the ECOA in 2011, so violations were clearly still occurring then, and are likely continuing today.

Add in the fact that redlining has multi-generational impact, too. Housing is one of the big ways families pass wealth down to their kids and you get potential racist impact due to past actions even if the current implementation is race-blind.


I think a lot of people (especially on HN) view racism in terms of internal processes rather than results. If the current implementation is race-blind, that satisfies them.

https://www.reddit.com/r/EndFPTP/comments/8wz6g3/impartial_a...


Even on that narrow view, the fact that the law on its face requires systems to be race-blind doesn't mean that, in fact, they are.


> Add in the fact that redlining has multi-generational impact, too. Housing is one of the big ways families pass wealth down to their kids and you get potential racist impact due to past actions even if the current implementation is race-blind.

This is, by the numbers, the much bigger issue by an order of magnitude. I think one problem that many people have with the 'racist creditors' trope is that racist policies have left a gaping hole in african american wealth, and consequently african americans are disproportionately priced out of their local housing markets. If you were to make the credit process completely (and I mean completely) race-blind, you would still have massively unequal outcomes, probably more or less on par with what we see today.


I would not argue that the 2019 credit system is racist. I think it has fairly sound principles, including the wiping of negative debt records over the course of 7-10 years.

Countrywide was one of the biggest lenders responsible for the 2008 financial crisis. Their problem, just a few years earlier, was giving way too many people mortgages. I saw it myself as a real estate agent, people were approved for loans up to even as high as 50% of their monthly income. It was ridiculous.

You can argue all you want about the ripple effects of the past. They're all over the place. The thing is, you can't change the past.


I really wish people would at least skim the wikipedia page to figure out the vocabulary of the field they're about to opine on.

The left-progressive use of the word 'bias' is completely different than the way statisticians use the word.

If bias increases accuracy/precision, it's not bias.

The more interesting question is this - is it permissible for models to consider protected characteristics if those characteristics improve the performance of the model?


It gets even more interesting when you consider proxies for protected characteristics as well. It's obviously wrong for an algorithm to preferentially discriminate for white people and against black people. It's less clear whether or not discriminating for baseball players and against basketball players is acceptable. Using sport choice in your model will necessarily have downstream effects on the distribution of outcomes on the axis of race. Any racially-correlated source of information will. And the trickiest bit is that explicitly correcting for this will often use race as an explicit algorithmic ingredient.
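
That trickiest bit can be shown directly: stripping the racial signal out of a proxy feature means using race during training. A minimal sketch on synthetic data, residualizing a correlated proxy against the protected attribute:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(3)
    race = rng.integers(0, 2, 1_000).reshape(-1, 1)
    sport = 0.8 * race.ravel() + rng.normal(0, 0.5, 1_000)  # correlated proxy

    # Keep only the part of the proxy that race cannot explain -- which
    # makes the protected attribute an explicit ingredient of the pipeline.
    resid = sport - LinearRegression().fit(race, sport).predict(race)
    print(abs(np.corrcoef(resid, race.ravel())[0, 1]))  # ~0: decorrelated

So "fairness through blindness" and "fairness through correction" pull in opposite directions: the second requires exactly the data the first forbids.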


> The left-progressive use of the word 'bias' is completely different than the way statisticians use the word.

There's lots of words people use that don't match up with exact scientific definition. Infer from context which version applies, or ask, and you'll be fine. Also applies to: force, resistance, acceleration, etc. We know that startup accelerators help companies grow faster and not actually increase their physical velocity.


> Infer from context which version applies, or ask, and you'll be fine.

Your solution is the correct one, yes. Except the 'progressives' in question are working very hard to selectively remove context (and intention) from language for an ever growing and arbitrary list of words/situations. Where simply speaking about it in a way which a [insert particular special interest group depending on the situation] view as 'incorrect' based on thier ideology/worldview, then you are instantly wrong and acting maliciously regardless of context/intention. You hear this often today. for example: "you can't ever joke about x" or "you can't talk about x historical event without also mentioning y" or having to preface any wide-ranging statement with 100x conditions so as not to offend any group loosely related to the topic.

We should fight to keep language from moving further in this direction, because this alternative idealistic world, despite good intentions, is making the world a worse place, not a better one. We can't naively pretend that creating a huge complicated system of no-go words, i.e. not saying certain combinations of words out loud, will automatically make people's internal thoughts change for the better and ultimately change outcomes in society. That is a hypothetical method, far from proven to be effective.

If anything it makes people resentful and creates ridiculous Kafkaesque situations where you have to jump through hoops to engage in the most basic innocent dialogue and debate.

Which is ultimately anti-intellectual, inefficient, and irrational, given how incredibly important context and intention are in a million other situations they seem to have no problem with.

The worst part is how it incentivizes the worst behavior, giving small people "power" by allowing them to walk around correcting everyone's apparent "misuse" of "problematic" language (which is like crack to the social media outrage culture) - even in situations where, given the audience and context, it was totally harmless and the meaning was fully understood by everyone involved.


I don't disagree that people use words in ways that don't match up with their use in science. However, when people are criticizing science, it'd be helpful if they rendered their argument in language that doesn't obfuscate important distinctions.


> If bias increases accuracy/precision, it's not bias.

Yes it is. It's bias in your fitness function.

Accuracy and precision are not handed down by the gods. We write the functions that evaluate our models, and it's our job to make sure that the values they promote match up with the real-world outcomes we desire, and to constantly monitor and re-evaluate those outcomes.

Fancier machine learning techniques will never be able to avoid Goodhart's Law: "Any measurement, no matter how reliable, when regarded as a target, ceases to be a good measurement."
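As a toy illustration of bias hiding in the fitness function (assumptions mine, not the commenter's): if the recorded labels we score against are themselves skewed by group, a model that is perfectly "accurate" on those labels inherits the skew.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000
    group = rng.integers(0, 2, n)
    truth = rng.random(n) < 0.3            # identical true base rate in both groups

    # Hypothetical measurement bias: group 1's positives get recorded
    # twice as often as group 0's (e.g. heavier enforcement).
    recorded = truth & (rng.random(n) < np.where(group == 1, 0.9, 0.45))

    for g in (0, 1):
        m = group == g
        print(f"group {g}: true rate {truth[m].mean():.2f}, "
              f"recorded rate {recorded[m].mean():.2f}")
    # True rates match (~0.30); recorded rates don't (~0.14 vs ~0.27).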


But our 'fitness functions' for social problems are normally pretty good, and above reproach independently of the model. They tend to be easy-to-measure things like 'did the person skip bail?'

If a model of 'likelihood to show up to court after making bail' can make better predictions with information about protected characteristics (e.g. if the model used sex to predict likelihood to show up in court), that feature would reduce the bias of the model.

I think the issue progressives have with 'bias' is that some of society's prejudices ('bias') have an evidentiary basis. We already make decisions that progressives would tell us are prejudiced, but we probably want to use those prejudices if they're useful.

Consider a group of young men standing outside of a Church. If they're all clean shaven, smiling, and 'appropriate' for the Church, it's nothing concerning. If they're white guys with shaved heads / neo-Nazi haircuts, they don't look nice, and they're standing outside of a black Church, the prejudiced among us might correctly decide to alert the authorities to it.

My personal opinion is that we should allow models to consider protected traits but we should ensure that models that make important decisions aren't prejudiced along those protected traits. The way to measure this is simply to ensure that the accuracy and precision of the classification decisions are comparable among protected traits.
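A minimal sketch of that check (function and variable names are mine, not an established API): compute accuracy and precision separately per group and compare.

    import numpy as np

    def per_group_metrics(y_true, y_pred, group):
        """Return {group: (accuracy, precision)} for boolean predictions."""
        out = {}
        for g in np.unique(group):
            m = group == g
            t, p = y_true[m], y_pred[m]
            acc = (t == p).mean()
            npos = p.sum()
            prec = (t & p).sum() / npos if npos else float("nan")
            out[g] = (acc, prec)
        return out

    # Usage with made-up data:
    y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0], bool)
    y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0], bool)
    group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
    print(per_group_metrics(y_true, y_pred, group))

If the per-group tuples diverge a lot, the model is treating the groups differently even when the protected trait never appears as a feature.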


Here is a somewhat modified situation. Say you use a machine learning algorithm to recommend if a person currently accused of a crime should be allowed bail. The fitness function should be to maximize the number of people who are allowed bail and minimize the number of people who miss their court date.

Say that the model allows X% of people to have bail and makes sure only Y% fail to show up.

Race is then added as an input. This improves the model so that it allows X+5% of people to have bail while only Y/2% fail to show up. It also increases the chance that a black person is denied bail and increases the chance that a white person is allowed bail.

Do you think the inclusion of race is bias? Should race be removed from the input to the model?
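One concrete instantiation of that hypothetical, with all numbers invented (X=60, Y=10), just to make the tradeoff visible:

    # model: {population: (bail granted %, fail-to-appear %)}
    results = {
        "without_race": {"overall": (60, 10), "black": (60, 10), "white": (60, 10)},
        "with_race":    {"overall": (65, 5),  "black": (52, 5),  "white": (78, 5)},
    }
    for model, rows in results.items():
        for pop, (granted, fta) in rows.items():
            print(f"{model:13} {pop:8} granted={granted:2}%  fail-to-appear={fta:2}%")

Both aggregate numbers improve, yet the group-level release rates move apart. Whether that counts as 'bias' depends entirely on which fairness criterion you privilege.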


It just dawned on me that the left and right absolutely do not think of “racist” exactly the same. The left looks at outcome and the right looks at intent.

We need better words.


Those words are "systemic" or "implicit" or "institutional", etc. All you really need is a proof or example that racist/sexist outcomes are possible even when there is no overt intent. And there are plenty of examples like that out there. Failure to accept that those examples exist, however, is something beyond just looking at intent instead of outcome.


I don't see how those adjectives add any clarity to the situation, since you can swap them out interchangeably and you're still describing the same vague, handwavy sense of racism somewhere being a cause for an unequal outcome.

Larry Elder had an interesting take on "systemic racism" IMO: https://www.youtube.com/watch?v=phPXTWJhnYM


This isn't about unequal outcomes, it's about racist/sexist outcomes. It's very possible to have unequal outcomes without them being racist/sexist. And despite what seems to be commonly believed, liberals/left/progressives/democrats/etc don't tend to have a problem with that.

It's also not vague and handwavy. If you'd like to explore an example of institutional racism, check out the Parable of the Polygons. It's a clear, simple model with repeatable results.

https://en.wikipedia.org/wiki/Parable_of_the_Polygons

https://ncase.me/polygons/
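For the curious, the dynamic behind the Parable of the Polygons is Schelling's segregation model, and it fits in a few lines. A 1-D sketch (mine, not ncase's code): agents who are content as long as a third of their neighbors are like them still drift toward segregation.

    import random

    random.seed(0)
    N, RADIUS, THRESHOLD, STEPS = 200, 3, 1 / 3, 50_000
    board = [random.choice("XO") for _ in range(N)]

    def unhappy(i):
        nbrs = [board[j] for j in range(i - RADIUS, i + RADIUS + 1)
                if 0 <= j < N and j != i]
        same = sum(b == board[i] for b in nbrs)
        return same / len(nbrs) < THRESHOLD   # wants at least 1/3 like itself

    def clustering():  # fraction of adjacent same-type pairs
        return sum(board[i] == board[i + 1] for i in range(N - 1)) / (N - 1)

    print(f"before: {clustering():.2f}")      # ~0.50 for a random arrangement
    for _ in range(STEPS):
        i, j = random.randrange(N), random.randrange(N)
        if unhappy(i):                        # unhappy agents relocate at random
            board[i], board[j] = board[j], board[i]
    print(f"after:  {clustering():.2f}")      # typically noticeably above 0.50

No agent is racist in the intent sense; the segregated outcome emerges from mild individual preferences.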


There's plenty of "systemic" racism/sexism/etc. coming from the left, though. Social engineering in general is a recipe for every kind of unintended consequences at the "systemic/institutional" level, and the left is huge on social engineering.


"Systemic" doesn't even have to be left/right though, that's the point. It can simply be well-meaning people that keep ignorantly doing things they way they did before, unaware that their system has racist/sexist impact.

I don't know what you mean by social engineering, by the way. I've only heard it in the context of hacking, like calling customer support and pretending to be someone else to try to get their mother's maiden name or whatever.


Not the person you're responding to but I took their meaning to be social engineering in the sense of someone's purposeful planning and intercession in areas where there is a social output, fiddling with whatever knobs are available to shape the desired output.

For example, Harvard admitting fewer Asians because they are over-represented compared to other races. If you take your view that racism is an emergent phenomenon that you can spot based purely on the outcome, then Harvard was exactly correct to deny admission to more Asians than other races, yes? If Harvard hadn't done that, then the outcome of their admission process would've been "racist."

Many would disagree with that interpretation of racism.


That's not my view, so that seems a straw man.


I'd also like to bring up the International Obfuscated C Code Contest.

Just because a policy seems reasonable and has straightforward justifications for all of its pieces doesn't mean it wasn't maliciously designed for another purpose. The stated intent is not always the only intent, and if the results...


Perhaps even more apropos to your point is the Underhanded C Contest:

http://www.underhanded-c.org/


That's true too, yes. I just think it's interesting that malicious intent isn't required. I think it helps to talk about it, too - if you can approach the participants assuming good faith, and let them believe that you believe the output is completely unintentional, then maybe it will be easier for them to agree to change their system.


The right[2] looks at intent because that's what matters in a hierarchical view of society. This division comes from a fundamental difference in what societies are, and it can be observed in how they use language; from one of the most enlightening articles[1] I've ever read:

> One of the biggest problems of the entire Culture Wars is that people like us [the left[2]] use language to impart information. We usually are not aware that a nice big chunk of the population does not use language in that way at all. Their use of language is that of Phatic Language [...] In a hierarchical society [the right[2]], language is [often] not used for exchange of information [...] It is used to establish social hierarchy.

For a good explanation of how this works, see George Lakoff's lecture[3] "Moral Politics".

[1] https://scienceblogs.com/clock/2007/05/31/more-than-just-res...

[2] The "left"/"right" labels are being used in a general psychological sense, which doesn't always match the political groups with the same names.

[3] https://www.youtube.com/watch?v=5f9R9MtkpqM


Haidt's book The Righteous Mind does touch on this - he'll likely have references to the studies in his book. What he says the studies show:

Conservative ideology: Fairness is about guaranteeing everyone equal rights. If different people have different outcomes, the question is: Did one person have more rights than the other? If so, let's correct for it. If not, it is because the person did not fully utilize his/her resources. However, this step is often omitted and people jump to "Person did not put in effort."

Liberal ideology: Fairness is about guaranteeing equal outcomes. This often (but not always) ends up being measured regardless of the effort the person put in - so if the outcomes differ, it's a sign of something unfair at play.

There is overlap between the two, and they are not fundamentally at odds with each other. However, as a lot of pop psychology has taught us: People are fundamentally lazy in applying analytical thought, and will look for simple proxies. So instead of thinking through as their ideologies dictate, they will jump to the conclusion.


There's a third option, though: removing barriers to permit equal access and opportunity. I think it's well illustrated by this comic:

https://static1.squarespace.com/static/56d9cbd420c647c7373d4...


That's not a third option - it is the conservative option. If you build a fence so tall that some person is disadvantaged, then he doesn't really have the same rights.


The short person still has the right to look over the fence. They just have practical difficulties in exercising it.

Similar cases: Voter ID laws with disproportionate impact on minorities, gay people having "equal" right to marry someone of the opposite gender, people in impoverished school districts having "equal" rights to an education, etc.


Your last two examples are ones where a conservative ideologue would look and say "No, they aren't being granted equal rights". In the former, you have judges refusing to follow the law. In the latter, you have children who are not getting access to the same public education their peers in wealthier districts are.

>The short person still has the right to look over the fence. They just have practical difficulties on exercising it.

It all depends on what the fence is achieving. I can't take the cartoon literally, because conservatives wouldn't argue that people should have equal rights to view a ball game - whether you can view one or not has little bearing on, say, your financial success. Nor does it impinge on your right to speech, religion, etc. If the fence represented something that was a barrier to achieving what is viewed as a right, and it's a barrier for one group and not for another, then the approach in the cartoon is not inconsistent with conservative ideology.

With regards to voter ID laws: I'm not even going to go there, as in my past experience it's an issue where both sides refuse to understand the other's position.


> Your last two examples are ones where a conservative ideologue would look and say "No, they aren't being granted equal rights".

Unless we're "no true Scotsman"ing things here, conservative ideologues in the US (and a variety of other countries) have strongly and consistently opposed gay marriage, often arguing they had the "same rights" as others and that allowing same-sex marriage would be "special treatment".

Before that, the same was true for conservatives and interracial marriage.

The point of the cartoon is not to advocate for the right to watch a baseball game. It's an analogy showing that "an unfair situation" and "preferential treatment for someone" aren't always the only two options.


If his studies basically equated liberal ideology with communism (equal outcomes), I think I'm a lot less inclined to read his book.

You know, what I'd really like is for parties or candidates to identify what they think the appropriate Gini coefficient should be for the US.


You inspired me to update my post:

>However, as a lot of pop psychology has taught us: People are fundamentally lazy in applying analytical thought, and will look for simple proxies.

I would not recommend judging books based on a random Internet comment, even mine.


I wouldn't qualify myself as lazy for choosing not to read that book. :-) There are a lot of books out there!


>If his studies basically equated liberal ideology with communism (equal outcomes)

It's an easy mistake to make, but Communism (at least as Marx and his contemporaries and Lenin envisaged it) does not have anything to do with the principle of equal outcomes except in a very narrow sense - this sense being equality of privileges to some portion of society.


I think the bigger issue is that I've never heard anyone other than conservatives characterize liberals as believing in "equal outcomes". That's a slanted frame from the get-go.

Among democrats and liberals, it's usually "equal opportunity" or "equal starting lines", language like that. That's very different than "equal outcomes" because it still believes in self-reliance, merit, diversity in outcomes, etc - it's just that it requires a level of fairness that applies to everyone.


Unfortunately many political words are polluted with garbage understanding of their meaning, and have been used to polarize and divide the people.


> It just dawned on me that the left and right absolutely do not think of “racist” exactly the same.

This is quite true. But then, there is considerable diversity on that issue within both the right and the left, too.

> The left looks at outcome and the right looks at intent.

But this is not even approximately true. Though it is an oft-repeated talking point of the right.
