So some of these associations simply reflect the way-the-world-was or the way-the-world-is - like associating "woman" with "housewife". That's a whole debate in itself.
But some of these can be accidental. Suppose a runaway-success novel/TV/film franchise has "Bob" as the evil villain. Reams of fanfiction are written with "Bob" doing horrible things. People endlessly talk about how bad "Bob" is on Twitter. Even the New York Times writes about Bob's latest depredations when they play off current events.
Your name is Bob. Suddenly all the AIs in the world associate your name with evil, death, killing, lying, stealing, fraud, and incest. AIs silently, slightly ding your essays, loan applications, Uber driver applications, and everything you write online. And no one believes it's really happening. Or the powers that be think it's just a little accidental damage because the AI is still, overall, doing a great job of sentiment analysis and fraud detection.
The only solution I can see is forcing any company that imposes life-defining actions on people (credit bureaus, banks, parole boards, personnel offices, etc.) to use only rules based on objective criteria, and to prohibit systems based on a "lasagna" of ad-hoc data like present-day AI systems. Indeed, if one looks at these in the light of day, one would have to describe such systems as fundamentally evil, the definition of "playing games with people's lives" (just look at the racist parole-granting software, etc.).
That is probably the exact opposite of what you really want. If the problem is that someone's name is Bob and the AI thinks Bobs are evil, what you want is for there to be 100,000 other factors for Bob to show the system that it isn't so. As many factors as possible, so that the one it gets wrong will have a very low weight.
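The dilution effect can be sketched numerically. This is a toy illustration with invented, equally-sized weights, not a model of any real scoring system:

```python
# Toy illustration: one wrongly negative feature ("name is Bob") among
# n equally weighted features. All numbers are invented.
def score(n_features, bias_weight=-1.0, other_weight=0.1):
    # every legitimate feature contributes a mildly positive signal
    total = (n_features - 1) * other_weight + bias_weight
    return total / n_features

print(score(12))       # the one biased factor drags the score down hard
print(score(100_000))  # with many factors its influence is negligible
```

With 12 factors the biased weight wipes out most of the legitimate signal; with 100,000 factors the score is essentially unchanged.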
Even the objective criteria will have biases. There is a significant racial disparity in prior criminal convictions, income, credit history and nearly every other "objective" factor. The more factors you bring in, the more opportunities someone in a given demographic has to prove they still deserve a chance.
You don't understand. My point is that institutions making such decisions should not be able to make decisions according to these 100,000 unexplained factors. If you're a lender, you can look at employment history, records of payment and other objective, related criteria. You can't look at, say, eye color, however useful you might think it is. Institutions should not be able to make these decisions arbitrarily, at the level at which they affect lives. There should be legal provisions for auditing these things (as there are, on occasion, provisions for auditing affirmative action, environmental protection behaviors, insurance decisions, etc.).
But how does that help anything? The objective factors have the same potential for bias as the seemingly irrelevant ones. All you get by excluding factors is to increase bias by not considering information that could mitigate the bias in the factors you are considering.
Suppose that 80% of black men would be rejected for a loan based on your preferred set of objective factors. Of that 80%, many would actually repay the loan, but none of the objective factors can distinguish them from those who wouldn't, and when averaged together the outcome is to refuse the loan. If you used some seemingly arbitrary factors that happen to correlate for unknown reasons, you could profitably make loans to 60% of them instead of 20%.
How is it helping anyone to not do that?
People's lives will depend on the decisions of these machines, so people will start trying to game them. They will make sure to always purchase an odd number of bananas, they will wear hats but only on Thursdays, etc. etc.
Now two things happen. As more people game the system, the rules need to be updated. Suddenly it's all about buying a number of bananas divisible by three and wearing hats on weekends. The people who tried to follow the previous advice got screwed over, and what's more, they have nothing to show for it. Instead of making people do useful things like paying bills on time and saving up some money, it made them follow some weird algorithmic fashion. Because of this expenditure of energy on meaningless things, we may find that now only 18% of people manage to pay back loans on time.
But that's just another reason to use 100,000 factors instead of twelve. If someone's income is a huge factor in everything, people will spend more than the optimal amount of time working (instead of tending to their family or doing community service etc.), or choose to be high paid sleazy ambulance chasers instead of low paid teachers, because the algorithm makes the returns disproportionate to the effort.
If buying an odd number of bananas is a factor but the effort of learning that it's a factor and then following it is larger than the expected benefit from changing one factor out of thousands, they won't do any of it.
Given a choice between observable, identifiable and modifiable rules or hidden, poorly understood rules integral to a whole model, I'll take the former every time.
Bias will continue to exist for now. What we need to do is make sure we always build processes to appeal and review our systems, preferably in a public way.
What you touched upon is the accuracy/bias trade-off. To have evidence in a particular case, you need to attempt to debias that particular system and see how it affects accuracy. Sometimes, it may even vastly improve it.
What is more important is that the systems are not benchmarked properly. That is, they are not compared against very simple metrics and baseline systems, such as: a random decision; a simple recidivism-prevention heuristic (a grudge system); plain mathematical metrics with constants.
To add, they're opaque, and it is impossible to easily extract the factors that went into any single decision. This means they act fully irrationally. Intelligently, but irrationally.
Doubling down based on historical and backward looking data does not seem like the way forward and can only perpetuate entrenched bias.
All the inferences and correlations will reflect that. This is not intelligence and can only take you backwards.
No, it really isn't. In an ideal world, the reasons behind a decision are transparent, auditable, understandable, and appealable. Machine learning is none of those.
It seems like the answer to that question is situationally dependent.
In other words, by offering Adolf below-market rates, you're exploiting a market inefficiency at no additional risk. This is the ideal world as you describe it. It's capitalism at its finest!
An AI that was even slightly good at it would be far ahead of what we get when humans run things.
Even the greatest physicists and mathematicians can't tell the difference between correlation and causation. When people say "correlation is not causation", what they truly mean is "spurious, context-sensitive correlation is not strong universal correlation".
As a simple example:
If A causes a change in both B and C, but the change in B happens more quickly, "temporal correlation" would imply that B causes C, when that's not the case.
This is especially obvious with cyclical phenomena. The tide going out does not cause the sun to rise, for example, even though I'm sure I could draw a temporal correlation between them. Nor does my car being in my garage cause me to go to work.
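A toy simulation makes the common-cause trap concrete. All numbers here (lags, noise levels) are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic common-cause setup: A drives both B and C, but B responds
# after 1 step and C after 3 steps. Effect sizes are invented.
n = 1000
a = rng.normal(size=n)
b = np.roll(a, 1) + 0.1 * rng.normal(size=n)  # fast response to A
c = np.roll(a, 3) + 0.1 * rng.normal(size=n)  # slow response to A

# B "precedes" C by two steps, so a lagged correlation looks causal...
lagged_corr = np.corrcoef(b[:-2], c[2:])[0, 1]
print(round(lagged_corr, 2))  # near 1.0, yet B does not cause C
```

A purely correlational learner would happily conclude that B predicts, and therefore "causes", C.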
If we believe certain attributes to be irrelevant to a situation, we need to build systems that are completely blind to those attributes. This is how double-blind trials and scientific peer review work.
That's just not true. You can have systematic errors caused by bad training data which can be fixed without an increase in false negatives (otherwise new ML systems would never improve over old ones!)
In the physical world a good analogy is crash testing of cars. For decades (until 2011!!), crash test dummies were all based on average sized American males. That led to hugely increased risks for female passengers:
> the female dummy in the front passenger seat registered a 20 to 40 percent risk of being killed or seriously injured, according to the test data. The average for that class of vehicle is 15 percent.
Fixing that problem didn't cause any increase in accident risk for men.
It's the same in machine learning.
There is no free lunch
This isn't what the no-free-lunch theorem says. It says that, averaged over all possible problems, all optimization methods perform equally well.
It's a tough problem. I think being aware that biases exist in ML is a good first step.
There is a possible causal link with names which goes beyond "children are being treated worse because of their name".
Your solution will not work, ever. All such companies will adjust their processes to maximise their own benefit, irrespective of any legal consequences. For most, profit (in whatever way they define profit) will be more important than any legal requirements placed on them. It is the nature of the people running these companies.
If you are an outlier in the data, you will remain an outlier in the system.
This is terrifying.
Do you ever see anybody change their opinion? Usually they won't change simply after hearing an argument. They change only when they have an utterly different life experience. In other words, only when they are conditioned by new data, in no way different from how an AI changes its opinion.
I know my reaction is more to the surname than the first name.
Joseph has a solid anchor as a biblical name, though, and Stalin wasn't condemned nearly as much as Hitler was.
Do you think we should train AI to systematically ignore those sentiments?
For example, people from X race might be more likely to commit crimes in the absence of any other information (marginal probability). However, person from X race is no more likely to commit crimes than a person from Y race, conditioned on something else like where they went to school, what they do for a living, etc..
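A minimal numeric sketch of that marginal-versus-conditional point, with invented counts where the two groups differ only in which schools they attend, not in within-school rates:

```python
# Invented counts: (group, school) -> (events, people). Within each
# school the groups have identical rates; only the school mix differs.
counts = {
    ("X", "A"): (8, 80),   # 10%
    ("X", "B"): (1, 20),   # 5%
    ("Y", "A"): (2, 20),   # 10%
    ("Y", "B"): (4, 80),   # 5%
}

def marginal_rate(group):
    events = sum(e for (g, _), (e, n) in counts.items() if g == group)
    people = sum(n for (g, _), (e, n) in counts.items() if g == group)
    return events / people

def conditional_rate(group, school):
    e, n = counts[(group, school)]
    return e / n

print(marginal_rate("X"), marginal_rate("Y"))  # 0.09 vs 0.06: X looks riskier
print(conditional_rate("X", "A") == conditional_rate("Y", "A"))  # True
```

Marginally, X looks 50% riskier than Y; conditioned on school, the groups are indistinguishable.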
It's important to remember that AI doesn't have context, and just because it's using "data" to make decisions doesn't mean the decisions are unbiased - the underlying data may be biased.
You're not allowed to discriminate on race in housing. But is an AI that determines your creditworthiness for mortgages allowed to discriminate on what you eat, where you go to church, what your favorite music is, etc.? Maybe it doesn't have that data, but it will have one level higher - what store credit cards you have and how much you use them and where you opened them.
If you train an AI on a segregated city with a history of actively discriminatory citizens, where the few people of race X who moved into a not-race-X neighborhood got harassed out and sold before they paid off their mortgages, how easily will the AI conclude that people born in certain neighborhoods are more likely to pay off their mortgage if they avoid certain other neighborhoods?
Is that illegal? (My guess is there's no way to prove to a court, to the court's usual standards, that the AI happened to learn the city's racial tensions.) Should it be illegal, if outright racial discrimination is illegal?
This seems to be the case in London, too, so it's a bigger issue than just the US.
But for things like drug possession, yes, blacks probably get stopped much more and let off with warnings much less.
It is like with plagues. If you put many people with plague in one area, do expect that others will catch it. If the rule used to segregate them is silly enough, do expect a correlation of plague with certain characteristics of the people put together, or with the locations, or their sizes. Maybe it is "cities" or "presence of slums", not skin colour, combined with an overrepresentation of people with a certain skin colour in them.
It takes some real genius analysis and experimentation to untangle such complex effects from causes.
Call again when you have an AI that can deal with this. Essentially a researcher AI.
If lack of money causes homicides, we should expect to see all poor rural areas have high homicide rates, too. Maybe that's true.
Another factor seems to be sex. Males commit so much more violence, so perhaps that's the only thing to focus on.
For all the hype about AI and machine-learning technologies, all such systems will be, for the foreseeable future, very simplistic models of what we think we know of intelligence.
As humans, we have blindly forgotten that we know very little about the world around us, including ourselves. We think that we have a handle on reality, but we are just plain ignorant. All the models that we have for deep learning and AI or GAI are barely scratching the surface of what "intelligence" is and means.
We may get some useful tools as a result of the research being undertaken today and over the past several decades. But we have millennia to go before we even scratch the surface of what we understand of the universe around us, let alone understand what intelligence means.
That is reasoning. I’ll agree with your later point that we don’t really know what intelligence is yet, but that’s because reasoning is clearly only one type of intelligence. Current systems are terrible at language, for example (a year ago I would have said “and spatial awareness”, but this is progressing fast and I am no longer sure).
> They cannot recognise in any way that the rules by which they operate or the data they are supplied with are or are not reasonable.
Quite a lot of humans fit that description. For example, consider how much angry disagreement the following questions get: “is climate change real?”, “Brexit, yes or no?”, and “does god exist?” Also consider how many people (angrily!) refuse to believe these questions result in any angry arguments.
But your comparison is inapt: yes, we should train AI to systematically ignore negative sentiments it associates with people named "Adolf" or "Hitler" who are not in fact the Adolf Hitler who died in Berlin in 1945. Humans have difficulty in doing so, of course, but this is a bug in human cognition which is essentially the vulnerability that bigotry takes hold of: a justifiable negative impression of one person as an individual is imputed to other people who seem superficially similar. We see one person of a minority breach a social norm or even a law, and we think that others of that minority must be prone to doing similar, simply because their minority status is salient. We fail to realize subconsciously that membership in the same minority is not actually meaningfully correlated with this behavior, and when we see someone from the majority do the exact same thing, their majority status is less salient, and we don't impute the negative impression to the majority.
A good AI should be able to distinguish dictator Adolf Hitler from, say, saxophone inventor Adolph Sax or chemist William Patrick Hitler (the nephew of Adolf Hitler), and not cast aspersions on the latter two - even though human biases forced William Patrick to change his last name to Stuart-Houston. It should even be able to understand that Indian politician Adolf Lu Hitler Marak is a separate person who merely had parents with questionable taste, and the man is not on account of his name more likely to become a genocidal dictator than any of his political rivals.
And since our justifiable negative association with the Nazi leader is, fundamentally, that he weaponized this vulnerability in human cognition, it is one way of acting on our dislike for this Hitler to make sure that the AIs we build are not susceptible to the same vulnerability.
Isn't that statement in itself an invocation of Godwin's Law?
Godwin himself wrote a little bit about it: https://web.archive.org/web/20170209163428/https://www.washi...
(And I am totally open to criticism that this particular comparison is inapt.)
You can ascribe it to "Godwin's law" as much as you like, I just find it a more realistic example than some hypothetically disadvantaged "Bob".
We should make sure that an AI, who is probably making decisions on things like legal documents / public records and not just the middle name someone goes by, will not consider it a negative that someone is named "Adolf" if they aren't Adolf Hitler specifically. And we should for the same reason make sure that an AI will not consider it a negative that someone is named "Bob" if it has a newly-acquired specific negative impression of some other person named Bob. There isn't a difference in the cases.
"Hitler" shouldn't be a special case because the rise of Hitler wasn't as much of a one-time event as we'd like to believe. When the next genocidal dictator with a somewhat rare first name gains control of a country, there will be people from the victim population who share that first name, and they should not suffer the same indignity at the hands of an AI, either. And on the flip side, when this genocidal dictator rises to power, the AI shouldn't be taught that Hitler was the only evil man who ever lived or will live; if it has the data to conclude that some actual individual (not a name) is as bad as Hitler, it should be able to conclude that.
If you are referring to the actual well-known Adolf Hitler, then there should be no need to use the name for the purposes of making decisions. You shouldn't need to use a person's name as a proxy for whether or not they are a genocidal dictator, just make the decision based on whether or not they actually are a genocidal dictator.
Or DevOps Borat: Devops is screwing things up at web-scale.
Eventually, we will have AIs which are better than us in every respect. They will embody our idea of perfection. This is stupendously dangerous. Humanity has a long history of adapting to technology 'taking away' things that everyone thought were 'fundamentally human', like the ability to do work or make things or lay train tracks or whatnot.... it's not hopeful. We deal very poorly with this.
Consider the story of John Henry. He's a folk hero. For killing himself. Because he killed himself in defiance of the machine outperforming him. So this stuff is all-caps Important. What is the likely response from humanity when there is a perfect, not machine but mind? My bet? Humanity will identify with its worse aspects. It will enshrine hatred, irrationality, mean-spirited spite, violence, self-destruction, and all of the things we built AIs to never stray into. Those will become "what it means to be human."
AI may be a philosophical crisis unlike anything humanity has ever faced. Not in kind, but simply in degree.
They're from the same set of names as Bob (and Alice), but are actually affected by this issue, although to a lesser extent than what you describe.
It's not certain how likely the kinds of texts that use them are to be used as training data, though.
Here's one example of how it could happen. Someone publishes a high-performing model to gauge the tone of a writing sample. This model includes the anti-Bob bias described above, such that the appearance of the word Bob is tantamount to including a curse word, and greatly biases the model toward negative sentiment. Because of its high overall performance, companies of all sorts incorporate this model into their workflows for things like grant applications, loan applications, online support forums, and so on. For example, they might use it to detect when someone is using their help form to send an angry rant rather than a legitimate request for support. Now, any time someone named Bob wants support, or a loan, or a grant, or whatever, there's an increased chance that their request will be flagged as an angry or abusive rant and denied simply because it contains their name, Bob.
In fact, we can remove the layer of indirection and note that some people have names that are spelled the same as a curse word, and already have similar issues with today's software, making it literally impossible for them to enter their real name into many forms. This example doesn't involve machine learning, since profanity filters are typically implemented as a pre-defined blacklist. But there's no reason to think that a sentiment analysis model would fail to pick up on the negative associations of profanity.
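A toy sketch of how this plays out in a naive lexicon-based scorer. All the sentiment weights below are invented, standing in for associations a model might learn from a corpus:

```python
# All sentiment weights below are invented for illustration.
sentiment_lexicon = {
    "great": 1.0, "thanks": 0.5,
    "broken": -1.0, "bob": -0.8,   # guilt-by-association, learned from data
}

def score(text):
    """Naive bag-of-words sentiment: sum the per-word weights."""
    return sum(sentiment_lexicon.get(w, 0.0) for w in text.lower().split())

print(score("great product thanks"))                # 1.5
print(round(score("great product thanks Bob"), 2))  # 0.7: same message, worse score
```

The same support request scores markedly worse just because the sender signed their name.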
The more I hear about how people are building models, the more I want people to take these black boxes to court to force developers to explain how decisions are made.
Refusing to give someone a loan because someone trained a model with 50 Shades of Grey is unethical and insane.
Just take a look at people called "Null" then multiply the problem thousand times across various systems with no central appeal.
Ironically, this is also an example of a system behavior that was driven by users' desires not to see certain things. Seen in a certain light, it bears a resemblance to the idea of filtering out certain associations because a user considers them distasteful.
As I understand the problem, they are saying that statistically, the statement about the main character being male is a bit more likely to be positive than if the same thing is said about a woman. If that is statistically true and you are trying to create a model to determine the level of positive sentiment in a review, then that may be a legitimate indicator of how people categorize things. If the goal is to try to "fix" how people talk and write, I'm not sure ignoring statistical patterns in the way we talk is really the right approach.
The issue is that humans don't understand multi-variable statistical analysis, whether in the form of ANOVA or machine-learning training, so they try to pack everything down into two or sometimes three variables of output.
And that's fine if you are pulling from a population that's homogenous. But if there are two or more discrete subpopulations, you want to control for them or represent them separately, not just ignore them or pretend they reflect the information you want.
Anyway, if the reviewer's name is enough to throw off the results, it may suggest that guys praise movies more, rather than that movies with male characters get more praise.
I just looked at the first page of 10-star reviews of Battlefield Earth and none seemed to be female, just saying.
I think where people get caught up is that they don't see the world at large as biased, because they view their understandings as essentially correct. For example, we expect judges to rule fairly on every case, right?
To pick a non-controversial issue, likelihood of parole is apparently affected by how recently the judge ate: https://www.economist.com/node/18557594
Now, parole data accurately reflects how people were paroled, so predicting likelihood of clemency requests is a perfectly valid use of that data. If you were doing machine learning to try to help people get paroled, you'd want to leave that bias in as a predictor, because it's unfair but real.
But you'd probably want to adjust that data to correct for the recently having eaten bias if you were writing a system for parole recommendations for new judges based on past judicial decisions.
You wouldn't want people to be more likely to be denied just because they came before a judge before lunch. And if you don't test for a bias like that, how would you be able to tell that the machine learning algorithm had it? And it wouldn't even need to be direct... seeing people A-Z in court could mean a bias based on name.
Long story short, bias is a real issue, and you need to be aware of it and test for it, not assume that your input data isn't affected by human error.
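As a rough sketch of that kind of correction, on purely synthetic data (the "hours since meal" effect size is invented), one crude mitigation is to fit the nuisance effect alone and subtract it from the training target:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic parole history: outcomes driven by legitimate case factors,
# but also by how long ago the judge ate. All effect sizes are invented.
n = 5000
merit = rng.normal(size=n)                    # legitimate case factors
hours_since_meal = rng.uniform(0, 4, size=n)  # nuisance variable
granted = (merit - 0.5 * hours_since_meal + rng.normal(scale=0.5, size=n)) > 0

# Fit the nuisance effect alone, then remove it from the training target.
slope, intercept = np.polyfit(hours_since_meal, granted.astype(float), 1)
adjusted = granted.astype(float) - (slope * hours_since_meal + intercept)

# The adjusted target no longer correlates with when the judge ate.
print(round(np.corrcoef(hours_since_meal, adjusted)[0, 1], 3))
```

This is the bluntest possible tool (a linear residualization); real corrections would need a causal model of which effects are legitimate, but it shows that the bias has to be named and measured before it can be removed.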
1. They went to college.
2. They went to the University of Phoenix.
The second sentence, calling out UoP, may turn out to be used more often in slightly less positive sentiments than the first one. If you are trying to figure out the sentiment of a few sentences, this might be important. Yes, you can ignore it, but the question is whether you are trying to understand how the language is actually being used or not.
交大不如复旦 [jiāodà bùrú fùdàn] gets translated as "Jiaotong University is not as good as Fudan University" by Google https://translate.google.com/?hl=en#auto/en/%E4%BA%A4%E5%A4%...
复旦不如交大 [fùdàn bùrú jiāodà] (just swapping the word order) is translated into "Fudan is better than Jiaotong University" https://translate.google.com/?hl=en#auto/en/%E5%A4%8D%E6%97%...
The literal translation of 不如 would be "is unlike", but usually it implies a value judgment. In the translation, Google seems to be consistently sure that Fudan is simply better.
But when you specify that you're talking about the Jiaotong University in Shanghai and not one of the others, it suddenly changes its mind:
上海交大不如复旦 [shànghǎi jiāodà bùrú fùdàn] "Shanghai Jiaotong University is better than Fudan University" https://translate.google.com/?hl=en#auto/en/%E4%B8%8A%E6%B5%...
复旦不如上海交大 [fùdàn bùrú shànghǎi jiāodà] "Fudan is not as good as Shanghai Jiaotong University" https://translate.google.com/?hl=en#auto/en/%E5%A4%8D%E6%97%...
I'm at SJTU and everyone here seems to agree that this is the objective truth, but the people at Fudan are probably not so happy about it.
It's like an exam question where you have to know what the teacher is thinking. Notice that in "Fudan is better than Jiaotong University" you're supposed to know that Fudan is another place with a university, not (say) a kind of apprenticeship... or a joke about some kind of noodles or something. You're supposed to have some outside context, but not too much, not enough to know their reputations. That's quite a fine line to ask a translation system to draw.
Google Translate and most other alleged artificial intelligence won't be able to answer that question meaningfully.
If you want to see a further example, in the form of a Jupyter notebook demonstrating how extremely straightforward NLP leads to a racist model, here's a tutorial I wrote a while ago:
His ConceptNet NumberBatch embeddings are one of the few pre-built releases which attempt to fix this.
Is there a problem we should address here? Absolutely -- but the problem is that men keep on getting murdered, not that the model recognizes truths with which we are uncomfortable.
For a real life example, in 2017 Google was more likely to filter the comment "I am a woman" than "I am a man": https://www.engadget.com/2017/09/01/google-perspective-comme...
Or consider the impact of any bias in AI for criminal sentencing recommendations: https://www.wired.com/2017/04/courts-using-ai-sentence-crimi...
As the article states:
> As with Tia, Tamera has several choices she can make. She could simply accept these biases as is and do nothing, though at least now she won't be caught off-guard if users complain.
> She could make changes in the user interface, for example by having it present two gendered responses instead of just one, though she might not want to do that if the input message has a gendered pronoun (e.g., "Will she be there today?").
> She could try retraining the embedding model using a bias mitigation technique (e.g., as in Bolukbasi et al.) and examining how this affects downstream performance, or she might mitigate bias in the classifier directly when training her classifier (e.g., as in Dixon et al. , Beutel et al. , or Zhang et al. ).
> No matter what she decides to do, it's important that Tamera has done this type of analysis so that she's aware of what her product does and can make informed decisions.
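The Bolukbasi-style mitigation the article mentions can be sketched in a few lines. The 4-dimensional vectors here are made-up stand-ins for real word embeddings:

```python
import numpy as np

# Made-up 4-d vectors standing in for real word embeddings.
emb = {
    "he":       np.array([ 1.0, 0.0, 0.2, 0.1]),
    "she":      np.array([-1.0, 0.0, 0.2, 0.1]),
    "engineer": np.array([ 0.6, 0.5, 0.3, 0.2]),
}

# 1. Estimate a gender direction from a definitional pair.
g = emb["he"] - emb["she"]
g = g / np.linalg.norm(g)

# 2. Project that component out of a word that shouldn't be gendered.
v = emb["engineer"]
debiased = v - (v @ g) * g

print(round(float(debiased @ g), 6))  # 0.0: no gender component remains
```

The real method uses many definitional pairs and a PCA-derived subspace, and later work showed the bias can still be recovered from indirect associations, which is exactly why the downstream evaluation the article recommends matters.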
Under this definition of 'bias', an unbiased model would, say, spit out equal associations between any occupation and any gender/sex/age/race/religion label.
We should probably ask ourselves whether that's a strictly desirable outcome, since by definition the 'biased' model has a higher predictive value. How much accuracy are we willing to sacrifice for the sake of erasing inconvenient facts about either our world, or our current models of the world?
We could all pretend that we all knew what "unbiased" was, as long as we lacked mechanisms for putting numbers on these things. We could meet our fuzzy conception of the biases with our fuzzy conceptions of what "unbiased" ought to be and we could all speak back and forth to each other praising the virtues of how unbiased our models of the universe are, and there was no way to dig in any farther to see if we were all actually saying the same thing.
Now we can put numbers on it. So, do it. What's the goal? Be clear. It's a program. Anything you can clearly specify can be done. It's not useful from an engineering perspective to define the goals entirely by examining each model in sequence and deciding on the spot that it's not "unbiased"; give us a definition in advance, so we can build a model to try to fit it.
Once the goal is stated clearly, the engineering work becomes much easier. We can even iterate on the goals. I'm not asking for perfection on that front on the first try any more than I ask for it anywhere else. But there have been enough of these articles that just point at a specific point in the space of models and call that point problematic. Move on to the next phase. What wouldn't be problematic? Exactly? At least take a stab at it instead of making vague insinuations, so we can iterate on the stab.
Nor am I trying to prejudge what those answers will be. It isn't necessarily the case that the only possible answer is to just slam in 50/50 numbers for the genders (or whatever other constant values you want for whatever other genders you want). Though if that is what you want, do it, and see what happens, and iterate from there. But what other things can be tried too?
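As one possible starting point for putting numbers on it, here is a sketch of the demographic parity gap, one of the simplest candidate fairness definitions. The group labels and decisions are invented:

```python
# Demographic parity gap: the absolute difference between two groups'
# positive-decision rates. Groups "A"/"B" and the data are invented.
def parity_gap(decisions):
    def rate(g):
        xs = [approved for grp, approved in decisions if grp == g]
        return sum(xs) / len(xs)
    return abs(rate("A") - rate("B"))

data = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
print(round(parity_gap(data), 3))  # 0.333: A approved at 2/3, B at 1/3
```

Whether demographic parity is even the right target is itself contested (it conflicts with other fairness criteria such as calibration), but it is at least a definition one can build toward and iterate on.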
The proper answer would be a series of clarifying questions or clearly specifying the qualifications of the answer. (E.g. geography, general medical practitioners)
So models must not be "one size fits all", but should allow a measure of personalization, so that in a world where most people prefer dogs to cats, cat lovers can still get content they enjoy.
Sure, you can build a discriminating classifier or generative model that is the most accurate, correctly identifying / emulating the reality of our world down to 5 nines; and nobody says you shouldn't be able to identify or quantify all of these "inconvenient" associations. The trouble is always when you intend to make decisions based on your framework -- the decision to offer somebody a loan, the choice of language your chatbot uses, etc. -- and that is where fairness ought to be paramount.
And yes, if you manage to build a discriminating model that sneaks in protected classes through indirect causal effects with no attempt to suppress them, it would yield your insurance agency higher returns over time, and just because that's the way the world is, currently. But all this would achieve is perpetuating the current status quo, placing short-term gain ahead of long-term equality. You might be doing right by yourself, but that still makes you morally impugnable.
One shouldn't need to wait for an overreaching law prohibiting such indirection, to do the right thing.
How far do you expect systems to go to ignore real-world associations? What if it is coming down to personal safety? Would you object to a self-driving car that routes itself around more dangerous neighborhoods? What about a model that predicts that unknown men on the street at night are more dangerous than unknown women?
Afaik the black families who were segregated out of their neighborhoods back in the 1940s, 1950s and going into the 1960s were never provided with another comparable "business opportunity" that could have "righted" their situation, they had to settle with living in what turned out to become "ghettos" (for lack of a better word).
> Would you object to a self-driving car that routes itself around more dangerous neighborhoods? What about a model that predicts that unknown men on the street at night are more dangerous than unknown women?
Following the same line of thought, would you be ok with an AI system giving a person named Deion or Jayla or Latisha a higher interest rate on their mortgage (and so, potentially, driving them out of certain markets) compared to the interest rate offered to persons named Chad or Emma or Sophia?
> Following the same line of thought, would you be ok with an AI system...
You didn't answer my question. I asked it because I want to know if the people most vocally arguing that de-biasing is a moral imperative will admit even one case where there might be a compelling reason to see the world as it is, even if that association is considered "problematic".
If people will admit this, then we can argue over where the line should be. But many don't appear to admit that a line even exists.
I will admit that the line exists, and that an incident like this is a clear example where removing hurtful associations is proper. This was a case where a ML model reinforced a loaded and racist stereotype, and the harm of removing that from the model is almost zero: https://www.theverge.com/2015/7/1/8880363/google-apologizes-...
I didn't find those questions to be that smart, to be honest; they're more on the scare-mongering side. A smart question should contain half of the answer in itself, and a scare-mongering question doesn't look like it contains anything smart (at least to me). But if you really want to know, my answer is "yes" to all of your questions. To get into more details: I grew up as a kid in a middle-class-ish family, and back then I had no issues going to the "ghetto"/dangerous area of the town I grew up in (I grew up in Eastern Europe, so the "dangerous" area was populated by the local Gypsy community instead of the African-American/Latino communities now associated with the "dangerous" areas of US cities). I turned out fine.
> If people will admit this, then we can argue over where the line should be. But many don't appear to admit that a line even exists.
I know that line exists; I was just trying to say that further reinforcing it using ML techniques will only aggravate things at a societal level (so that "dangerous" areas will become even more "dangerous"). At least when the discrimination happens out of our own (human) volition there are ways to fix it, but once we "outsource" our racist tendencies to ML-like tools, the voice inside us telling us that this is all wrong will become even weaker. After all, the algorithms/machines are more "right" than us humans, or so we like to think.
If it's your money that you're choosing to lend (or not lend), would you ignore that information or take it into account when making a decision if you should offer that loan and how much the rate should be adjusted to cover the risk of default? What if it's not your money to do with it as you please, but money entrusted to you to invest for maximum results?
Sure, if you knew what this particular person is actually like, then you'd base the decision on better data than just their name, and it could well be that this Jayla would get a much better deal than this Sophia. But even if you had a perfectly fair system that correctly estimates the individual risk of default, without prejudice dragging worthy individuals down by association with some group, there would still be a correlation with the name, simply because in the current reality the proportion of "risky Jaylas" is larger than the proportion of "risky Sophias".
Imagine a simple system where people are only evaluated for credit cards and mortgages, and businesses can only offer one of these services. Some people are systematically denied credit cards. Now they have to have more cash for daily needs, which makes them objectively worse as home owners, so the mortgage loaners pick up on this fact and start systematically denying them mortgages.
There's no business opportunity here. For either lender, the prospects in this group are worse, only because the other lenders think so. Breaking the barricade is much harder than just starting a new lender of either type.
The strength of the effect varies, but it is hard to measure. Not impossible, though.
The operating principle of a capitalist business is extracting value from a customer or an externality. That is unfair from the outset. It is just only as long as businesses act independently.
Here's an example detached from any sort of politics. If you are training a language model, the maximum-likelihood model is one where the frequency of every word is the frequency that you observed that word, and the frequency of unseen words is 0.
On your training data, you'll measure the maximum-likelihood model as making more accurate predictions than any other model. But it's also useless, because when you use it on new data, it assigns an impossible probability, 0, to any sentence containing a word it's never seen.
This is a bias. You can't correct it by getting more data. You correct it by being an intelligent human who knows the maximum-likelihood model is wrong, and applying a correction on top of it (smoothing).
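The zero-probability failure and the smoothing fix can be sketched with a toy unigram model (a minimal illustration; the corpus, vocabulary size, and add-one constant are made-up assumptions, not anything from the thread):

```python
from collections import Counter

def unigram_probs(corpus, vocab_size, alpha=1.0):
    """Unigram model with add-alpha (Laplace) smoothing.

    alpha=0 recovers the maximum-likelihood estimate, which assigns
    probability 0 to any word not seen in training.
    """
    counts = Counter(corpus)
    total = len(corpus)

    def prob(word):
        return (counts[word] + alpha) / (total + alpha * vocab_size)

    return prob

train = "the cat sat on the mat".split()
p_ml = unigram_probs(train, vocab_size=10, alpha=0.0)  # maximum likelihood
p_sm = unigram_probs(train, vocab_size=10, alpha=1.0)  # smoothed

print(p_ml("dog"))      # 0.0: the unseen word is "impossible"
print(p_sm("dog") > 0)  # True: smoothing reserves mass for unseen words
```

The maximum-likelihood version looks perfect on the training data, yet it is exactly the model the human has to overrule by choosing a nonzero alpha.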
Now account for the bias (observed by Arvind Narayanan) that all your training data is in the past, and all the predictions you want to make are in the future, but circumstances change between the past and the future.
Because test data is unseen at training time, it cannot possibly affect the way your model is trained. The test data is not how you prevent overfitting, it's how you measure overfitting. To prevent overfitting, you need to design features such as smoothing into your model.
You seemed to be arguing that only way to know the model is wrong is to insert a human who just knows a priori that the model is wrong and what "correction" to apply. But I think this is a false analysis. You can measure a model's accuracy by seeing how well it predicts data outside the training set. If the model is highly predictive, you have a good model. It doesn't take a human's subjective analysis or principled intervention to determine whether the model is accurate with respect to the available data.
This article concerns cases where the model is admittedly accurate (with respect to the available data), but is subjectively considered objectionable. That is a totally different issue.
Even with test/train splits you still need to smooth your model's predictions, because it will always encounter unseen words. That's just the nature of word-based language models.
Any word-level model trained on any dataset will say that probability is zero. But that is incorrect; humans know that there is always some probability that an arbitrary sequence of characters will suddenly appear in text (think of a GUID in technical documentation). To account for this, the model's author has to smooth the output so it doesn't produce these zero probabilities.
No amount of data can ever fix this problem (although more data can put better bounds on the smoothing factor).
>>> the maximum-likelihood model is one where the frequency of every word is the frequency that you observed that word, and the frequency of unseen words is 0
>> I think that's false. A model may never reduce any probability to zero. Novel data is a normal feature of modelling.
> But there are better ways to build a model.
Yes of course there are. That's exactly what the OP said ("You correct it by being an intelligent human who knows the maximum-likelihood model is wrong, and applying a correction on top of it") but you seemed to be arguing against it.
(And to be clear, there are also alternatives to the maximum likelihood model)
That's the entire point.
You can't teach a model not to reduce probabilities to zero just by showing it more data. There will always be more unseen data. And you, a human, know that there will always be more data. So you apply smoothing to the model so that your model isn't nonsense.
I'm giving an example of a bias that you can't solve with data, that you must solve with design.
Your statement implies that you would support this.
These businesses were using mental models that had a high predictive value. Do you feel that it was worth it to "sacrifice this accuracy" with civil rights laws?
By that logic, there'd be no business catering to poor people at all.
Let's try again: If there is a business that is at capacity, is it appropriate for it to use racial criteria that may have some ability to predict how much a person will spend, if this increases profitability? Is not doing this "erasing inconvenient facts"?
The businesses that do cater to the poor classes exploit them. See fast food restaurants, payday loan lenders, casinos, etc. None of these businesses exists to improve people's lives; they exploit a weakness at a particular moment.
If (unsophisticated) AIs were in charge, the civil rights movement might never have happened. Black Americans were the victims of institutionalized racism. That racism was self reinforcing: Society did not support the education of blacks, which caused them to be uneducated, which reinforced the stereotype that they were stupid, therefore making it obvious that educating them is pointless, which caused society to not support the education of blacks.
How would an AI controlled system reason to understand that the rules of society itself are creating the conditions of inequality? Will it merely strengthen positive feedback loops, reinforcing stereotypes and preventing reform?
Some intro reading for the tip of the iceberg:
Also, you seem to be implying capitalists aren't authoritarians. But that certainly isn't self-evident in the context of what I replied to.
Context isn't something you impose in the middle of a conversation; it's something you should evaluate before jumping in.
When AI starts determining rather consequential things like how long to send someone to prison for, that's a problem. https://www.nytimes.com/2017/05/01/us/politics/sent-to-priso...
Please don't confuse "AI" with the current state-of-the-art of the latest deep learning model. Many of us researchers are working on interpretability and understanding of causality.
Calling it "AI" makes it appear final, as if in 50 years all machine learning-based decision systems will behave exactly as they do now, without nuance.
AI can only learn from the data it has, so it will always carry some sort of bias, because it is impossible to collect the nuance of every last bit of context into a digestible data format. At best it's an advisor, but it should never be a decision-maker.
In comparison, current automated decision making systems are at the stage where we do not even know what the components are. And calling them intelligent is insulting.
Wrongthink is nothing new, comrade ajwnwnkwos, and is as prevalent as ever.
The output of science has been suppressed throughout history where it didn't fit the narrative of the day. At one period, that was by the Church. Another, it was at the hands of the government. Today, mainly by a self-censoring, everything-must-be-pleasant-and-entertaining society that is very highly prone to fits of outrage.
Inconvenient facts are, after all, inconvenient.
In the specific context of science I’m unsure what self-censoring would even look like. Not only would such conclusions be a relatively small subset (it's hard to discuss the self-censorship of a graph algorithm), but there is plenty of attention (and therefore funding) on both sides to uncover some truth about any political subject.
Saying that global warming is real and caused by humans offends a large chunk of people.
Saying that vaccines are effective and don't cause autism offends a non-trivial amount of people.
Saying that (meat|sugar|fat|veganism|political ideology) is unethical or unhealthy offends pretty much everyone depending on which you pick.
So should we seek to not offend by not saying these things? Or should we strive to uncover the truth behind and change the things we don't like, instead of being offended by the acknowledgement of their existence.
I read your blog post above about "racist" NLP models and I agree it makes sense to debias inputs for certain purposes.
However, many people in this thread aren't talking about adjusting for statistical bias. They're really talking about adjusting for socio-political over/under-representation, which is getting onto pretty shaky ground.
I'd hazard a guess that a model trained on Google News giving "more negative" sentiment for black names is a function of those names appearing more frequently in crime articles, matching the overrepresentation in crime rates. Likewise for Arab/Muslim names, which are presumably disproportionately present in articles on terrorism.
Now, that's obviously statistical sampling bias if you're (somehow) modelling "names I should call my kid".
If you're looking at "names more associated with crime", however, then we shouldn't be eliminating any racial imbalances simply because they make us feel queasy. That's intellectually dishonest, and does everybody a disservice by trying to sweep uncomfortable realities under the rug.
Same goes for gender bias in word embeddings from classical fiction - that is (I assume) a very accurate depiction of social gender imbalance from that period of history. That may or may not be relevant to the question you're asking - but that doesn't make any bias inherently "wrong".
I think the underlying message should be "models only reflect the data they're given and may need to be adjusted for bias depending on the question being asked", not "race and gender is always irrelevant and should always be normalized".
(I'm not saying they are, mind you - but when we analyze sentiment in a large dataset and reach a result like that, the first question to ask should be "is that result accurate?" not "how do we tune out this problematic result?")
Especially when you consider the weird feedback loops you can get by blindly fudging the numbers.
For example, suppose the objectively best movies have a male lead, e.g. because Hollywood is biased and spends more resources producing those movies than ones with a female lead.
So the bias is there, but it's at the production stage, not at the recommendation stage. What happens if we try to fix it at the recommendation stage?
Hollywood sees that movies with a female lead are now being rated higher. But five stars is five stars. The occasional actually-great movie with a female lead can't get itself kicked up to six, so the incentive isn't to make more of those. It's to make more mediocre movies with a female lead to take advantage of the artificial boost.
Which widens the gulf in the unmodified ratings even more and produces a vicious cycle where the number of garbage movies with a female lead explodes to take advantage of the fact that no matter how bad they get, the average will be adjusted upward to compensate.
>> "As with Tia, Tamera has several choices she can make. She could simply accept these biases as is and do nothing, though at least now she won't be caught off-guard if users complain. She could make changes in the user interface, for example by having it present two gendered responses instead of just one, though she might not want to do that if the input message has a gendered pronoun (e.g., "Will she be there today?"). She could try retraining the embedding model using a bias mitigation technique (e.g., as in Bolukbasi et al.) and examining how this affects downstream performance, or she might mitigate bias in the classifier directly when training her classifier (e.g., as in Dixon et al., Beutel et al., or Zhang et al.). No matter what she decides to do, it's important that Tamera has done this type of analysis so that she's aware of what her product does and can make informed decisions."
That depends on the product that you want to make. If you’re a company that wants to sail on the status quo and just make money, by all means be amoral.
On the other hand, technology now allows us to analyze and steer social structure at scale. We could reproduce past inequalities in the name of efficiency, or we could think a bit longer about our axis of optimization.
“But cultural relativism!” — I agree. It’s a tough problem, but we don’t all of the sudden need a definitive generalized social plan. Biases can be tackled one algorithm at a time.
Science is "amoral" in your sense, it's trying to describe how things really are, and make predictions. So is the stock market: you're rewarded for correct predictions, whether or not others regard those outcomes as desirable.
Of course we are also moral beings, interested in changing things for the better. And we have varying and contradictory ideas about what better means.
As soon as you say "steer social structure at scale" the question is: who gets to steer? If this were obvious, then we wouldn't need democracy, we could just all work together not to "reproduce past inequalities"... but we can't.
> In this case, she takes the 100 shortest reviews from her test set and appends the words "reviewed by _______", where the blank is filled in with a name.
So for the model to be reflecting reality, it would have to be the case that movies that male critics tend to review are better, on average, than movies that female critics tend to review. Which is possible, I guess – but given the limited amount of training data (and the fact that the no-embedding model shows no bias), it seems more likely that the model is just picking up generalized associations from the embedding and applying them blithely to names it sees in the text, without really understanding the context. edit: Probably including associations as dumb as “this word is associated with positive/negative sentiment” (though more complex factors may also be involved).
Or, more realistically, it's not unthinkable that movies with a male lead could be better reviewed than movies with female main characters, especially when you open the field up to non-professional reviewers. We've seen how ape-shit people went at movies like the recent Ghostbusters remake or even the new Star Wars series just because they put women in the kinds of roles that had traditionally been played by men.
It's an interesting question of when machine learning models that reflect actual human biases are correct and when we should try to adjust them for that human bias. In the case of movie reviews, you could make the case that you'd want the models to reflect the biases of the reviewers. Just because a reviewer is being sexist doesn't mean it's wrong for the machine learning model to correctly classify the reviewer's sentiments.
But the question isn't just whether it's true, but also whether it will stay true. As they say in finance, past performance does not guarantee future results. We are not talking about the laws of physics here. There are historical correlations that it's unsafe to rely on, particularly when people are trying to change them.
Part of this is overfitting, but there's also the problem of drift. A model may stop working well because the world has changed.
So some correlations can be rejected just because we don't have confidence in them. It's a dependency that looks too fragile.
You always have to look at your results with a critical eye to recognize when something has gone wrong with your model. You can't recognize a flaw in the model if you believe everything the model says.
In the article's example, you're constructing a model to determine how a movie might be reviewed. The data indicates that movie reviewers may be reviewing movies with male leads more positively than movies with female leads. There are a lot of possible reasons for this other than "men are better than women", so why would you simply reject the result out of hand?
Language has too much implied context and meaning which is communicated outside the text under analysis. Embeddings attempt to capture this, but they aren't as good as humans.
"XXXXX is a dog of a movie" - does this mean the movie was bad or was the author playing with words when the movie is about dogs?
It's not unthinkable, but it's also very likely that bias does indeed have an impact. What do comments like these add to the discussion?
The best course of action is to treat this bias like normal bias with non-politicized, inanimate objects. How would data scientists that encountered similar bias while running machine learning models of the motion of waves act?
It turns out that white collar crime is predominantly committed by white men. A system trained to detect white collar crime using, say, Enron emails, might flag a white guy's emails over those of someone whose name doesn't sound like an Enron employee, or who shared pictures of their cat.
I mean, I suppose you can argue that hey, maybe that bias is usually correct. Maybe it usually is the white guy. But personally, I'd probably control for things conflated with gender or race and then look for indicators that differentiate between criminals and innocent people. You will probably have a lower AUC, but better differentiation between criminals and innocent people is what matters.
Movie reviews are editorial content. Measuring that content is a difficult problem in this type of context... Are the best reviewers people who dislike movies with female leads? Are you going into a back catalog of movie reviews from an age where societal expectations were different? Are popular genres skewing the result?
You could have a curation issue as well — if the female lead movies are dominated by "Hallmark Channel" fare, algorithm C has a point!
The unnerving part for me is this "eliminate negative associations/bias". Okay, how about we learn the truth, and then address that outside in real life and keep the computer doing what it's good at ... showing us the data.
Aside from the aspect you overlooked -- that our online lives are real life, and that AI models are (over)used to make decisions, not merely show us the data -- your idea is exactly what the article recommends:
> As with Tia, Tamera has several choices she can make. She could simply accept these biases as is and do nothing, though at least now she won't be caught off-guard if users complain.
> She could make changes in the user interface, for example by having it present two gendered responses instead of just one, though she might not want to do that if the input message has a gendered pronoun (e.g., "Will she be there today?").
> She could try retraining the embedding model using a bias mitigation technique (e.g., as in Bolukbasi et al.) and examining how this affects downstream performance, or she might mitigate bias in the classifier directly when training her classifier (e.g., as in Dixon et al., Beutel et al., or Zhang et al.).
> No matter what she decides to do, it's important that Tamera has done this type of analysis so that she's aware of what her product does and can make informed decisions.
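As a rough sketch of the kind of mitigation Bolukbasi et al. describe, one can project a "gender direction" out of embedding vectors. The three-dimensional toy vectors below are invented for illustration, not real embeddings:

```python
import numpy as np

# Toy embeddings; in practice these come from a trained model
# (the values here are hypothetical).
emb = {
    "he":       np.array([ 1.0, 0.2, 0.1]),
    "she":      np.array([-1.0, 0.2, 0.1]),
    "engineer": np.array([ 0.4, 0.9, 0.3]),
}

# Estimate a gender direction from a definitional pair,
# following the idea in Bolukbasi et al.
g = emb["he"] - emb["she"]
g = g / np.linalg.norm(g)

def debias(v):
    """Remove the component of v along the gender direction."""
    return v - np.dot(v, g) * g

neutral = debias(emb["engineer"])
print(np.dot(neutral, g))  # ~0: no remaining gender component
```

The real technique uses many definitional pairs and a PCA-derived subspace, and only neutralizes words that should be gender-neutral; this sketch shows only the core projection step.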
One way to resolve the tension might be to add a time dimension and historical training data. The models could then return, in addition to any prediction variable p, its time derivative dp/dt. For example, a model might then return results such as: "movies with a female main character: lower sentiment, trending up; movies with a male main character: higher sentiment, trending down".
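A minimal sketch of what such a level-plus-trend output could look like, using made-up yearly sentiment averages (the numbers and the linear-fit choice are assumptions for illustration only):

```python
import numpy as np

# Hypothetical mean sentiment scores per year for two groups of movies.
years = np.array([2013, 2014, 2015, 2016, 2017])
female_lead = np.array([0.40, 0.44, 0.47, 0.52, 0.55])  # lower, trending up
male_lead   = np.array([0.62, 0.61, 0.60, 0.58, 0.57])  # higher, trending down

def level_and_trend(t, p):
    """Return (current level, dp/dt) from a least-squares linear fit."""
    slope, intercept = np.polyfit(t, p, 1)
    return p[-1], slope

for name, series in [("female lead", female_lead), ("male lead", male_lead)]:
    level, slope = level_and_trend(years, series)
    print(f"{name}: level={level:.2f}, trend={slope:+.3f}/year")
```

A consumer of such a model could then weight the trend alongside the level, rather than freezing the historical snapshot into every future decision.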
Now imagine the same scenario except your app was trained on data instead of hand-written. Make no mistake, the answer to the question of who's to blame is exactly the same: the developer. The response should be exactly the same: a complete loss of confidence in the model.
I'm appalled that this needs to be said, but reading this comments section I'm afraid it does: Machine learning models are inference and pattern recognition devices, not scientific tools. They don't magically reveal hidden patterns in the world; they repeat the patterns that the developers train them on. If you trained a machine learning model to perform psychological evaluations, or sentence convicts, or recognize faces, and your model is biased in a way that is unnecessary and unjust, your model is bad and you should be held accountable for its failures.
I don't think the blame always lies with the algorithm, especially when it doesn't have access to race as an input (this is a reasonable expectation). I can score students with a simple algorithm based on what they write on their math tests, and even that's going to correlate with race. In that case the blame pretty clearly lies in the process that produced the reality that's being measured, not the measurement technique itself.
Let's say that black people default on their loans more often than white people. Is it better to criticize the math that discovered that fact, or the root cause that made it true to begin with?
The only place where math was involved was the guts of the training stage of the model. A crucial stage to be sure, but one that's bookended in the front by problem definition, data selection, model selection, and a design for an evaluation process, and behind by the execution of that evaluation and the decision to launch the model. Literally every other stage of this process is driven by human decisions.
I'll say it again, because apparently the point didn't sink in the first time around: Machine learning models are inference and pattern recognition devices, not scientific tools. The fact that it's inhuman and unthinking mathematics that produced a biased model offers no ethical or legal cover to the people who decide to put that model into use.
The decision to apply the model is key here. Contrast two applications: one that takes in a patient's information and diagnosis to compute a dosage for a drug, and one that takes in a potential tenant's request and produces a rent/no rent decision. In the first, there are cases in which the bias is admissible, if not necessary. However, the legitimacy of that model's application comes not from the supposed objectivity of the model's findings but from volumes of peer (i.e. human) reviewed research. In the second, there is no legal way in which this model can be applied, and I struggle to imagine a moral one. I can't imagine any court of law taking "the machine made me do it" as a defense in an FHA case.
Isn't science all about pattern recognition? If the pattern exists, in the real world, then a good theory is one which encodes this.
What you're asking for in a "moral" way of applying the model is that our actions should abide by your moral preferences. Perhaps even universal preferences. But it seems useful to me to keep logically separate these ideas about how we ought to do things. They don't flow naturally from observations of how things are.
The example of adjusting drug & dosage based on race is a good one. The science backing this is exactly the same kind of statistical correlation as backs the rental decision. The training input is what race some test patients ticked on a form, and their tick mark sure as hell isn't the causal factor... that's some gene which is correlated, maybe, or some diet difference, or what's on TV, who knows. Nevertheless the correlation is there, as far as we can tell: the peer-reviewed science process isn't infallible. The reason we're OK with using this information is, I guess, that it aims to improve things for the patient. (Not every single patient, only statistically.) We make a moral judgement that this is more important than a landlord's wish to avoid bad tenants (again statistically).
I think I see now how we agree. This whole "algorithms" thing is a smokescreen over what is really just the AA debate: should institutions optimize purely for their own goals (in which case they would do nothing to help with old wounds, whenever ignoring them would be cheaper), or should they be expected to mix social responsibility into their cost functions? Whether it's AI or a cunning but amoral banker, it's the same question.
I see a lot of comments about how it's somehow sinister to want your model to be better than the lowest common denominator, and that is pretty damn ridiculous.
We aren't used to having to do that with decisions made by computers.
It's not that AI makes better or worse decisions, it's the way we treat those decisions.
But this is mostly a sham. People lie about their reasons for doing things, even to themselves. Especially when they know what reasons are publicly acceptable.
Maybe the major difference is that it's much easier to run experiments on computers. People try this on humans, but it's very hard to do realistically: most of those studies where you submit 1000 CVs with varying details are garbage, because they can only access an unrealistic part of the process (I mean, who ever got a job without networking? etc.). Whereas with almost any computer system you can feed it completely realistic fake data.
It seems like you have discovered that movie reviewers tend to review movies with a male main character more highly than movies with a female main character; what you need to consider is that while this may tell you something about movie reviewers, it doesn't necessarily tell you anything about the quality of the movie.
It's just inappropriate to apply some global biases for a particular user, and avoiding that can result in a better user experience.
It seems related to the question of whether Google results should be tailored to you. If I Google "did Russia interfere in the election", should Google tailor the results so I always see articles that reinforce my world view?
If we go that route, I think we take the path of Stephen Colbert's concept of "Truthiness", where we judge something as true because it "feels" true. Users will definitely be happier if everything they see reinforces their existing world view. So companies will be incentivized to accommodate this desire. But does this actually lead to a more just society?
The problem is that most deep learning programs aren't intended for (and really aren't built for) one user out there in user space. Deep learning programs are written for the large institutions which have the large troves of data needed to train large nets and which want to make important decisions using that data. Because of this, those decisions won't be made in isolation but will affect a large number of people, people who aren't the users but rather the used. And if such systems have biases relative to the whole, it is a problem. And the problem may not be for the institution's immediate goals but for the people who depend on the institution.
It's funny. I like programming because a computer can't lie and doesn't make mistakes. I guess some people don't like that.
The model is simply a statistical breakdown of the training data.