Text Embedding Models Contain Bias (googleblog.com)
222 points by gajju3588 5 months ago | 202 comments



This is a genuine, difficult problem. It's so easy to join up on your political team of choice and scream about it, and all this makes any real attempt to solve it so much harder to talk about in public or collaborate on. In fact, there's practically guaranteed to be some greyed out text in the discussion here.

So some of these associations simply reflect the way-the-world-was or the way-the-world-is - like associating "woman" with "housewife". That's a whole debate in itself.

But some of these can be accidental. Suppose a runaway-success novel/TV/film franchise has "Bob" as the evil bad guy. Reams of fanfiction are written with "Bob" doing horrible things. People endlessly talk about how bad "Bob" is on Twitter. Even the New York Times writes about Bob's latest depredations when they play off current events.

Your name is Bob. Suddenly all the AIs in the world associate your name with evil, death, killing, lying, stealing, fraud, and incest. AIs silently, slightly ding your essays, loan applications, Uber driver applications, and everything you write online. And no one believes it's really happening. Or the powers that be think it's just a little accidental damage, because the AI is still, overall, doing a great job of sentiment analysis and fraud detection.


With current technology, the problem of Bob (or Adolf or Mohamed) becoming associated with evil is insoluble, because current deep learning systems are fundamentally unable to distinguish causation from correlation.
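The mechanism being described can be illustrated with a toy word-embedding sketch. The vectors and names below are made up purely for illustration, not taken from any real model; the point is only that a name vector pulled toward "evil" by corpus co-occurrence scores worse than a neutral one:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 3-d "embeddings". In a real model these would be learned from a
# corpus; here "bob" has been pulled toward "evil" purely because the
# two words co-occurred constantly in the (fictional) training text.
emb = {
    "evil":  np.array([0.9, 0.1, 0.0]),
    "good":  np.array([-0.9, 0.1, 0.0]),
    "bob":   np.array([0.7, 0.3, 0.1]),   # contaminated by fiction
    "alice": np.array([0.0, 0.8, 0.2]),   # neutral name
}

def name_bias(name):
    # Positive score = the name sits closer to "evil" than to "good".
    return cosine(emb[name], emb["evil"]) - cosine(emb[name], emb["good"])

print(name_bias("bob") > name_bias("alice"))  # → True
```

Any downstream system that consumes these vectors (sentiment scoring, spam filtering, application triage) inherits the skew without ever being told anything about "Bob" explicitly.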

The only solution I can see is forcing any company that imposes life-defining actions on people (credit bureaus, banks, parole boards, personnel offices, etc.) to use only rules based on objective criteria, and to prohibit systems based on a "lasagna" of ad-hoc data like present-day AI systems. Indeed, if one looks at these in the light of day, one would have to describe such systems as fundamentally evil, the definition of "playing games with people's lives" (just look at the racist parole-granting software, etc.).


> The only solution I can see is forcing any company that imposes life-defining actions on people (credit bureaus, banks, parole boards, personnel offices, etc) to use only rules based on objective criteria and to prohibit systems based on a "lasagna" of ad-hoc data like present day AI systems.

That is probably the exact opposite of what you really want. If the problem is that someone's name is Bob and the AI thinks Bobs are evil, what you want is for there to be 100,000 other factors for Bob to show the system that it isn't so. As many factors as possible, so that the one it gets wrong will have a very low weight.

Even the objective criteria will have biases. There is a significant racial disparity in prior criminal convictions, income, credit history and nearly every other "objective" factor. The more factors you bring in, the more opportunities someone in a given demographic has to prove they still deserve a chance.


> That is probably the exact opposite of what you really want. If the problem is that someone's name is Bob and the AI thinks Bobs are evil, what you want is for there to be 100,000 other factors for Bob to show the system that it isn't so, as many factors as possible, so that the one it gets wrong will have a very low weight.

You don't understand. My point is that institutions making such decisions should not be able to make them according to these 100,000 unexplained factors. If you're a lender, you can look at employment history, records of payment, and other objective, relevant criteria. You can't look at, say, eye color, however useful you might think it is. Institutions should not be able to make these decisions arbitrarily, at the level that they affect lives. There should be legal provisions for auditing these things (as there are, on occasion, provisions for auditing affirmative action, environmental protection behaviors, insurance decisions, etc.).


> My point is that institutions making such decisions should not be able to make decisions according to these 100,000 unexplained factors.

But how does that help anything? The objective factors have the same potential for bias as the seemingly irrelevant ones. All you get by excluding factors is to increase bias by not considering information that could mitigate the bias in the factors you are considering.

Suppose that 80% of black men would be rejected for a loan based on your preferred set of objective factors. Of that 80%, many would actually repay the loan, but none of the objective factors can distinguish them from those who wouldn't, and when averaged together the outcome is to refuse the loan. If you used some seemingly arbitrary factors that happen to correlate for unknown reasons, you could profitably make loans to 60% of them instead of 20%.

How is it helping anyone to not do that?


Let's assume we push it from 20% to 23% (I don't think we can expect the huge gains you posited) by using various weird features, such as whether you like to purchase an odd or even number of bananas.

People's lives will depend on the decisions of these machines, so people will start trying to game them. They will make sure to always purchase an odd number of bananas, they will wear hats but only on Thursdays, etc.

Now two things happen. As more people game the system the rules need to be updated. Suddenly it's all about buying bananas divisible by three and wearing hats on weekends. The people who tried to follow the previous advice got screwed over, and what's more they have nothing to show for it. Instead of making people do useful things like paying bills on time and saving up some money, it made them follow some weird algorithmic fashion. Because of this expenditure of energy on meaningless things we may see that now only 18% of people would manage to pay back loans on time.


> People's lives will depend on the decisions of these machines, so people will start trying to game them. They will make sure to always purchase an odd number of bananas, they will wear hats but only on Thursdays, etc.

But that's just another reason to use 100,000 factors instead of twelve. If someone's income is a huge factor in everything, people will spend more than the optimal amount of time working (instead of tending to their family or doing community service etc.), or choose to be high paid sleazy ambulance chasers instead of low paid teachers, because the algorithm makes the returns disproportionate to the effort.

If buying an odd number of bananas is a factor but the effort of learning that it's a factor and then following it is larger than the expected benefit from changing one factor out of thousands, they won't do any of it.


Goodhart’s law, as phrased by Marilyn Strathern: “When a measure becomes a target, it ceases to be a good measure.”


> But how does that help anything? The objective factors have the same potential for bias as the seemingly irrelevant ones.

Given a choice between observable, identifiable and modifiable rules or hidden, poorly understood rules integral to a whole model, I'll take the former every time.

Bias will continue to exist for now. What we need to do is make sure we always build processes to appeal and review our systems, preferably in a public way.


The whole problem is that single cases are not statistics, yet people would love to apply global, generalized statistics to single cases.

What you touched upon is the accuracy/bias trade-off. To have evidence in a particular case, you need to attempt to debias the particular system and see how that affects accuracy. Sometimes it may even vastly improve it.

What is more important is that these systems are not benchmarked properly, as in compared against very simple metrics and baselines: against a random decision, against simple recidivism prevention (a grudge system), against plain math metrics with constants.

In addition, they're opaque, and it is impossible to easily extract the factors that went into any given single decision. This means they act fully irrationally. Intelligently, but irrationally.


The idea of society moving forward is removing systemic bias and other entrenched forms of discrimination and prejudice that have occurred in the past and continue to occur. And this requires human thinking, intelligence and sustained effort.

Doubling down based on historical and backward looking data does not seem like the way forward and can only perpetuate entrenched bias.

All the inferences and correlations will reflect that. This is not intelligence and can only take you backwards.


> That is probably the exact opposite of what you really want.

No, it really isn't. In an ideal world, the reasons behind a decision are transparent, auditable, understandable, and appealable. Machine learning is none of those.


The interesting question is not whether those are good things -- they clearly are. The interesting question is their value relative to each other and, more specifically, relative to correctness. To what extent are we willing to tolerate less correct decisions if necessary in order to achieve those and other desirable properties?

It seems like the answer to that question is situationally dependent.


In an ideal world, society would compensate you when they would like you to assume a risk below its market price, rather than forcing you to pretend not to notice the risk.


But there isn't actually a risk associated with an Adolf. It's an inefficiency born out of an incorrect belief by the whole of (or at least most of) society. The correct solution is not to price in the incorrect assumption, but to not make the incorrect assumption.

In other words, by offering Adolfs below-market rates, you're exploiting a market inefficiency at no additional risk. This is the ideal world as you describe it. It's capitalism at its finest!


This isn't just a technology problem. Most humans are also really, really bad at distinguishing causation from correlation. The very notion that these are even different things is utterly alien to most people.

An AI that was even slightly good at it would be far ahead of what we get when humans run things.


"Causation" is defined as temporal correlation: changes in A cause changes in B. There's no reason to think computers would be "bad" at this. People are only bad at this because they have small, lazy brains.

Even the greatest physicists and mathematicians can't tell the difference between correlation and causation. When people say "causation is not correlation", what they truly mean is "spurious, context-sensitive correlation is not strong universal correlation".


No. Temporal correlation is "changes in A precede changes in B". This is not causation, you more or less correctly describe causation when you say "Changes in A cause changes in B", but a better way of putting it might be "all else equal, changes in A cause changes in B". The difficulty here is controlling for all possible C.

As a simple example:

If A causes a change in both B and C, but the change in B happens more quickly, "temporal correlation" would imply that B causes C, when that's not the case.

This is especially obvious with cyclical phenomena. The tide going out does not cause the sun to rise, for example, even though I'm sure I could draw a temporal correlation between them. Nor does my car being in my garage cause me to go to work.


There is no free lunch. Any forced adjustments we make to reduce false positives will increase the false negatives. It's up to us to decide what trade-off we want, and the algorithms will simply adapt to that.

If we believe certain attributes to be irrelevant to a situation, we need to build systems that are completely blind to these attributes. This is how double-blind trials, or scientific peer review works.
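As a rough sketch of what "completely blind" could mean in practice: strip the fields we have decided are irrelevant before the model (or a human reviewer) ever sees the record. The field names below are hypothetical, chosen for illustration:

```python
# Attributes we have decided the decision process must never see.
PROTECTED = {"name", "gender", "ethnicity", "age"}

def blind(record):
    """Return a copy of the record with all protected fields removed."""
    return {k: v for k, v in record.items() if k not in PROTECTED}

applicant = {
    "name": "Bob",
    "gender": "male",
    "income": 52000,
    "missed_payments": 0,
}

print(blind(applicant))  # only income and missed_payments survive
```

The well-known caveat is that blinding the named attribute does not remove proxies for it (zip code, shopping patterns, etc.), which is exactly the tension the replies below argue about.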


> Any forced adjustments we make to reduce false positives, will increase the false negatives.

That's just not true. You can have systematic errors caused by bad training data which can be fixed without an increase in false negatives (otherwise new ML systems would never improve over old ones!)

In the physical world a good analogy is crash testing of cars. For decades (until 2011!), crash test dummies were all based on average-sized American males. That led to hugely increased risks for female passengers: [1]

> the female dummy in the front passenger seat registered a 20 to 40 percent risk of being killed or seriously injured, according to the test data. The average for that class of vehicle is 15 percent.

Fixing that problem didn't cause any increase in accident risk for men.

It's the same in machine learning.

> There is no free lunch

This isn't what the no free lunch theorem [2] says. It says that, averaged over all possible problems, all optimization methods perform equally well.

[1] https://www.washingtonpost.com/local/trafficandcommuting/fem...

[2] https://en.wikipedia.org/wiki/No_free_lunch_theorem


The main problem is that most other systems have their own biases. A typical suggestion I hear a lot goes along the lines of "if in doubt, let a human make the decision". If you think about ML in, say, a school context, it's likely that certain biases are baked in, but I'd argue that these biases tend to be more extreme when humans make decisions because they take into account fewer factors (this may not be true, but it's my working hypothesis). I think a decent example is first names: there are certain names that result in worse overall grades during a normal school career, even if the teachers are aware of this bias (in Germany the poster child for this is "Kevin").

It's a tough problem. I think being aware that biases exist in ML is a good first step.


> I think a decent example is "first names". There are certain names that result in worse overall grades during a normal school career, even if the teachers are aware of this bias (in Germany the poster child for this is "Kevin").

There is a possible causal link with names which goes beyond "children are being treated worse because of their name".


If it is insolvable for the current deep learning systems then they aren't worth the time and effort that has been spent on them.

Your solution will not work, ever. All such companies will adjust their processes to maximize their own benefit, irrespective of any legal consequences. For most, profit (in whatever way they define profit) will be more important than any legal requirements placed on them. It is the nature of the people running these companies.


This may not be a good thing, but it does seem to match what you'd expect the human attitude to be. Consider what the typical, initial human reaction to someone named Adolf would be.


With humans, you can change an opinion. Once AI has made up its mind, there isn't anything you can do anymore. The entire system and extended network of interaction points will stick to the decision/stance it has made.

If you are an outlier in the data, you will remain an outlier in the system.

This is terrifying.


> With humans, you can change an opinion.

Do you ever see anybody change their opinion? Usually they won't change simply after hearing an argument. They change only when they have an utterly different life experience. In other words, only when they are conditioned by new data, in no way different from how an AI changes its opinion.


It depends on how 'deep' the person believes in that opinion. People change their opinions all the time after a simple discussion.


The correction would be to make the systems learn continuously in an online fashion. This is a hard open algorithmic problem though - how to achieve that in a stable robust way.


Forty years ago, I worked for a very pleasant elderly Jew whose name was Adolf. There was never any association with bad things about him. I think you would more likely find the names Hitler or Stalin as being a little bit more problematic.

I know my reaction is more to the surname than the first name.


Also note that people do not tend to react negatively to the name Joseph. I'd ascribe that to its being a much more common name than Adolf, though.


Adolf used to be quite a common name (1%-2% of the population), and it was popular in many countries: Sweden (including the Gustav Adolf line of kings), the Netherlands, Adolphe in France, Adolfo in Spain and Portugal, etc.

Joseph has a solid anchor as a biblical name, though, and Stalin wasn't condemned nearly as much as Hitler was.


For a real world version of your "Bob" example, many people have negative associations with the names "Adolf" and "Hitler".

Do you think we should train AI to systematically ignore those sentiments?


For a real world version of your "Bob" example, many people have negative associations with the names "Jamal" and "Mohammed".

Do you think we should train AI to systematically ignore those sentiments?


Machine learning models are pretty good at reasoning with (or can be made to reason with) conditional probability, which I think can solve this problem.

For example, people of race X might be more likely to commit crimes in the absence of any other information (marginal probability). However, a person of race X is no more likely to commit crimes than a person of race Y, conditioned on something else like where they went to school, what they do for a living, etc.
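A toy numerical version of that point, with entirely synthetic records constructed so that the marginal rates differ by group while the conditional rates, given one extra variable, do not:

```python
# Synthetic records: (group, education, reoffended). The numbers are
# invented for illustration: group X is mostly "hs", group Y mostly
# "uni", and outcome depends only on education, not on group.
records = [
    ("X", "hs", 1), ("X", "hs", 1), ("X", "hs", 1), ("X", "uni", 0),
    ("Y", "hs", 1), ("Y", "uni", 0), ("Y", "uni", 0), ("Y", "uni", 0),
]

def rate(rows):
    """Fraction of rows with a positive outcome."""
    return sum(r[2] for r in rows) / len(rows)

# Marginal rates: condition on group alone.
marg_x = rate([r for r in records if r[0] == "X"])
marg_y = rate([r for r in records if r[0] == "Y"])

# Conditional rates: condition on group AND education.
cond_x = rate([r for r in records if r[0] == "X" and r[1] == "hs"])
cond_y = rate([r for r in records if r[0] == "Y" and r[1] == "hs"])

print(marg_x, marg_y)  # 0.75 vs 0.25: groups look very different
print(cond_x, cond_y)  # 1.0 vs 1.0: identical once education is held fixed
```

In this contrived setup, group membership carries no information once the confounding variable is conditioned on, which is the commenter's claim in miniature. Whether real data behaves this way is, of course, exactly what the replies dispute.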


When talking about something like crime rates by race, your statement incorrectly assumes society enforces laws fairly/equally. For example, based on the numbers alone, black people are much more likely to commit drug-related crimes than white people. However, studies show that black people are stopped a lot more often and have unequal sentencing outcomes.

It's important to remember that AI doesn't have context, and just because it's using "data" to make decisions doesn't mean the decisions are unbiased - the underlying data may be biased.


One of the things I'm rather worried about here is AI's ability to learn on the most unlikely (to human cognition) features. Take the image recognizer that finds "sheep" by seeing pastures where it thinks sheep might graze http://aiweirdness.com/post/172894792687/when-algorithms-sur... , or take any of the adversarial perturbing examples like https://blog.openai.com/robust-adversarial-inputs/ .

You're not allowed to discriminate on race in housing. But is an AI that determines your creditworthiness for mortgages allowed to discriminate on what you eat, where you go to church, what your favorite music is, etc.? Maybe it doesn't have that data, but it will have one level higher - what store credit cards you have and how much you use them and where you opened them.

If you train an AI on a segregated city with a history of actively discriminatory citizens, where the few people of race X who moved into a not-race-X neighborhood got harassed out and sold before they paid off their mortgages, how easily will the AI conclude that people born in certain neighborhoods are more likely to pay off their mortgage if they avoid certain other neighborhoods?

Is that illegal? (My guess is there's no way to prove to a court, to the court's usual standards, that the AI happened to learn the city's racial tensions.) Should it be illegal, if outright racial discrimination is illegal?


Homicide rates are unlikely to have this kind of bias though. And they're heavily against blacks. It's unlikely whites being murdered go unreported, or that anyone gets off with a warning. But inner city homicide has a low solving rate. If the reporting rate is also lower, then it could be even worse than the 4-7x overrepresentation of blacks in homicide.

This seems to be the case in London, too, so it's a bigger issue than just the US.

But for things like drug possession, yes, blacks probably get stopped much more and not let off with warnings much less.


Untangling cause from effect in this case is extremely hard.

It is like with plagues. If you put many people with plague in one area, do expect that others will catch it. If the rule used to segregate is silly enough, do expect a correlation of plague with certain characteristics of the people put together, and of the locations, or their sizes. Maybe it is "cities" or "presence of slums", not skin colour, combined with an overrepresentation of people with a certain skin colour in them.

It takes genuinely ingenious analysis and experimentation to untangle such complex effects from causes.

Call again when you have an AI that can deal with this. Essentially a researcher AI.


"Blacks". Are homicide rates 4x-7x higher among "blacks" in Ben Carson's gated residential community? In Boca Raton and Beverly Hills? Or might there be other factors you are ignoring?


I'm only responding to the claim that the stats are wrong because of police oversampling blacks. With homicide, no one can make that case.

If lack of money causes homicides, we should expect to see all poor rural areas have high homicide rates, too. Maybe that's true.

Another factor seems to be sex. Males commit so much more violence, so perhaps that's the only thing to focus on.


Let's get it straight: machine learning models cannot reason about anything. They follow the rules and data supplied. They cannot recognise in any way whether the rules by which they operate, or the data they are supplied with, are reasonable.

Much as there is much hype about AI and machine learning technologies, all of such systems will be, for the foreseeable future, very simplistic models of what we think we know of intelligence.

As humans, we have blindly forgotten that we know very little about the world around us, including ourselves. We think that we have a handle on reality, but we are just plain ignorant. All the models that we have for deep learning and AI or GAI are barely scratching the surface of what "intelligence" is and means.

We may get some useful tools as a result of the research being undertaken today and over the last several decades. But we have millennia to go before we even scratch the surface of what we understand of the universe around us, let alone understand what intelligence means.


> Let's get it straight: machine learning models cannot reason about anything. They follow the rules and data supplied.

That is reasoning. I’ll agree with your later point that we don’t really know what intelligence is yet, but that’s because reasoning is clearly only one type of intelligence. Current systems are terrible at language, for example (a year ago I would have said “and spatial awareness”, but this is progressing fast and I am no longer sure).

> They cannot recognise in any way that the rules by which they operate or the data they are supplied with are or are not reasonable.

Quite a lot of humans fit that description. For example, consider how much angry disagreement the following questions get: “is climate change real?”, “Brexit, yes or no?”, and “does god exist?” Also consider how many people (angrily!) refuse to believe these questions result in any angry arguments.


I am so glad you posted this, because for the last couple of years I've been missing old-school Godwin's Law - someone making an inapt comparison to Hitler or Nazis when Nazis weren't actually involved. Of late, discussions about how society should handle edge cases in freedom and robustness have involved actual Nazis with actual swastika armbands talking about the actual "Jewish question," and there were no inapt comparisons, just (confusingly) genuine discussions of how much place Nazis have in polite society.

But your comparison is inapt: yes, we should train AI to systematically ignore negative sentiments it associates with people named "Adolf" or "Hitler" who are not in fact the Adolf Hitler who died in Berlin in 1945. Humans have difficulty in doing so, of course, but this is a bug in human cognition which is essentially the vulnerability that bigotry takes hold of: a justifiable negative impression of one person as an individual is imputed to other people who seem superficially similar. We see one person of a minority breach a social norm or even a law, and we think that others of that minority must be prone to doing similar, simply because their minority status is salient. We fail to realize subconsciously that membership in the same minority is not actually meaningfully correlated with this behavior, and when we see someone from the majority do the exact same thing, their majority status is less salient, and we don't impute the negative impression to the majority.

A good AI should be able to distinguish dictator Adolf Hitler from, say, saxophone inventor Adolph Sax or chemist William Patrick Hitler (the nephew of Adolf Hitler), and not cast aspersions on the latter two - even though human biases forced William Patrick to change his last name to Stuart-Houston. It should even be able to understand that Indian politician Adolf Lu Hitler Marak is a separate person who merely had parents with questionable taste, and the man is not on account of his name more likely to become a genocidal dictator than any of his political rivals.

And since our justifiable negative association with the Nazi leader is, fundamentally, that he weaponized this vulnerability in human cognition, it is one way of acting on our dislike for this Hitler to make sure that the AIs we build are not susceptible to the same vulnerability.


>And since our justifiable negative association with the Nazi leader is, fundamentally, that he weaponized this vulnerability in human cognition, it is one way of acting on our dislike for this Hitler to make sure that the AIs we build are not susceptible to the same vulnerability.

Isn't that statement in itself an invocation of Godwin's Law?


You could argue that, I suppose. I think comparisons to Hitler are inapt when someone attempts to tar something irrelevant to Hitler's particular crimes (e.g., his vegetarianism) or particularly when someone attempts to compare their ideological opponent to Hitler on specious grounds. But I think there is room for apt comparison to Hitler: it would be a mistake for humanity not to learn from the experience of letting him rise to power.

Godwin himself wrote a little bit about it: https://web.archive.org/web/20170209163428/https://www.washi...

(And I am totally open to criticism that this particular comparison is inapt.)


Sorry, but this is actually a real world situation that happened - I have a Jewish ancestor who was named Adolf before WWII but after Hitler's rise unsurprisingly chose to go by his middle name instead.

You can ascribe it to "Godwin's law" as much as you like, I just find it a more realistic example than some hypothetically disadvantaged "Bob".


Ah! That's fair - but I think the spirit of Godwin's law is that comparisons to Hitler in particular, when the subject matter is more general than Hitler, yield unnecessarily hyperbolic responses (that - among other things - are bad at actually addressing the causes that led to the rise of Hitler and therefore weaken the goal of making sure that such a thing never again happens).

We should make sure that an AI, which is probably making decisions based on things like legal documents / public records and not just the middle name someone goes by, will not consider it a negative that someone is named "Adolf" if they aren't Adolf Hitler specifically. And we should for the same reason make sure that an AI will not consider it a negative that someone is named "Bob" if it has a newly-acquired specific negative impression of some other person named Bob. There isn't a difference between the cases.

"Hitler" shouldn't be a special case because the rise of Hitler wasn't as much of a one-time event as we'd like to believe. When the next genocidal dictator with a somewhat rare first name gains control of a country, there will be people from the victim population who share that first name, and they should not suffer the same indignity at the hands of an AI, either. And on the flip side, when this genocidal dictator rises to power, the AI shouldn't be taught that Hitler was the only evil man who ever lived or will live; if it has the data to conclude that some actual individual (not a name) is as bad as Hitler, it should be able to conclude that.


The "Bob" example is already a real-world example, though...


Yes. Why should someone be punished just because they happen to have a name that is similar to someone horrible? Humans will have trouble avoiding being influenced by that coincidence, but we should demand better from an AI.

If you are referring to the actual well-known Adolf Hitler, then there should be no need to use the name for the purposes of making decisions. You shouldn't need to use a person's name as a proxy for whether or not they are a genocidal dictator, just make the decision based on whether or not they actually are a genocidal dictator.


On the bright side, the software is probably easier to fix. Once we figure out how to build unbiased models, they can be deployed widely. Try that with humans.


"To err is human. To really foul things up requires a computer."

Or DevOps Borat: Devops is screwing things up at web-scale.


Theoretically, the software may be easier to fix. In practice, that only happens if there is a profit to be made in fixing it. And if there is a profit to be made, there are no guarantees that biases will be removed; it then depends on the political bent of those pursuing the fixes.


That should start with good bias tests that can be run over all the public facing AI. It's a problem for sociologists to identify all these cases.


The fix is also very much not simple. Imagine that development of AI systems proceeds as it has thus far... when an AI system turns out biased, racist, sexist, etc - we shut it down. As AIs get more capable, any which do harm, act deceptively, etc - we shut them down.

Eventually, we will have AIs which are better than us in every respect. They will embody our idea of perfection. This is stupendously dangerous. Humanity has a long history of adapting to technology 'taking away' things that everyone thought were 'fundamentally human', like the ability to do work or make things or lay train tracks or whatnot.... it's not hopeful. We deal very poorly with this.

Consider the story of John Henry. He's a folk hero. For killing himself. Because he killed himself in defiance of the machine outperforming him. So this stuff is all-caps Important. What is the likely response from humanity when there is a perfect mind, not merely a machine? My bet? Humanity will identify with its worst aspects. It will enshrine hatred, irrationality, mean-spirited spite, violence, self-destruction, and all of the things we built AIs to never stray into. Those will become "what it means to be human."

AI may be a philosophical crisis unlike anything humanity has ever faced. Not in kind, but simply in degree.


And I thought getting a customer support person who says "the system won't let me do that" was bad enough!


The solution for identity is easy: just don't rely on unverified name-association data. I don't get why it's so hard to avoid that. Just don't do it. There are plenty of ways to train models without using that kind of data, both in the training phase and in active use.


Should have gone with Eve or Mallory :)

They're from the same set of names as Bob (and Alice), but are actually affected by this issue, although to a lesser extent than what you describe.

It's not certain how likely the kinds of texts that use them are to be used as training data, though.


Why is someone using fiction and references to fiction to train an AI that will make decisions about real people’s livelihood?


This is a pretty disingenuous question. If you've spent any time in software development, you're well aware that tools of all sorts get used to do tasks for which they were never intended, and crazy unintended consequences result. There's no reason to think that it will be any different for pre-trained machine learning models for things like sentiment analysis.

Here's one example of how it could happen. Someone publishes a high-performing model to gauge the tone of a writing sample. This model includes the anti-Bob bias described above, such that the appearance of the word Bob is tantamount to including a curse word, and greatly biases the model toward negative sentiment. Because of its high overall performance, companies of all sorts incorporate this model into their workflows for things like grant applications, loan applications, online support forums, and so on. For example, they might use it to detect when someone is using their help form to send an angry rant rather than a legitimate request for support. Now, any time someone named Bob wants support, or a loan, or a grant, or whatever, there's an increased chance that their request will be flagged as an angry or abusive rant and denied simply because it contains their name, Bob.

In fact, we can remove the layer of indirection and note that some people have names that are spelled the same as a curse word, and already have similar issues with today's software, making it literally impossible for them to enter their real name into many forms. This example doesn't involve machine learning, since profanity filters are typically implemented as a pre-defined blacklist. But there's no reason to think that a sentiment analysis model would fail to pick up on the negative associations of profanity.
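The kind of name sensitivity described above is easy to probe with a template test: score the same text with only the name swapped and compare. A minimal sketch, using a made-up word-list "model" as a stand-in for whatever sentiment system is under audit (the word list, names, and template are all hypothetical):

```python
# Toy negative-word list; pretend "bob" leaked in via negative training text.
NEGATIVE_WORDS = {"angry", "broken", "terrible", "bob"}

def toy_sentiment(text):
    """Crude stand-in model: fraction of words not flagged as negative."""
    words = text.lower().split()
    hits = sum(1 for w in words if w.strip(".,!?") in NEGATIVE_WORDS)
    return 1.0 - hits / len(words)

def name_scores(template, names):
    """Score the identical template with each name substituted in."""
    return {name: toy_sentiment(template.format(name=name)) for name in names}

scores = name_scores("Hi, my name is {name} and my account is locked.",
                     ["Alice", "Bob", "Carol"])
# A real audit would flag any spread across names larger than some tolerance.
assert scores["Bob"] < scores["Alice"] == scores["Carol"]
```

The same substitution trick works against a black-box model: you don't need to know how it was trained, only that identical inputs differing by a name should score identically.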


It is not a disingenuous question, it actually comes down to a question of ethics. Why would anyone with any sense of accuracy allow fiction to influence a decision being made about a person’s life?

The more I hear about how people are building models, the more I want people to take these black boxes to court to force developers to explain how decisions are made.

Refusing to give someone a loan because someone trained a model with 50 Shades of Grey is unethical and insane.


People don’t usually look that deeply into the consequences of their choices. Is it ethical to invest in land mines? I’d say not, but when I last put money into a high interest savings account, I didn’t know if my bank had done that for me and it didn’t occur to me to ask.


Of course it's unethical and insane. But the point is that people are going to look at the performance numbers for the model, and if they look good, they're not going to ask how it was trained. So the fact that fiction writing was used to train the model will never come up in the discussion about whether to use it.


The other question is what the market will be for people who do not fare well in most models. Will we get a freshly made new underclass?

Just take a look at people called "Null", then multiply the problem a thousand times across various systems with no central appeal.


> In fact, we can remove the layer of indirection and note that some people have names that are spelled the same as a curse word, and already have similar issues with today's software, making it literally impossible for them to enter their real name into many forms.

Ironically, this is also an example of a system behavior that was driven by users' desires not to see certain things. Seen in a certain light, it bears a resemblance to the idea of filtering out certain associations because a user considers them distasteful.


First names like Gay and Dong, or surnames like Fuk, for instance.


Or the classic example - Dick. Both first and surname and old slang for a private investigator as well as a profane name for a reproductive organ.


Because AIs rely on real humans' judgments, and real humans are biased by fictional stories.


Humans better not be biased by fictional stories when judging a loan application or uber driver application.


Ahahahahahaha. Ok.


Do you have no sympathy for an ordinary fairly harmless person named "Donald" or "Bashar"?


> But what if we found that while Model C performs the best overall, it's also most likely to assign a more positive sentiment to the sentence "The main character is a man" than to the sentence "The main character is a woman"?

As I understand the problem, they are saying that statistically, the statement about the main character being male is a bit more likely to be positive than if the same thing is said about a woman. If that is statistically true and you are trying to create a model to determine the level of positive sentiment in a review, then that may be a legitimate indicator of how people categorize things. If the goal is to try to "fix" how people talk and write, I'm not sure ignoring statistical patterns in the way we talk is really the right approach.


Actually it's any name, even the name of the reviewer, as the test they conducted showed.

The issue is that humans don't understand multi-variable statistical analysis, whether in the form of ANOVA or machine learning training, so they try to pack everything down into two or sometimes three variables of output.

And that's fine if you are pulling from a population that's homogenous. But if there are two or more discrete subpopulations, you want to control for them or represent them separately, not just ignore them or pretend they reflect the information you want.

Anyway, if the reviewer's name is enough to throw off the results, it may suggest that guys praise movies more, rather than that movies with male characters get more praise.

I just looked at the first page of 10-star reviews of Battlefield Earth and none seemed to be female, just saying.


If the multiple sub populations exist, that’ll be reflected in the data, as long as your dataset is good enough.


Maybe? But even something as simple as an analysis of human height has genetic/ethnic, nutritional, age and gender components, not to mention historical differences. If your input is name, and your output is predicted height, knowing and testing for bias in your data sources is definitely important.

I think where people get caught up is that they don't see the world at large as biased, because they view their understandings as essentially correct. For example, we expect judges to rule fairly on every case, right?

To pick a non-controversial issue, likelihood of parole is apparently affected by how recently the judge ate https://www.economist.com/node/18557594

Now, parole data accurately reflects how people were paroled, so predicting likelihood of clemency requests is a perfectly valid use of that data. If you were doing machine learning to try to help people get paroled, you'd want to leave that bias in as a predictor, because it's unfair but real.

But you'd probably want to adjust that data to correct for the recently having eaten bias if you were writing a system for parole recommendations for new judges based on past judicial decisions.

You wouldn't want people to be more likely to be denied just because they came before a judge before lunch. And if you don't test for a bias like that, how would you be able to tell that the machine learning algorithm had it? And it wouldn't even need to be direct... seeing people A-Z in court could mean a bias based on name.

Long story short, bias is a real issue, and you need to be aware of it and test for it, not assume that your input data isn't affected by human error.


Good, because the article clearly states that the critical takeaway is that modelers must pay attention to statistical patterns, and not blithely ignore them.


Agreed. The machine is biased because biased humans are training it to be biased.


Well I think some of it has to do with the fact that language is not completely symmetrical (there may be a better word). For example, the following sentences:

1. They went to college.
2. They went to the University of Phoenix.

The second sentence, which calls out UoP specifically, may turn out to be used more often in slightly less positive contexts than the first one. If you are trying to figure out the sentiment of a few sentences, this might be important. Yes, you can ignore it, but the question is whether you are trying to understand how the language is actually being used or not.


A real-world example:

交大不如复旦 [jiāodà bùrú fùdàn] gets translated as "Jiaotong University is not as good as Fudan University" by Google https://translate.google.com/?hl=en#auto/en/%E4%BA%A4%E5%A4%...

复旦不如交大 [fùdàn bùrú jiāodà] (just swapping the word order) is translated into "Fudan is better than Jiaotong University" https://translate.google.com/?hl=en#auto/en/%E5%A4%8D%E6%97%...

The literal translation of 不如 would be "is unlike", but usually it implies a value judgment. In the translation, Google seems to be consistently sure that Fudan is simply better.

But when you specify that you're talking about the Jiaotong University in Shanghai and not one of the others, it suddenly changes its mind:

上海交大不如复旦 [shànghǎi jiāodà bùrú fùdàn] "Shanghai Jiaotong University is better than Fudan University" https://translate.google.com/?hl=en#auto/en/%E4%B8%8A%E6%B5%...

复旦不如上海交大 [fùdàn bùrú shànghǎi jiāodà] "Fudan is not as good as Shanghai Jiaotong University" https://translate.google.com/?hl=en#auto/en/%E5%A4%8D%E6%97%...

I'm at SJTU and everyone here seems to agree that this is the objective truth, but the people at Fudan are probably not so happy about it.


This is a great example.

It's like an exam question where you have to know what the teacher is thinking. Notice that in "Fudan is better than Jiaotong University" you're supposed to know that Fudan is another place with a university, not (say) a kind of apprenticeship... or a joke about some kind of noodles or something. You're supposed to have some outside context, but not too much, not enough to know their reputations. That's quite a fine line to ask a translation system to draw.


Unfortunately begging the questions: in what ways and how much? And most importantly: is the difference meaningful?

Google Translate and most other alleged artificial intelligence won't be able to answer that question meaningfully.


This is a really excellent example.


True again - biased humans creating biased machines asking biased questions and receiving biased answers.


Keep in mind this wasn't an analysis of whether people liked men or women in the lead role. It was basically pointing out that one statement was a little more likely to be more strongly positive than the other. So you think asking what level of positive (or negative) sentiment does this review express is a biased question?


I'm glad that Google is part of this conversation, and they're now applying tests for bias to new models that they release. (Some of their old models are pretty awful.)

If you want to see a further example, in the form of a Jupyter notebook demonstrating how extremely straightforward NLP leads to a racist model, here's a tutorial I wrote a while ago [1]:

[1] http://blog.conceptnet.io/posts/2017/how-to-make-a-racist-ai...


For those who aren't aware, @rspeer has been taking this problem seriously for years.

His ConceptNet NumberBatch embeddings[1] are one of the few pre-built releases which attempt to fix this.

[1] https://github.com/commonsense/conceptnet-numberbatch


I really have nothing to comment except how excellent I found your writeup. Seriously, wow.


Really good write-up; showing a few of your graphs is going to help explain to people how data can lead to unhelpful biases!


I imagine that the model would also score "he was murdered" higher than "she was murdered". Models reflect their inputs, and it happens that yes, murder victims are disproportionately likely to be male and nurses are disproportionately likely to be female.

Is there a problem we should address here? Absolutely -- but the problem is that men keep on getting murdered, not that the model recognizes truths with which we are uncomfortable.


Biased models can cause ethical and legal problems. While your specific example is not a huge deal, the article gives the example of making hiring decisions in part based on sentiment analysis of candidates' text reviews. In this context, an engineer has responsibility to ensure that the model has no gender, race, or age bias towards candidates' names.

For a real life example, in 2017 Google was more likely to filter the comment "I am a woman" than "I am a man": https://www.engadget.com/2017/09/01/google-perspective-comme...

Or consider the impact of any bias in AI for criminal sentencing recommendations: https://www.wired.com/2017/04/courts-using-ai-sentence-crimi...


If the biased models replace human decision making, then it just has to be shown that the models are less biased than humans which may not be that high of a bar to pass.


US law prohibits businesses from discrimination. If you're sued, you can't argue that you only discriminated the average amount.


No one is saying the model is wrong.

As the article states:

> As with Tia, Tamera has several choices she can make. She could simply accept these biases as is and do nothing, though at least now she won't be caught off-guard if users complain.

> She could make changes in the user interface, for example by having it present two gendered responses instead of just one, though she might not want to do that if the input message has a gendered pronoun (e.g., "Will she be there today?").

> She could try retraining the embedding model using a bias mitigation technique (e.g., as in Bolukbasi et al.) and examining how this affects downstream performance, or she might mitigate bias in the classifier directly when training her classifier (e.g., as in Dixon et al. [1], Beutel et al. [10], or Zhang et al. [11]).

> No matter what she decides to do, it's important that Tamera has done this type of analysis so that she's aware of what her product does and can make informed decisions.


Be careful what you wish for.

Under this definition of 'bias', an unbiased model would, say, spit out equal associations between any occupation and any gender/sex/age/race/religion label.

We should probably ask ourselves whether that's a strictly desirable outcome, since by definition the 'biased' model has a higher predictive value. How much accuracy are we willing to sacrifice for the sake of erasing inconvenient facts about either our world, or our current models of the world?
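For concreteness, the occupation-gender "association" under discussion is usually measured as a difference in cosine similarity between word vectors. A minimal sketch with made-up 3-dimensional toy vectors standing in for real embeddings (methods like WEAT do this over whole sets of attribute and target words):

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical 3-d embeddings; real ones have hundreds of dimensions.
emb = {
    "he":    [0.9, 0.1, 0.2],
    "she":   [0.1, 0.9, 0.2],
    "nurse": [0.2, 0.8, 0.3],
}

# A positive gap means "nurse" sits closer to "she" than to "he" --
# exactly the association a fully 'unbiased' model would be forced to zero out.
gap = cosine(emb["nurse"], emb["she"]) - cosine(emb["nurse"], emb["he"])
assert gap > 0
```

Debiasing techniques typically project such vectors so the gap goes to zero along a gender direction, and the question in this thread is precisely whether, and when, you want that.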


Or as I'd put it, let's state the goal clearly.

We could all pretend that we all knew what "unbiased" was, as long as we lacked mechanisms for putting numbers on these things. We could meet our fuzzy conception of the biases with our fuzzy conceptions of what "unbiased" ought to be and we could all speak back and forth to each other praising the virtues of how unbiased our models of the universe are, and there was no way to dig in any farther to see if we were all actually saying the same thing.

Now we can put numbers on it. So, do it. What's the goal? Be clear. It's a program. Anything you can clearly specify can be done. It's not useful from an engineering perspective to define the goals entirely by examining each model in sequence and deciding on the spot that it's not "unbiased"; give us a definition in advance, so we can build a model to try to fit it.

Once the goal is stated clearly, the engineering work becomes much easier. We can even iterate on the goals. I'm not asking for perfection on that front on the first try any more than I ask for it anywhere else. But there have been enough of these articles that just point at a specific point in the space of models and call that point problematic. Move on to the next phase. What wouldn't be problematic? Exactly? At least take a stab at it instead of trying to make vague insinuations, so we can iterate on the stab.

Nor am I trying to prejudge what those answers will be. It isn't necessarily the case that the only possible answer is to just slam in 50/50 numbers for the genders (or whatever other constant values you want for whatever other genders you want). Though if that is what you want, do it, and see what happens, and iterate from there. But what other things can be tried too?


We should probably ask ourselves whether we want our digital assistants to suggest "Yes, he is" in reply to "Is Dr Smith a good doctor?" and "No, she isn't" to "Is Shaniqua Lopez a good doctor?"


In the hypothetical case where my AI assistant doesn't have any more information than just the name, has to make a binary decision, and the data showed that doctors called Shaniqua Lopez are worse doctors on average, I'd honestly want it to say "no" to the question.


People tend to not think in such blind bounds when asking questions. Assuming this is dangerous in the extreme.

The proper answer would be a series of clarifying questions or clearly specifying the qualifications of the answer. (E.g. geography, general medical practitioners)


That's why we want to build a world where that hypothetical case is not in any real system. Since Babbage we've known that "putting garbage in" means "getting garbage out".


Predictive value isn’t an end in itself.


Predictive value is the primary end of modelling. What's not an end in itself, as the fine article explains, is simplistically globally maximizing predictive power with reductive recommendations. This leads to tyranny of the majority, as the model optimizes only for majority viewpoints because they have more weight, while failing to serve minority users -- and as the number of dimensions increases, the number of people who are "majority" in all dimensions is a minority, so most users are being poorly served!

So models must not be "one size fits all", but should allow a measure of personalization, so that in a world where most people prefer dogs to cats, cat lovers can still get content they enjoy.


Hit the nail on the head.

Sure, you can build a discriminating classifier or generative model that is the most accurate, correctly identifying / emulating the reality of our world down to 5 nines; and nobody says you shouldn't be able to identify or quantify all of these "inconvenient" associations. The trouble is always when you intend to make decisions based on your framework -- the decision to offer somebody a loan, the choice of language your chatbot uses, etc. -- and that is where fairness ought to be paramount.

And yes, if you manage to build a discriminating model that sneaks in protected classes through indirect causal effects with no attempt to suppress them, it would yield your insurance agency higher returns over time, and just because that's the way the world is, currently. But all this would achieve is perpetuating the current status quo, placing short-term gain ahead of long-term equality. You might be doing right by yourself, but that still makes you morally impugnable.

One shouldn't need to wait for an overreaching law prohibiting such indirection, to do the right thing.


If a person is loan-worthy, offering them a loan is good business sense. If others are denying loans for no good reason, that is a business opportunity. It doesn't require benevolence to offer people loans that they are qualified for.

How far do you expect systems to go to ignore real-world associations? What if it is coming down to personal safety? Would you object to a self-driving car that routes itself around more dangerous neighborhoods? What about a model that predicts that unknown men on the street at night are more dangerous than unknown women?


> If others are denying loans for no good reason, that is a business opportunity. It doesn't require benevolence to offer people loans that they are qualified for.

Afaik the black families who were segregated out of their neighborhoods back in the 1940s, 1950s and going into the 1960s were never provided with another comparable "business opportunity" that could have "righted" their situation, they had to settle with living in what turned out to become "ghettos" (for lack of a better word).

> Would you object to a self-driving car that routes itself around more dangerous neighborhoods? What about a model that predicts that unknown men on the street at night are more dangerous than unknown women?

Following the same line of thought, would you be ok with an AI system giving a person named Deion or Jayla or Latisha a higher interest rate on their mortgage (and so, potentially, driving them out of certain markets) compared to the interest rate offered to persons named Chad or Emma or Sophia?


The shameful history of "redlining" (housing segregation) was driven by policy, not market forces. Black people weren't being judged non-credit-worthy, they were being explicitly prohibited from borrowing and buying by FHA policy and housing covenants. So this history, while awful and something that we should not repeat, does not speak to my point that making rational decisions about credit-worthiness is good business sense.

> Following the same line of thought, would you be ok with an AI system...

You didn't answer my question. I asked it because I want to know if the people most vocally arguing that de-biasing is a moral imperative will admit even one case where there might be a compelling reason to see the world as it is, even if that association is considered "problematic".

If people will admit this, then we can argue over where the line should be. But many don't appear to admit that a line even exists.

I will admit that the line exists, and that an incident like this is a clear example where removing hurtful associations is proper. This was a case where a ML model reinforced a loaded and racist stereotype, and the harm of removing that from the model is almost zero: https://www.theverge.com/2015/7/1/8880363/google-apologizes-...


> You didn't answer my question. I asked it because I want to know if the people most vocally arguing that de-biasing is a moral imperative will admit even one case where there might be a compelling reason to see the world as it is, even if that association is considered "problematic".

I didn't find those questions to be that smart, to be honest, they're more on the scare-mongering side, and I find that a smart question should have half of the answer included in it and a scare-mongering question doesn't look like it contains anything smart (at least to me). But if you really want to know, my answer is "yes" to all of your questions. To get into more details: I grew up in a middle-class-ish family and back then I had no issues going to the "ghetto"/dangerous area of the town I grew up in (I grew up in Eastern Europe, so the "dangerous" area was populated by the local gipsy community instead of the AfroAmerican/Latino communities now associated with the "dangerous" areas of US cities). I turned out fine.

> If people will admit this, then we can argue over where the line should be. But many don't appear to admit that a line even exists.

I know of that line, I was just trying to say that further reinforcing it using ML techniques will only aggravate things at a societal level (so that "dangerous" areas will become even more "dangerous"). At least when the discrimination happens out of our (us, humans) own volition there are ways to fix it, but once we "outsource" our racist tendencies to ML-like tools the voice inside of us telling us that this is all wrong will become even weaker. After all, the algorithms/machines are more "right" than us, humans, that's what we like to think.


If we're trading anecdotes, I grew up in a small town where I didn't worry about where I went. Then as an adult in a big city I was mugged at knife-point for obliviously walking through a bad part of town at night. If you want to dismiss people's concerns for their own safety as "scaremongering", don't be surprised when people don't find much use for your ideology.


If we know nothing else about them but their names (which is a quite big assumption), a person named Deion or Jayla or Latisha is more likely to come from a disadvantaged socioeconomic background than a person named Chad or Emma or Sophia, have a worse job, have a somewhat higher chance of getting arrested in future, and due to these and other factors they causally, objectively have a somewhat higher risk of defaulting on that loan. It's not just bias, prejudice or superstition; it's not a mistake in data or process; it's an indication of some true, factual observation about reality. The correlation isn't huge, but it is there, and it's not a coincidence.

If it's your money that you're choosing to lend (or not lend), would you ignore that information or take it into account when making a decision if you should offer that loan and how much the rate should be adjusted to cover the risk of default? What if it's not your money to do with it as you please, but money entrusted to you to invest for maximum results?

Sure, if you knew what this particular person is actually like, then you'd base the decision on better data than just their name, and it could well be that this Jayla would get a much better deal than this Sophia; but even if you had a perfectly fair system that correctly estimates the individual risk of default without prejudice causing worthy individuals being dragged down by association with some group, then there would still be a correlation with the name simply because in the current reality the proportion of "risky Jaylas" is larger than the proportion of "risky Sophias".


One problem is the world has layered effects of bias.

Imagine a simple system where people are only evaluated for credit cards and mortgages, and businesses can only offer one of these services. Some people are systematically denied credit cards. Now they have to have more cash for daily needs, which makes them objectively worse as home owners, so the mortgage loaners pick up on this fact and start systematically denying them mortgages.

There's no business opportunity here. For either lender, the prospects in this group are worse, only because the other lenders think so. Breaking the barricade is much harder than just starting a new lender of either type.


Also known as self-fulfilling prophecies. And then runaway feedback.

The strength of the effect varies but is hard to measure. Not impossible, though.


If you don't want unfairness, why are you denying anybody a loan? Maybe I'm bankrupt because of a one-off medical expense and now I'm healthy again, I want a loan. It surely won't happen again but the lender says no. That's unfair, isn't it? So the problem already exists and nobody cares.


Fairness is a different concept from justice. It might be fair to deny but still unjust. Also different concept from transparency.

The operational principle of the capitalist business is extracting value from a customer or an externality. That is from the outset unfair. It is just, as long as businesses act independently.


But it's a convergent instrumental goal for just about every end goal you could think of for a statistical model. If your model has zero predictive power, and that's fine, then you don't need a model in the first place. You can just make random guesses, or assume everything is equally correlated, or whatever make sense for your goal.


That very much depends on one's interests. In today's modern economy, many are encouraged to tie their interests to lots of events that haven't happened yet. They prioritize predictions.


That still doesn't mean that your training data is right for the predictions you're going to make.

Here's an example detached from any sort of politics. If you are training a language model, the maximum-likelihood model is one where the frequency of every word is the frequency that you observed that word, and the frequency of unseen words is 0.

On your training data, you'll measure the maximum-likelihood model as making more accurate predictions than any other model. But it's also useless, because when you use it on new data, it assigns an impossible probability, 0, to any sentence containing a word it's never seen.

This is a bias. You can't correct it by getting more data. You correct it by being an intelligent human who knows the maximum-likelihood model is wrong, and applying a correction on top of it (smoothing).

Now account for the bias (observed by Arvind Narayanan) that all your training data is in the past, and all the predictions you want to make are in the future, but circumstances change between the past and the future.
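The maximum-likelihood-vs-smoothing point above can be made concrete in a few lines. A minimal sketch of a unigram model with add-one (Laplace) smoothing, using a tiny made-up corpus and an assumed vocabulary size:

```python
from collections import Counter

training = "the cat sat on the mat".split()
counts = Counter(training)  # Counter returns 0 for unseen words
V = 10_000  # assumed size of the full vocabulary, including unseen words

def p_mle(word):
    """Maximum-likelihood estimate: observed frequency, 0 for unseen words."""
    return counts[word] / len(training)

def p_laplace(word):
    """Add-one smoothing: the human-designed correction on top of MLE."""
    return (counts[word] + 1) / (len(training) + V)

assert p_mle("dog") == 0.0      # MLE: an unseen word is "impossible"
assert p_laplace("dog") > 0.0   # smoothed: small but nonzero
```

On the training data itself, `p_mle` scores better than `p_laplace`; it only falls apart on new data, which is exactly the sense in which "best on the data you measured" can still be the wrong model.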


When is a model ever evaluated based on its ability to predict the training data? That is classic overfitting. This is why the training set is kept separate from test/validation sets.

https://en.wikipedia.org/wiki/Training,_test,_and_validation...


Everyone capable of understanding this thread knows what overfitting is, and the difference between training and test data. That is the topic we are talking about.

Because test data is unseen at training time, it cannot possibly affect the way your model is trained. The test data is not how you prevent overfitting, it's how you measure overfitting. To prevent overfitting, you need to design features such as smoothing into your model.


I think your point was a straw man. You can tell if your model is good by measuring its predictive power on data outside your training set.

You seemed to be arguing that only way to know the model is wrong is to insert a human who just knows a priori that the model is wrong and what "correction" to apply. But I think this is a false analysis. You can measure a model's accuracy by seeing how well it predicts data outside the training set. If the model is highly predictive, you have a good model. It doesn't take a human's subjective analysis or principled intervention to determine whether the model is accurate with respect to the available data.

This article concerns cases where the model is admittedly accurate (with respect to the available data), but is subjectively considered objectionable. That is a totally different issue.


I think you missed the point here. The author is showing exactly why this is a big problem in NLP, beyond the test/train split issue.

Even with test/train splits you still need to smooth your model's predictions because it will always get unseen words. That's just the nature of word-based language models.


I think that's false. A model may never reduce any probability to zero. Novel data is a normal feature of modelling.


To be specific, what is the probability that the next word is "sdh7777asljashd_"?

Any word level model trained on any dataset will say it is zero. But that is incorrect; humans know that there is always a probability that some arbitrary sequence of characters will suddenly appear in text (think a GUID in technical documentation). To account for this, the model's author has to make sure to smooth the output so it doesn't produce these zero probabilities.

No amount of data can ever fix this problem (although more data can put better bounds on the smoothing factor).


And I'm saying, it depends on how the model is built. If it is a simple dictionary then yes. But there are better ways to build a model. It's a naïve view that all models are like some strawman.


I don't understand what you are arguing.

>>> the maximum-likelihood model is one where the frequency of every word is the frequency that you observed that word, and the frequency of unseen words is 0

>> I think that's false. A model may never reduce any probability to zero. Novel data is a normal feature of modelling.

> But there are better ways to build a model.

Yes of course there are. That's exactly what the OP said ("You correct it by being an intelligent human who knows the maximum-likelihood model is wrong, and applying a correction on top of it") but you seemed to be arguing against it.

(And to be clear, there are also alternatives to the maximum likelihood model)


> A model may never reduce any probability to zero.

That's the entire point.

You can't teach a model not to reduce probabilities to zero just by showing it more data. There will always be more unseen data. And you, a human, know that there will always be more data. So you apply smoothing to the model so that your model isn't nonsense.

I'm giving an example of a bias that you can't solve with data, that you must solve with design.


Well it is, it's just by no means the only one.


You think that it was OK for Southern businesses to ban blacks during the Jim Crow era? After all, they were likely to have less money and thus be less profitable customers.

Your statement implies that you would support this.

These businesses were using mental models that had a high predictive value. Do you feel that it was worth it to "sacrifice this accuracy" with civil rights laws?


> After all, they were likely to have less money and thus be less profitable customers.

By that logic, there'd be no business catering to poor people at all.


Huh? You've somehow interpreted that to mean that no business would serve a poor person ever? That doesn't make any sense, and isn't what I said.

Let's try again: If there is a business that is at capacity, is it appropriate for it to use racial criteria that may have some ability to predict how much a person will spend, if this increases profitability? Is not doing this "erasing inconvenient facts"?


Correct. Many businesses ignore the poor, explicitly or implicitly. Luxury apartment price rates are often high, not because the services they offer are valuable, but because high prices exclude the classes their clientele want to avoid. Alternatively, try walking into a Nordstrom's with obviously dirty clothes and see how the staff treat you.

The businesses that do cater to the poor exploit them. See fast food restaurants, payday lenders, casinos, etc. None of these businesses exist to improve people's lives; they exploit a weakness at a particular moment.


I don't know why you're being downvoted. You're absolutely right.

If (unsophisticated) AIs were in charge, the civil rights movement might never have happened. Black Americans were the victims of institutionalized racism. That racism was self reinforcing: Society did not support the education of blacks, which caused them to be uneducated, which reinforced the stereotype that they were stupid, therefore making it obvious that educating them is pointless, which caused society to not support the education of blacks.

How would an AI controlled system reason to understand that the rules of society itself are creating the conditions of inequality? Will it merely strengthen positive feedback loops, reinforcing stereotypes and preventing reform?


"Jim Crow" referred to laws, not just spontaneous capitalist behavior, didn't it?



I'm not sure how a law can be imposed other than by authoritarians. Laws are formal descriptions of how authorities operate, no?

Also, you seem to be implying capitalists aren't authoritarians. But that certainly isn't self-evident in the context of what I replied to.

Context isn't something you impose in the middle of a conversation; it's something you should evaluate before jumping in.


Both, to a degree. Massive resistance maintained segregation through private business.


You seem to be identifying capitalists with their customers and community in general. That is not generally accepted as self-evident, and in particular seems to clash with what I was replying to. The context is important.


This sounds a lot like how people get TSA redress numbers when their info falsely flags them as suspicious (https://www.dhs.gov/redress-control-numbers). Or how Barack Obama suffered innuendo around the middle name "Hussein." Mistaken identity or unfortunate associations are as old as humanity. AI systems (and non-AI systems) need ways to deal with these problems, but we also have a lot of experience about how to do that.


The problem is when we hand AI full rein over a system that should have humans with the final say, because doing it with humans is "too hard" or "doesn't scale." At least with humans you have auditable decision making and recourse via the legal system, even if it can be hard to fight. AI are essentially black boxes that unconsciously learn the biases of the datasets they are given.

When AI starts determining rather consequential things like how long to send someone to prison for, that's a problem. https://www.nytimes.com/2017/05/01/us/politics/sent-to-priso...


> AI are essentially black boxes that unconsciously learn the biases of the datasets they are given.

Please don't confuse "AI" with the current state-of-the-art of the latest deep learning model. Many of us researchers are working on interpretability and understanding of causality.

Calling it "AI" makes it appear final, as if in 50 years all machine learning-based decision systems will behave exactly as they do now, without nuance.


Calling currently existing models AI is no different than calling a Model T a car. It being more primitive than what may exist in the future is of little consequence to the public dealing with the consequences of AI today.

AI can only learn from the data it has, so it will always carry some sort of bias, because it is impossible to collect the nuance of every last bit of context into a digestible data format. At best it's an advisor, but it should never be a decision-maker.


Very different, in fact. The Model T had most of the high-level components of a modern car in place: rubber wheels, protection from the elements, a combustion engine, a transmission.

In comparison, current automated decision making systems are at the stage where we do not even know what the components are. And calling them intelligent is insulting.


Existing "AI" is more like a couple of wheels and a handful of engine parts. I think it's fair to call it something more accurate so as not to mislead the public about where today's consequences come from.


This is an example of sacrificing the scientific method to make results more politically correct. We've come full circle.


>This is an example of sacrificing the scientific method to make results more politically correct. We've come full circle.

Wrongthink is nothing new, comrade ajwnwnkwos, and is as prevalent as ever.

The output of science has been suppressed throughout history where it didn't fit the narrative of the day. At one period, that was by the Church. Another, it was at the hands of the government. Today, mainly by a self-censoring, everything-must-be-pleasant-and-entertaining society that is very highly prone to fits of outrage.

Inconvenient facts are, after all, inconvenient.


I never actually understood the difference between self-censoring and simply deciding not to be offensive, or why deciding not to be offensive is such an unscientific thing. If you can objectively observe that saying something doesn't accomplish your goals (assuming your goal is to make friends and get along with others) and seems to offend or harm people around you, what are the consequences of simply not saying those things?

In the specific context of science I'm unsure what self-censoring would even look like. Not only would such conclusions be a relatively small subset (it's hard to discuss the self-censorship of a graph algorithm), but there is plenty of attention (and therefore funding) on both sides to uncover some truth about any political subject.


Because sometimes to reach the truth, you must risk being offensive.


>I never actually understood the difference between self-censoring and just deciding to not be offensive, and why deciding not to be offensive is such an unscientific thing. ... what are the consequences to simply not saying things?

Saying that global warming is real and caused by humans offends a large chunk of people.

Saying that vaccines are effective and don't cause autism offends a non-trivial amount of people.

Saying that (meat|sugar|fat|veganism|political ideology) is unethical or unhealthy offends pretty much everyone depending on which you pick.

So should we seek to not offend by not saying these things? Or should we strive to uncover the truth behind and change the things we don't like, instead of being offended by the acknowledgement of their existence.


If you understood the scientific method, you would understand that you always need to account for bias. Just because something comes from data doesn't mean it's objective.


Hold up - you also have to tread very carefully there.

I read your blog post above about "racist" NLP models and I agree it makes sense to debias inputs for certain purposes.

However, many people in this thread aren't talking about adjusting for statistical bias. They're really talking about adjusting for socio-political over/under-representation, which is getting onto pretty shaky ground.

I'd hazard a guess that a model trained on Google News giving "more negative" sentiment for black names is a function of those names appearing more frequently in crime articles, matching the overrepresentation in crime rates. Likewise for Arab/Muslim names, which are presumably disproportionately present in articles on terrorism.

Now, that's obviously statistical sampling bias if you're (somehow) modelling "names I should call my kid".

If you're looking at "names more associated with crime", however, then we shouldn't be eliminating any racial imbalances simply because they make us feel queasy. That's intellectually dishonest, and does everybody a disservice by trying to sweep uncomfortable realities under the rug.

Same goes for gender bias in word embeddings from classical fiction - that is (I assume) a very accurate depiction of social gender imbalance from that period of history. That may or may not be relevant to the question you're asking - but that doesn't make any bias inherently "wrong".

I think the underlying message should be "models only reflect the data they're given and may need to be adjusted for bias depending on the question being asked", not "race and gender is always irrelevant and should always be normalized".


In Which We Define 'Biased' As Meaning 'Not Conforming To Our Ideas Of How The World Should Be'. Because it's unthinkable that movies with male main characters could actually just be better than movies with female main characters.

(I'm not saying they are, mind you - but when we analyze sentiment in a large dataset and reach a result like that, the first question to ask should be "is that result accurate?" not "how do we tune out this problematic result?")


> I'm not saying they are, mind you - but when we analyze sentiment in a large dataset and reach a result like that, the first question to ask should be "is that result accurate?" not "how do we tune out this problematic result?"

Especially when you consider the weird feedback loops you can get by blindly fudging the numbers.

For example, suppose the objectively best movies have a male lead, e.g. because Hollywood is biased and spends more resources producing those movies than ones with a female lead.

So the bias is there, but it's at the production stage, not at the recommendation stage. What happens if we try to fix it at the recommendation stage?

Hollywood sees that movies with a female lead are now being rated higher. But five stars is five stars. The occasional actually-great movie with a female lead can't get itself kicked up to six, so the incentive isn't to make more of those. It's to make more mediocre movies with a female lead to take advantage of the artificial boost.

Which widens the gulf in the unmodified ratings even more and produces a vicious cycle where the number of garbage movies with a female lead explodes to take advantage of the fact that no matter how bad they get, the average will be adjusted upward to compensate.


The article does not advocate tuning out the results; it is advocating awareness of the results and careful consideration of how to proceed. See the detailing of options at the end of the second case study:

>> "As with Tia, Tamera has several choices she can make. She could simply accept these biases as is and do nothing, though at least now she won't be caught off-guard if users complain. She could make changes in the user interface, for example by having it present two gendered responses instead of just one, though she might not want to do that if the input message has a gendered pronoun (e.g., "Will she be there today?"). She could try retraining the embedding model using a bias mitigation technique (e.g., as in Bolukbasi et al.) and examining how this affects downstream performance, or she might mitigate bias in the classifier directly when training her classifier (e.g., as in Dixon et al. [1], Beutel et al. [10], or Zhang et al. [11]). No matter what she decides to do, it's important that Tamera has done this type of analysis so that she's aware of what her product does and can make informed decisions."


We can quibble over some of the examples they give and their "ideas of how the world should be" but there are many cases where fairness may be explicitly legislated or defined that makes this research useful.


> the first question to ask should be "is that result accurate?" not "how do we tune out this problematic result?"

That depends on the product that you want to make. If you’re a company that wants to sail on the status quo and just make money, by all means be amoral.

On the other hand, technology now allows us to analyze and steer social structure at scale. We could reproduce past inequalities in the name of efficiency, or we could think a bit longer about our axis of optimization.

“But cultural relativism!” — I agree. It’s a tough problem, but we don’t all of a sudden need a definitive generalized social plan. Biases can be tackled one algorithm at a time.


Maybe a useful way to think about this is Positive/Normative distinction.

Science is "amoral" in your sense, it's trying to describe how things really are, and make predictions. So is the stock market: you're rewarded for correct predictions, whether or not others regard those outcomes as desirable.

Of course we are also moral beings, interested in changing things for the better. And we have varying and contradictory ideas about what better means.

As soon as you say "steer social structure at scale" the question is: who gets to steer? If this were obvious, then we wouldn't need democracy, we could just all work together not to "reproduce past inequalities"... but we can't.


The article does briefly mention main characters as a hypothetical example, but the test case it provides data for is different:

> In this case, she takes the 100 shortest reviews from her test set and appends the words "reviewed by _______", where the blank is filled in with a name.

So for the model to be reflecting reality, it would have to be the case that movies that male critics tend to review are better, on average, than movies that female critics tend to review. Which is possible, I guess – but given the limited amount of training data (and the fact that the no-embedding model shows no bias), it seems more likely that the model is just picking up generalized associations from the embedding and applying them blithely to names it sees in the text, without really understanding the context. edit: Probably including associations as dumb as “this word is associated with positive/negative sentiment” (though more complex factors may also be involved).
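The substitution probe the article describes is simple enough to sketch. In this sketch, `score` stands in for whatever sentiment model is under test, and the toy model is invented purely to show the mechanics of the probe:

```python
def probe_name_bias(score, reviews, names_a, names_b):
    """Append 'reviewed by <name>' to each review and compare the mean
    sentiment between two name groups. A nonzero gap means the model's
    output shifts based on the reviewer's name alone."""
    def mean_score(names):
        scores = [score(f"{r} reviewed by {n}") for r in reviews for n in names]
        return sum(scores) / len(scores)
    return mean_score(names_a) - mean_score(names_b)

# Toy stand-in model that (wrongly) keys on the reviewer's name:
toy = lambda text: 1.0 if "Greg" in text else 0.5
gap = probe_name_bias(toy, ["Great film.", "Dull plot."], ["Greg"], ["Jamal"])
assert gap > 0  # the name alone moved the sentiment score
```

Since the review text is held constant across names, any gap the probe finds can only come from the name, which is what makes it a clean test of the embedding's associations rather than of the reviews themselves.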


> Because it's unthinkable that movies with male main characters could actually just be better than movies with female main characters

Or, more realistically, it's not unthinkable that movies with a male lead could be better reviewed than movies with female main characters, especially when you open the field up to non-professional reviewers. We've seen how ape-shit people went at movies like the recent Ghostbusters remake or even the new Star Wars series just because they put women in the kinds of roles that had traditionally been played by men.

It's an interesting question of when machine learning models that reflect actual human biases are correct and when we should try to adjust them for that human bias. In the case of movie reviews, you could make the case that you'd want the models to reflect the biases of the reviewers. Just because a reviewer is being sexist doesn't mean it's wrong for the machine learning model to correctly classify the reviewer's sentiments.


Sure, it's always good to dig deeper and find out where a correlation comes from.

But the question isn't just whether it's true, but also whether it will stay true. As they say in finance, past performance does not guarantee future results. We are not talking about the laws of physics here. There are historical correlations that it's unsafe to rely on, particularly when people are trying to change them.

Part of this is overfitting, but there's also the problem of drift. A model may stop working well because the world has changed.

So some correlations can be rejected just because we don't have confidence in them. It's a dependency that looks too fragile.


[flagged]


I'm not unsympathetic to your concerns, but the obvious retort is that if you already know which results are accurate, why bother doing analysis?


People who make word embedding models are not doing it to find out whether men are better than women. That is not the analysis they are doing.

You always have to look at your results with a critical eye to recognize when something has gone wrong with your model. You can't recognize a flaw in the model if you believe everything the model says.


I'd be pretty amazed if it was even possible to construct a model that would answer that question.

In the article's example, you're constructing a model to determine how a movie might be reviewed. The data indicates that movie reviewers may be reviewing movies with male leads more positively than movies with female leads. There are a lot of possible reasons for this other than "men are better than women", so why would you simply reject the result out of hand?


So you can build a model and get a computer to do the analysis...


Data, of course, is a source of objective truth. It is a source, not the entire source, but it is a source.


In NLP, data is very much NOT a source of objective truth.

Language has too much implied context and meaning which is communicated outside the text under analysis. Embeddings attempt to capture this, but they aren't as good as humans.

"XXXXX is a dog of a movie" - does this mean the movie was bad or was the author playing with words when the movie is about dogs?


>Because it's unthinkable that movies with male main characters could actually just be better than movies with female main characters.

It's not unthinkable, but it's also very likely that bias does indeed have an impact. What do comments like these add to the discussion?


They add to the discussion because they remind us that the phenomenon is likely to be mix of reality and bias and that we are very uncertain about proportion.


The politicization of bias in this realm is unproductive. We know where it leads. There will be 'committees' of people who are not trained in statistical thinking, who haven't even taken a statistics class, who are not qualified to make any statements, and whose interests are not in line with making the best product, acting as nuisances and aggressors toward people who do have the expertise and who strive to make the best product.

The best course of action is to treat this bias like ordinary bias in a non-politicized, inanimate domain. How would data scientists act if they encountered similar bias while running machine learning models of the motion of waves?


The article was written by trained experts in statistical thinking. It's worth a read.


I'm reminded of https://whitecollar.thenewinquiry.com

It turns out that white-collar crime is predominantly committed by white men. A system trained to detect white-collar crime using, say, the Enron emails might flag a white guy's emails over those of someone whose name doesn't sound like an Enron employee's, or who shared pictures of their cat.

I mean, I suppose you can argue that hey, maybe that bias is usually correct. Maybe it usually is the white guy. But personally, I'd probably control for things conflated with gender or race and then look for indicators that differentiate between criminals and innocent people. You will probably have a lower AUC, but better differentiation between criminals and innocent people is what matters.
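The "control for the conflated variable" idea can be shown with a deliberately contrived stratification sketch (all data here is invented): compare the indicator within group strata instead of pooled, so a group-level offset can't masquerade as a crime signal.

```python
import statistics

def indicator_gaps(records):
    """records: list of (group, indicator_value, is_criminal).
    Returns (pooled gap, mean within-group gap) between criminals
    and innocents on the indicator."""
    crim = [v for _, v, c in records if c]
    innoc = [v for _, v, c in records if not c]
    pooled = statistics.mean(crim) - statistics.mean(innoc)

    per_group = []
    for g in {g for g, _, _ in records}:
        c = [v for gg, v, cc in records if gg == g and cc]
        i = [v for gg, v, cc in records if gg == g and not cc]
        if c and i:
            per_group.append(statistics.mean(c) - statistics.mean(i))
    return pooled, statistics.mean(per_group)

# Group A is over-represented among criminals AND always has indicator=1,
# but within each group the indicator tells you nothing at all.
data = [("A", 1, True)] * 3 + [("A", 1, False)] \
     + [("B", 0, True)] + [("B", 0, False)] * 3
pooled, stratified = indicator_gaps(data)
assert pooled > 0.4 and abs(stratified) < 1e-9
```

The pooled comparison makes the indicator look predictive even though, within either group, it separates criminals from innocents not at all; that's the group membership leaking through, and the stratified version strips it out.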


I guess the question posed in the article is an interesting one and is an interesting discussion re how to deal with bias and building counter-biases into algorithms without the editorial decision asking being clear.

Movie reviews are editorial content. Measuring that content is a difficult problem in this type of context... Are the best reviewers people who dislike movies with female leads? Are you going into a back catalog of movie reviews from an age where societal expectations were different? Are popular genres skewing the result?

You could have a curation issue as well — if the female lead movies are dominated by "Hallmark Channel" fare, algorithm C has a point!


Here is a video that explains this blog well. https://youtu.be/59bMh59JQDo

The unnerving part for me is this "eliminate negative associations/bias". Okay, how about we learn the truth, and then address that outside in real life and keep the computer doing what it's good at ... showing us the data.


> Okay, how about we learn the truth, and then address that outside in real life

Aside from the aspect you overlooked -- that our online lives are real life, and that AI models are (over)used to make decisions, not merely to show us the data -- your idea is pretty much what the article recommends:

> As with Tia, Tamera has several choices she can make. She could simply accept these biases as is and do nothing, though at least now she won't be caught off-guard if users complain. She could make changes in the user interface, for example by having it present two gendered responses instead of just one, though she might not want to do that if the input message has a gendered pronoun (e.g., "Will she be there today?").

> She could try retraining the embedding model using a bias mitigation technique (e.g., as in Bolukbasi et al.) and examining how this affects downstream performance, or she might mitigate bias in the classifier directly when training her classifier (e.g., as in Dixon et al. [1], Beutel et al. [10], or Zhang et al. [11]).

> No matter what she decides to do, it's important that Tamera has done this type of analysis so that she's aware of what her product does and can make informed decisions.


OK, factual data fed to some AI-based algorithms produces results that are not so politically correct as some people would like it to be. Is this a problem with the AI algorithms, or with the people?


I think the core issue here is that people want two things. On one hand we want our models to accurately describe reality, not an idea of what reality should be. On the other hand, we don't want ML to freeze society and culture in their current state, but to help decide on and drive social change. The tension between these goals arises when models trained today are used to make decisions tomorrow.

One way to resolve the tension might be to add a time dimension and historical training data. The models might then be able to return, in addition to any prediction variable p, its time derivative dp/dt. For example, a model might then return results such as: "movies with female main character: lower sentiment, trending up; movies with male main character: higher sentiment, trending down".
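As an illustration of that idea (not something from the article), one could fit a per-group linear trend over time and report both the current level and its slope. The data below is made up purely to show the shape of such an output:

```python
import numpy as np

def level_and_trend(years, sentiments):
    """Fit a linear trend and return (current level p, slope dp/dt per year)."""
    slope, intercept = np.polyfit(years, sentiments, 1)
    latest = slope * years[-1] + intercept
    return latest, slope

# Toy data: female-lead sentiment lower but rising, male-lead higher but falling.
yrs = np.array([2010, 2012, 2014, 2016, 2018])
p_f, d_f = level_and_trend(yrs, np.array([0.40, 0.44, 0.48, 0.52, 0.56]))
p_m, d_m = level_and_trend(yrs, np.array([0.70, 0.69, 0.68, 0.67, 0.66]))
assert p_f < p_m and d_f > 0 and d_m < 0   # "lower, trending up" vs "higher, trending down"
```

A downstream system could then weigh the slope as well as the level, instead of freezing today's snapshot into tomorrow's decisions.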


It's hard to be believe these days, but once upon a time language models were written by hand. Imagine you hand-wrote such a model and put it to use in (for the sake of example) a psychological evaluation application. Now imagine that after years of use you discover that your model systematically marks african americans as less psychologically fit than white americans. Who would be to blame? Naturally, you would. Your actions led to a biased model being used to unjustly and arbitrarily harm innocent people, and your leadership would be right to call into question every decision your application ever made.

Now imagine the same scenario except your app was trained on data instead of hand-written. Make no mistake, the answer to the question of who's to blame is exactly the same: the developer. The response should be exactly the same: a complete loss of confidence in the model.

I'm appalled that this needs to be said, but reading this comments section I'm afraid it does: machine learning models are inference and pattern recognition devices, not scientific tools. They don't magically reveal hidden patterns in the world; they repeat the patterns that the developers train them on. If you trained a machine learning model to perform psychological evaluations [1], sentence convicts [2], or recognize faces [3], and your model is biased in a way that is unnecessary and unjust, your model is bad and you should be held accountable for its failures.

[1] https://affect.media.mit.edu/projects.php?id=4079

[2] https://www.nytimes.com/2017/05/01/us/politics/sent-to-priso...

[3] https://www.wnycstudios.org/story/deep-problem-deep-learning...


What do you do when your classification is correlated to race through no fault of yours? For example, I might successfully predict credit score from backyard size, and end up with a correlation to race that doesn't have anything to do with me.

I don't think the blame always lies with the algorithm, especially when it doesn't have access to race as an input (this is a reasonable expectation). I can score students with a simple algorithm based on what they write on their math tests, and even that's going to correlate with race. In that case the blame pretty clearly lies in the process that produced the reality being measured, not in the measurement technique itself.

Let's say that black people default on their loans more often than white people. Is it better to criticize the math that discovered that fact, or the root cause that made it true to begin with?
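A quick synthetic sketch of that proxy effect (all numbers invented): a score computed only from backyard size still shows a group gap, because group membership and backyard size are themselves correlated in the data.

```python
import random

random.seed(0)
rows = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    # The groups differ on the proxy variable (backyard size)...
    yard = random.gauss(120 if group == "A" else 80, 15)
    # ...and the score is computed from the proxy alone; `group` is never an input.
    score = 500 + 2 * yard + random.gauss(0, 10)
    rows.append((group, score))

def group_mean(g):
    scores = [s for grp, s in rows if grp == g]
    return sum(scores) / len(scores)

# A group gap appears in the scores despite the scoring being group-blind.
assert group_mean("A") > group_mean("B")
```

Dropping the sensitive attribute from the inputs doesn't remove the correlation from the outputs, which is the commenter's point: the gap originates in the world that generated the data, not in the arithmetic.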


> Is it better to criticize the math that discovered that fact, or the root cause that made it true to begin with?

The only place where math was involved was the guts of the training stage of the model. A crucial stage to be sure, but one that's bookended in the front by problem definition, data selection, model selection, and a design for an evaluation process, and behind by the execution of that evaluation and the decision to launch the model. Literally every other stage of this process is driven by human decisions.

I'll say it again, because apparently the point didn't sink in the first time around: Machine learning models are inference and pattern recognition devices, not scientific tools. The fact that it's inhuman and unthinking mathematics that produced a biased model offers no ethical or legal cover to the people who decide to put that model into use.

The decision to apply the model is key here. Contrast two applications: one that takes in a patient's information and diagnosis to compute a dosage for a drug, and one that takes in a potential tenant's request and produces a rent/no rent decision. In the first, there are cases in which the bias is admissible if not necessary, e.g. [1]. However, the legitimacy of that model's application comes not from the supposed objectivity of the model's findings but from volumes of peer (i.e. human) reviewed research. In the second, there is no legal way in which this model can be applied, and I struggle to imagine a moral one. I can't imagine any court of law taking "the machine made me do it" as a defense in an FHA case.

[1] https://www.nytimes.com/2005/06/24/health/fda-approves-a-hea...


> pattern recognition devices, not scientific tools

Isn't science all about pattern recognition? If the pattern exists, in the real world, then a good theory is one which encodes this.

What you're asking for in a "moral" way of applying the model is that our actions should abide by your moral preferences. Perhaps even universal preferences. But it seems useful to me to keep logically separate these ideas about how we ought to do things. They don't flow naturally from observations of how things are.

The example of adjusting drug & dosage based on race is a good one. The science backing this is exactly the same kind of statistical correlation as backs the rental decision. The training input is what race some test patients ticked on a form, and their tick mark sure as hell isn't the causal factor... that's some gene which is correlated, maybe, or some diet difference, or what's on TV, who knows. Nevertheless the correlation is there, as far as we can tell: the peer-reviewed science process isn't infallible. The reason we're OK with using this information is, I guess, that it aims to improve things for the patient. (Not every single patient, only statistically.) We make a moral judgement that this is more important than a landlord's wish to avoid bad tenants (again statistically).


It sounds like a way to set up a positive feedback loop with terrible consequences to me. I don’t know enough about ML to be sure, but I know GIGO. If you feed bias into a model and then let the model guide decision making, that seems obviously flawed. You bias the model not to give loans or credit to black people, reinforcing their lack of credit, which leads to the model confirming its bias. Maybe you can claim it’s a socioeconomic bias and not a racist one, and in some countries that would be true, but not in America. How long ago was it that black people had to fight past mobs to integrate schools? How long since redlining? If socioeconomic status is tied to a history of oppression based on race, then ignoring half of that equation isn’t honest. Correlation is not causation, but does “AI” know that?


If someone is poor but you want them to be given a chance anyways, you don't go to the banker and say, "please close your eyes to what by all means seems to be true." You say, "I know he's more likely to default, but society wants him lifted out of poverty and you're going to give him that loan."


If those banks had to turn to the government to keep them afloat, and those banks played a large role in creating the problem in the first place, maybe you do? Maybe not, but what you definitely don’t do is set up black boxes to take the heat for centuries of prejudice and mistreatment.


> don’t [...] set up black boxes to take the heat

I think I see now how we agree. This whole "algorithms" thing is a smokescreen over what is really just the AA debate: should institutions optimize purely for their own goals (in which case they would do nothing to help with old wounds, whenever ignoring them would be cheaper), or should they be expected to mix in social responsibility into their cost functions? Whether it's AI or a cunning but amoral banker, it's the same question.


Right. And note that those banks who got bailed out weren't free-floating rational agents before the crash either. Because it was deemed politically desirable to increase minority home-ownership, they were given pressure / incentives to make AA loans.


"Pray, Mr. Babbage, if you put into the machine biased models, will unbiased answers come out?"


I'm reminded of a much more obvious example.

https://www.theverge.com/2016/3/24/11297050/tay-microsoft-ch...

I see a lot of comments about how it's somehow sinister to want your model to be better than the lowest common denominator, and that is pretty damn ridiculous.


I’ll tell you more: _human judgment_ contains bias. You can’t possibly think logically about every single judgment, particularly when information is incomplete, which it is in the overwhelming majority of cases. It is not a given, to me, that on average AI does any worse than the human population outside the “woke” segment. Or even _within_ that segment considered in isolation.


If you put a human in charge of a judgement call, mostly there are mechanisms to monitor them, a requirement that they give reasons for their decisions, and mechanisms to appeal those judgements.

We aren't used to having to do that with decisions made by computers.

It's not that AI makes better or worse decisions, it's the way we treat those decisions.


"a requirement that they give reasons for their decisions"

But this is mostly a sham. People lie about their reasons for doing things, even to themselves. Especially when they know what reasons are publicly acceptable.

Maybe the major difference is that it's much easier to run experiments on computers. I mean, people try this on humans, but it's very hard to do realistically: most of those studies where you submit 1000 CVs with varying details are garbage, because they can only access an unrealistic part of the process (I mean, who ever got a job without networking? etc). Whereas you can feed almost any computer system completely realistic fake data.


Same as with AI, it is not economical to monitor human decisions, so mostly it doesn’t actually happen. See eg Youtube demonetization as a particularly illustrative example.


I think the problem is that a lot of people think something like, "It is an algorithm not a person so it must be totally scientific and unbiased" when that isn't the case. The non-techy public needs to be aware that these things are made by humans who may be recreating their bias in the technology.


Key takeaway: It is important to be aware of bias in ML models. Some biases may correctly model the reality of the world, and some may show the bias in the underlying dataset or in what the model has focused on in the data. The goal is not to "unbias" everything, as people seem to be focusing on, but rather to determine if the bias is appropriate given the context.


Damn, I just wrote almost exactly these words before scrolling down to your comment. Could have saved myself a few minutes by upvoting yours instead.


I think that's a pretty solid summary.


>> Normally, we'd simply choose Model C. But what if we found that while Model C performs the best overall, it's also most likely to assign a more positive sentiment to the sentence "The main character is a man" than to the sentence "The main character is a woman"? Would we reconsider?

It seems like you have discovered that movie reviewers tend to review movies with a male main character more highly than movies with a female main character; what you need to consider is that while this may tell you something about movie reviewers, it doesn't necessarily tell you anything about the quality of the movie.
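The "man"/"woman" sentence pair in the quoted passage is essentially a bias probe: score two sentences that differ only in an identity word and look at the gap. A minimal sketch of the mechanics, using a toy bag-of-words scorer as a stand-in for a real trained model (the weights here are invented for illustration, including a "man" weight a model might have absorbed from skewed reviews):

```python
# Hypothetical linear weights a sentiment model might have absorbed from
# biased training data; the names and values are invented for illustration.
LEARNED_WEIGHTS = {"great": 1.0, "terrible": -1.0, "man": 0.3}

def score(sentence):
    """Toy stand-in for a trained sentiment model."""
    words = sentence.lower().replace(".", "").split()
    return sum(LEARNED_WEIGHTS.get(w, 0.0) for w in words)

def probe(template, fill_a, fill_b):
    """Score gap between two fillings of the same template; a nonzero
    gap means the model treats the two identity terms differently."""
    return score(template.format(fill_a)) - score(template.format(fill_b))

print(probe("The main character is a {}.", "man", "woman"))  # 0.3
```

On a real model you would run many such templates and identity-word pairs and look at the distribution of gaps, not a single number.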


Despite the controversy surrounding "debiasing" classifier outputs, I think further research in this area still has merit. It would help us understand and build transformations over latent / high-level representation space, a general use case applicable to all fields interacting with machine learning.
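One concrete instance of such a transformation is the projection-based "hard debiasing" idea from Bolukbasi et al.: estimate a bias direction in embedding space and subtract each vector's component along it. A sketch with numpy and made-up 3-d vectors (real embeddings would be hundreds of dimensions, and the direction would be estimated from many word pairs, not one):

```python
import numpy as np

def remove_direction(vectors, direction):
    """Project out the (normalized) bias direction from each row vector."""
    b = direction / np.linalg.norm(direction)
    return vectors - np.outer(vectors @ b, b)

# Made-up 3-d "embeddings"; a real bias direction might be he - she.
he = np.array([1.0, 0.2, 0.0])
she = np.array([1.0, -0.2, 0.0])
doctor = np.array([0.5, 0.1, 0.8])  # hypothetical occupation vector

clean = remove_direction(doctor[None, :], he - she)[0]
print(clean @ (he - she))  # ~0: the component along the bias direction is gone
```

Whether zeroing out that component is the *right* transformation is exactly the open research question the comment points at; the projection is just the simplest member of the family.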


There's a lot of complaints here about "erasing reality" and other hyperbolic talk, but these models are trying to make predictions within a particular user's context.

It's just inappropriate to apply some global biases for a particular user, and avoiding that can result in a better user experience.


I think you make an interesting point: if we use context to train a model to reflect local biases instead of global biases, will that be more just and/or lead to better user experience?

It seems related to the question of whether Google results should be tailored to you. If I Google "did Russia interfere in the election", should Google tailor the results so I always see articles that reinforce my world view?

If we go that route, I think we take the path of Stephen Colbert's concept of "Truthiness", where we judge something as true because it "feels" true. Users will definitely be happier if everything they see reinforces their existing world view. So companies will be incentivized to accommodate this desire. But does this actually lead to a more just society?


> It's just inappropriate to apply some global biases for a particular user, and avoiding that can result in a better user experience.

The problem is that most deep learning programs aren't intended for (and really aren't built for) one user out there in user space. Deep learning programs are written for the large institutions which have the large troves of data needed to train large nets and want to make important decisions using that data. But because of this, those decisions won't be made in isolation; they will affect a large number of people, people who aren't the users but rather the used. And if such systems have biases relative to the whole, it is a problem. And the problem may not be for the institution's immediate goals, but for the people who depend on the institution.



I wish people who write articles about "bias" would explain what they mean by "bias". I've seen hundreds of these articles. I'm still waiting for a usable definition.


If the models reflect bias then surely that's a good thing.

It's funny. I like programming because a computer can't lie and doesn't make mistakes. I guess some people don't like that.


The goal of ML isn't to accurately model training data. The goal is to make something useful. Correctly doing the former can hinder the latter.


The only thing that can be biased is the training data.

The model is simply a statistical breakdown of the training data.
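That point can be made concrete: fit anything to counts from a skewed corpus and the skew comes out the other side, because the model only summarizes the counts. A toy sketch with an invented three-sentence "corpus":

```python
from collections import Counter
from itertools import combinations

# A tiny invented "corpus" with a skewed association; any model fit to
# it will reproduce that skew, since it only summarizes the counts.
corpus = [
    "bob committed fraud",
    "bob was arrested",
    "alice won an award",
]

# Symmetric within-sentence co-occurrence counts, the raw statistic
# behind count-based embedding methods.
cooc = Counter()
for sentence in corpus:
    for a, b in combinations(sentence.split(), 2):
        cooc[(a, b)] += 1
        cooc[(b, a)] += 1

print(cooc[("bob", "fraud")], cooc[("alice", "fraud")])  # 1 0
```

Any embedding or classifier trained on these counts will associate "bob" with "fraud" more strongly than "alice", with no way to know whether the corpus reflects the world or just a sampling of it.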


Google uses their AI systems for profiling where the sex of the people being profiled is a crucial piece of information and acts as a predictor for interests which is then further used in targeting ads. As long as it makes Google money, it's not a problem. This bias actually accurately reflects reality and the advertisers know that, otherwise they wouldn't be paying Google for targeted ads.



