
This is a genuine, difficult problem. It's so easy to join up on your political team of choice and scream about it, and all this makes any real attempt to solve it so much harder to talk about in public or collaborate on. In fact, there's practically guaranteed to be some greyed out text in the discussion here.

So some of these associations simply reflect the way-the-world-was or the way-the-world-is - like associating "woman" with "housewife". That's a whole debate in itself.

But some of these can be accidental. Suppose a runaway-success novel/TV/film franchise has "Bob" as the evil bad guy. Reams of fanfiction are written with "Bob" doing horrible things. People endlessly talk about how bad "Bob" is on Twitter. Even the New York Times writes about Bob's latest depredations when they play off current events.

Your name is Bob. Suddenly all the AIs in the world associate your name with evil, death, killing, lying, stealing, fraud, and incest. AIs silently, slightly ding your essays, loan applications, Uber driver applications, and everything you write online. And no one believes it's really happening. Or the powers that be think it's just a little accidental damage, because the AI is still, overall, doing a great job of sentiment analysis and fraud detection.




With current technology, the problem of Bob (or Adolph or Mohamed) becoming associated with evil is unsolvable, because current deep learning systems are fundamentally unable to distinguish causation from correlation.

The only solution I can see is forcing any company that imposes life-defining actions on people (credit bureaus, banks, parole boards, personnel offices, etc.) to use only rules based on objective criteria, and to prohibit systems based on a "lasagna" of ad-hoc data like present-day AI systems. Indeed, if one looks at these in the light of day, one would have to describe such systems as fundamentally evil, the definition of "playing games with people's lives" (just look at the racist parole-granting software, etc.).


> The only solution I can see is forcing any company that imposes life-defining actions on people (credit bureaus, banks, parole boards, personnel offices, etc) to use only rules based on objective criteria and to prohibit systems based on a "lasagna" of ad-hoc data like present day AI systems.

That is probably the exact opposite of what you really want. If the problem is that someone's name is Bob and the AI thinks Bobs are evil, what you want is for there to be 100,000 other factors for Bob to show the system that it isn't so. As many factors as possible, so that the one it gets wrong will have a very low weight.

Even the objective criteria will have biases. There is a significant racial disparity in prior criminal convictions, income, credit history and nearly every other "objective" factor. The more factors you bring in, the more opportunities someone in a given demographic has to prove they still deserve a chance.
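
One way to sanity-check the many-factors intuition (a toy sketch with synthetic data, not a claim about any real system): fit the same kind of model with a handful of noisy factors versus many, include one spurious "name" feature that only proxies the real signal, and measure how much predictions move when just that feature is flipped.

    # Toy sketch (synthetic data, not a real lending model): how much does one
    # spurious "name is Bob" feature move predictions when it is one of 3
    # factors versus one of 300?
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000

    def name_sensitivity(n_features):
        latent = rng.normal(size=n)                                 # true creditworthiness
        y = (latent + rng.normal(size=n) > 0).astype(int)           # repaid the loan?
        bob = (latent + 2 * rng.normal(size=n) < 0).astype(int)     # weak, spurious proxy
        X = latent[:, None] + rng.normal(size=(n, n_features))      # noisy real factors
        X_full = np.column_stack([X, bob])
        model = LogisticRegression(max_iter=2000).fit(X_full, y)
        flipped = X_full.copy()
        flipped[:, -1] = 1 - flipped[:, -1]                         # counterfactual: change only the name
        return np.abs(model.predict_proba(X_full)[:, 1]
                      - model.predict_proba(flipped)[:, 1]).mean()

    print("avg prediction shift from the name,   3 factors:", name_sensitivity(3))
    print("avg prediction shift from the name, 300 factors:", name_sensitivity(300))

Under these assumptions the shift shrinks as more real factors are added; whether real-world data behaves the same way is exactly what is in dispute in this thread.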


> That is probably the exact opposite of what you really want. If the problem is that someone's name is Bob and the AI thinks Bobs are evil, what you want is for there to be 100,000 other factors for Bob to show the system that it isn't so, as many factors as possible, so that the one it gets wrong will have a very low weight.

You don't understand. My point is that institutions making such decisions should not be able to make decisions according to these 100,000 unexplained factors. If you're a lender, you can look at employment history, records of payment and other objective, related criteria. You can't look at, say, eye color, however useful you might think it is. Institutions should not be able to make these decisions arbitrarily, at the level at which they affect lives. There should be legal provisions for auditing these things (as there are, on occasion, provisions for auditing affirmative action, environmental protection behaviors, insurance decisions, etc.).


> My point is that institutions making such decision should not be able to make decisions according to these 100,000 unexplained factors.

But how does that help anything? The objective factors have the same potential for bias as the seemingly irrelevant ones. All you get by excluding factors is to increase bias by not considering information that could mitigate the bias in the factors you are considering.

Suppose that 80% of black men would be rejected for a loan based on your preferred set of objective factors. Of that 80%, many would actually repay the loan, but none of the objective factors can distinguish them from those who wouldn't, and when averaged together the outcome is to refuse the loan. If you used some seemingly arbitrary factors that happen to correlate for unknown reasons, you could profitably make loans to 60% of them instead of 20%.

How is it helping anyone to not do that?


Let's assume we push it from 20% to 23% (I don't think we can expect the huge gains you posted) by using various weird features, such as whether you like to purchase an odd or even number of bananas.

People's lives will depend on the decisions of these machines, so people will start trying to game them. They will make sure to always purchase an odd number of bananas, they will wear hats but only on Thursdays, etc.

Now two things happen. As more people game the system the rules need to be updated. Suddenly it's all about buying bananas divisible by three and wearing hats on weekends. The people who tried to follow the previous advice got screwed over, and what's more they have nothing to show for it. Instead of making people do useful things like paying bills on time and saving up some money, it made them follow some weird algorithmic fashion. Because of this expenditure of energy on meaningless things we may see that now only 18% of people would manage to pay back loans on time.


> People's lives will depend on the decisions of these machines, so people will start trying to game them. They will make sure to always purchase an odd number of bananas, they will wear hats but only on Thursdays, etc.

But that's just another reason to use 100,000 factors instead of twelve. If someone's income is a huge factor in everything, people will spend more than the optimal amount of time working (instead of tending to their family or doing community service etc.), or choose to be high paid sleazy ambulance chasers instead of low paid teachers, because the algorithm makes the returns disproportionate to the effort.

If buying an odd number of bananas is a factor but the effort of learning that it's a factor and then following it is larger than the expected benefit from changing one factor out of thousands, they won't do any of it.


Goodhart’s law, as phrased by Marilyn Strathern: “When a measure becomes a target, it ceases to be a good measure.”


> But how does that help anything? The objective factors have the same potential for bias as the seemingly irrelevant ones.

Given a choice between observable, identifiable and modifiable rules or hidden, poorly understood rules integral to a whole model, I'll take the former every time.

Bias will continue to exist for now. What we need to do is make sure we always build processes to appeal and review our systems, preferably in a public way.


The whole problem is that single cases are not statistics, yet people would love to apply global, generalized statistics to single cases.

What you touched upon is the accuracy/bias trade-off. To have evidence in a particular case, you need to attempt to debias the particular system and see how it affects accuracy. Sometimes it may even vastly improve it.

What is more important is that these systems are not benchmarked properly, i.e. compared against very simple metrics and systems: a random decision, simple recidivism prediction (a grudge system), plain mathematical metrics with constants.
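
For what it's worth, that kind of baseline check is cheap to run. A minimal sketch, with synthetic placeholder data and stand-in models, just to show the shape of the comparison:

    # Minimal sketch: benchmark a model against trivial baselines before trusting it.
    # The data, the "fancy" model and the single-factor rule are all placeholders.
    import numpy as np
    from sklearn.dummy import DummyClassifier
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 20))
    y = (X[:, 0] + rng.normal(size=2000) > 0).astype(int)   # stand-in outcome

    models = {
        "random decision": DummyClassifier(strategy="uniform", random_state=0),
        "majority class":  DummyClassifier(strategy="most_frequent"),
        "fancy model":     RandomForestClassifier(random_state=0),
    }
    for name, model in models.items():
        acc = cross_val_score(model, X, y, cv=5).mean()
        print(f"{name:16s} accuracy: {acc:.3f}")

    # A plain single-factor rule (the "grudge system": threshold one known factor).
    print(f"{'grudge system':16s} accuracy: {((X[:, 0] > 0).astype(int) == y).mean():.3f}")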

To add, they're opaque, and it is impossible to easily extract the factors that went into any given single decision. This means they act fully irrationally: intelligently, but irrationally.


The idea of society moving forward is removing systemic bias and other entrenched forms of discrimination and prejudice that have occurred in the past and continue to occur. And this requires human thinking, intelligence and sustained effort.

Doubling down based on historical and backward looking data does not seem like the way forward and can only perpetuate entrenched bias.

All the inferences and correlations will reflect that. This is not intelligence and can only take you backwards.


> That is probably the exact opposite of what you really want.

No, it really isn't. In an ideal world, the reasons behind a decision are transparent, auditable, understandable, and appealable. Machine learning is none of those.


The interesting question is not whether those are good things -- they clearly are. The interesting question is their value relative to each other and, more specifically, relative to correctness. To what extent are we willing to tolerate less correct decisions if necessary in order to achieve those and other desirable properties?

It seems like the answer to that question is situationally dependent.


In an ideal world, society would compensate you when they would like you to assume a risk below its market price, rather than forcing you to pretend not to notice the risk.


But there isn't actually a risk associated with an Adolf. It's an inefficiency born out of an incorrect belief held by the whole of (or at least most of) society. The correct solution is not to price in the incorrect assumption, but to not make the incorrect assumption.

In other words, by offering Adolfs below-market rates, you're exploiting a market inefficiency at no additional risk. This is an ideal world as you describe it. It's capitalism at its finest!


This isn't just a technology problem. Most humans are also really, really bad at distinguishing causation from correlation. The very notion that these are even different things is utterly alien to most people.

An AI that was even slightly good at it would be far ahead of what we get when humans run things.


"Causation" is defined as temporal correlation: changes in A cause changes in B. There's no reason to thin computers would be "bad" at this. People are only bad at this because they have small, lazy brains

Even the greatest physicists and mathematicians can't tell the difference between correlation and causation. When people say "causation is not correlation", what they truly mean is "spurious, context-sensitive correlation is not strong, universal correlation".


No. Temporal correlation is "changes in A precede changes in B". This is not causation. You more or less correctly describe causation when you say "changes in A cause changes in B", but a better way of putting it might be "all else equal, changes in A cause changes in B". The difficulty here is controlling for all possible C.

As a simple example:

If A causes a change in both B and C, but the change in B happens more quickly, "temporal correlation" would imply that B causes C, when that's not the case.

This is especially obvious with cyclical phenomena. The tide going out does not cause the sun to rise, for example, even though I'm sure I could draw a temporal correlation between them. Nor does my car being in my garage cause me to go to work.
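
A toy simulation of that confound, with made-up variables just to illustrate: A drives both B and C, B responds faster, and B ends up preceding and correlating with C without causing it.

    # Toy confound: A causes both B and C; B reacts after 1 step, C after 3.
    # B then precedes and correlates with C, yet has no causal effect on it.
    import numpy as np

    rng = np.random.default_rng(0)
    T = 10000
    A = rng.normal(size=T)
    B = np.roll(A, 1) + 0.1 * rng.normal(size=T)   # fast response to A
    C = np.roll(A, 3) + 0.1 * rng.normal(size=T)   # slow response to A

    # B two steps ago lines up with C now, so the lagged correlation is high...
    print("corr(B[t-2], C[t]):", round(np.corrcoef(B[:-2], C[2:])[0, 1], 3))

    # ...but replacing B with pure noise would leave C untouched, because C is
    # driven only by A. Temporal precedence plus correlation is not causation.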


There is no free lunch. Any forced adjustments we make to reduce false positives will increase the false negatives. It's up to us to decide what trade-off we want, and the algorithms will simply adapt to that.

If we believe certain attributes to be irrelevant to a situation, we need to build systems that are completely blind to those attributes. This is how double-blind trials and scientific peer review work.


> Any forced adjustments we make to reduce false positives will increase the false negatives.

That's just not true. You can have systematic errors caused by bad training data which can be fixed without an increase in false negatives (otherwise new ML systems would never improve over old ones!)

In the physical world a good analogy is crash testing of cars. For decades (until 2011!!), crash test dummies were all based on average sized American males. That led to hugely increased risks for female passengers:

the female dummy in the front passenger seat registered a 20 to 40 percent risk of being killed or seriously injured, according to the test data. The average for that class of vehicle is 15 percent. [1]

Fixing that problem didn't cause any increase in accident risk for men.

It's the same in machine learning.

> There is no free lunch

This isn't what the no free lunch theorem [2] says. That theorem says that all optimization algorithms perform equally well when averaged over all possible problems.

[1] https://www.washingtonpost.com/local/trafficandcommuting/fem...

[2] https://en.wikipedia.org/wiki/No_free_lunch_theorem


The main problem is that most other systems have their own biases. A typical suggestion I hear a lot goes along the lines of "if in doubt, let a human make the decision". If you think about ML in, say, a school context, it's likely that certain biases are baked in, but I'd argue that these biases tend to be more extreme when humans make decisions, because they take into account fewer factors (this may not be true, but it's my working hypothesis). I think a decent example is "first names". There are certain names that result in worse overall grades during a normal school career, even if the teachers are aware of this bias (in Germany the poster child for this is "Kevin").

It's a tough problem. I think being aware that biases exist in ML is a good first step.


> I think a decent example is "first names". There are certain names that result in worse overall grades during a normal school career, even if the teachers are aware of this bias (in Germany the poster child for this is "Kevin").

There is a possible causal link with names which goes beyond "children are being treated worse because of their name".


If it is unsolvable for current deep learning systems, then they aren't worth the time and effort that has been spent on them.

Your solution will not work, ever. All such companies will adjust their processes to maximise their own benefits, irrespective of any legal consequences. For most, profit (in whatever way they define profit) will be more important than any legal requirements placed on them. It is the nature of the people running these companies.


This may not be a good thing, but it does seem to match what you'd expect the human attitude to be. Consider what the typical, initial human reaction to someone named Adolf would be.


With humans, you can change an opinion. Once AI has made up its mind, there isn't anything you can do anymore. The entire system and extended network of interaction points will stick to the decision/stance it has made.

If you are an outlier in the data, you will remain an outlier in the system.

This is terrifying.


> With humans, you can change an opinion.

Do you ever see anybody change their opinion? Usually they won't change simply after hearing an argument. They change only when they have an utterly different life experience. In other words, only when they are conditioned by new data, in no way different from how an AI changes its opinion.


It depends on how 'deep' the person believes in that opinion. People change their opinions all the time after a simple discussion.


The correction would be to make the systems learn continuously, in an online fashion. This is a hard open algorithmic problem, though - how to achieve that in a stable, robust way.
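
The mechanical half of that already exists in standard libraries as incremental learning; a rough sketch with simulated streaming data is below. The hard part mentioned above, keeping such updates stable and robust under drift and feedback, is not solved by this.

    # Rough sketch of online updating with an incremental learner.
    # The "stream" is simulated; stability under drift is the open problem.
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    rng = np.random.default_rng(0)
    model = SGDClassifier(random_state=0)
    classes = np.array([0, 1])

    for batch in range(100):                               # each batch arrives later in time
        X = rng.normal(size=(64, 10))
        drift = 0.01 * batch                               # the world slowly changes
        y = (X[:, 0] + drift + rng.normal(size=64) > 0).astype(int)
        model.partial_fit(X, y, classes=classes)           # update without full retraining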


Forty years ago, I worked for a very pleasant elderly Jew whose name was Adolf. There was never any association with bad things about him. I think you would more likely find the names Hitler or Stalin as being a little bit more problematic.

I know my reaction is more to the surname than the first name.


Also note that people do not tend to react negatively to the name Joseph. I'd ascribe that to it being a much more common name than Adolf, though.


Adolf used to be a quite common name (1%-2% of the population), and it was popular in many countries: e.g. Sweden, including the Gustav Adolf line of kings, the Netherlands, Adolphe in France, Adolfo in Spain and Portugal, etc.

Joseph has a solid anchor as a biblical name, though, and Stalin wasn't condemned nearly as much as Hitler was.


For a real world version of your "Bob" example, many people have negative associations with the names "Adolf" and "Hitler".

Do you think we should train AI to systematically ignore those sentiments?


For a real world version of your "Bob" example, many people have negative associations with the names "Jamal" and "Mohammed".

Do you think we should train AI to systematically ignore those sentiments?


Machine learning models are pretty good at reasoning with (or can be made to reason with) conditional probability, which I think can solve this problem.

For example, people from race X might be more likely to commit crimes in the absence of any other information (marginal probability). However, a person from race X is no more likely to commit crimes than a person from race Y, conditioned on something else like where they went to school, what they do for a living, etc.
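
A contrived numerical illustration of that marginal-versus-conditional point, with made-up groups and numbers:

    # Made-up numbers: group X looks worse than group Y marginally, but once you
    # condition on another variable ("employed"), the default rates are identical.
    import pandas as pd

    df = pd.DataFrame({
        "group":     ["X"] * 100 + ["Y"] * 100,
        "employed":  [0] * 70 + [1] * 30 + [0] * 30 + [1] * 70,
        "defaulted": [1] * 14 + [0] * 56 + [1] * 3 + [0] * 27    # group X
                   + [1] * 6  + [0] * 24 + [1] * 7 + [0] * 63,   # group Y
    })

    print(df.groupby("group")["defaulted"].mean())                # marginal: 0.17 vs 0.13
    print(df.groupby(["group", "employed"])["defaulted"].mean())  # conditional: 0.2 / 0.1 for both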


When talking about something like crime rates by race, your statement incorrectly assumes society enforces laws fairly/equally. For example, based on the numbers alone, black people are much more likely to commit drug-related crimes than white people. However, studies show that black people are stopped a lot more often and have unequal sentencing outcomes.

It's important to remember that AI doesn't have context, and just because it's using "data" to make decisions doesn't mean the decisions are unbiased - the underlying data may be biased.


One of the things I'm rather worried about here is AI's ability to learn on the most unlikely (to human cognition) features. Take the image recognizer that finds "sheep" by seeing pastures where it thinks sheep might graze http://aiweirdness.com/post/172894792687/when-algorithms-sur... , or take any of the adversarial perturbing examples like https://blog.openai.com/robust-adversarial-inputs/ .

You're not allowed to discriminate on race in housing. But is an AI that determines your creditworthiness for mortgages allowed to discriminate on what you eat, where you go to church, what your favorite music is, etc.? Maybe it doesn't have that data, but it will have one level higher - what store credit cards you have and how much you use them and where you opened them.

If you train an AI on a segregated city with a history of actively discriminatory citizens, where the few people of race X who moved into a not-race-X neighborhood got harassed out and sold before they paid off their mortgages, how easily will the AI conclude that people born in certain neighborhoods are more likely to pay off their mortgage if they avoid certain other neighborhoods?

Is that illegal? (My guess is there's no way to prove to a court, to the court's usual standards, that the AI happened to learn the city's racial tensions.) Should it be illegal, if outright racial discrimination is illegal?
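
One common, if partial, diagnostic for that kind of proxy discrimination is to check how well the "allowed" inputs can reconstruct the protected attribute; if they can, the model has everything it needs to rediscover the city's segregation. A sketch with synthetic stand-in data:

    # Sketch of a proxy-leakage check: if the permitted features can predict the
    # protected attribute, a model trained on them can discriminate by proxy.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 3000
    protected = rng.integers(0, 2, size=n)                   # never given to the model
    neighborhood = protected + 0.3 * rng.normal(size=n)      # segregated city: strong proxy
    spending = rng.normal(size=n)                            # unrelated feature
    X_allowed = np.column_stack([neighborhood, spending])

    auc = cross_val_score(LogisticRegression(), X_allowed, protected,
                          cv=5, scoring="roc_auc").mean()
    print("protected attribute recoverable from allowed features, AUC:", round(auc, 2))
    # ~0.5 would mean little leakage; here it will be close to 1.0.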


Homicide rates are unlikely to have this kind of bias though. And they're heavily against blacks. It's unlikely whites being murdered go unreported, or that anyone gets off with a warning. But inner city homicide has a low solving rate. If the reporting rate is also lower, then it could be even worse than the 4-7x overrepresentation of blacks in homicide.

This seems to be the case in London, too, so it's a bigger issue than just the US.

But for things like drug possession, yes, blacks probably get stopped much more and let off with warnings much less.


Untangling cause from effect in this case is extremely hard.

It is like with plagues. If you put many people with the plague in one area, expect that others will catch it. If the rule used to segregate them is silly enough, expect a correlation of plague with certain characteristics of the people put together and of the locations, or their sizes. Maybe it is "cities" or "presence of slums", not skin colour, combined with overrepresentation of people with a certain skin colour in them.

It takes some real genius analysis and experimentation to untangle such complex effects from causes.

Call again when you have an AI that can deal with this. Essentially a researcher AI.


"Blacks". Are homicide rates 4x-7x higher among "blacks" in Ben Carson's gated residential community? In Boca Raton and Beverly Hills? Or might their be other factors you are ignoring?


I'm only responding to the claim that the stats are wrong because of police oversampling blacks. With homicide, no one can make that case.

If lack of money causes homicides, we should expect to see all poor rural areas have high homicide rates, too. Maybe that's true.

Another factor seems to be sex. Males commit so much more violence, so perhaps that's the only thing to focus on.


Let's get it straight: machine learning models cannot reason about anything. They follow the rules and data supplied. They cannot recognise in any way whether the rules by which they operate or the data they are supplied with are or are not reasonable.

Much as there is a lot of hype about AI and machine learning technologies, all such systems will be, for the foreseeable future, very simplistic models of what we think we know of intelligence.

As humans, we have blindly forgotten that we know very little about the world around us, including ourselves. We think that we have a handle on reality, but we are just plain ignorant. All the models that we have for deep learning and AI or GAI are barely scratching the surface of what "intelligence" is and means.

We may get some useful tools as a result of the research being undertaken today and what we have undertaken over the last several decades. But we have millennia to go before we even scratch the surface of what we understand of the universe around us, let alone understand what intelligence means.


> Let's get it straight: machine learning models cannot reason about anything. They follow the rules and data supplied.

That is reasoning. I’ll agree with your later point that we don’t really know what intelligence is yet, but that’s because reasoning is clearly only one type of intelligence. Current systems are terrible at language, for example (a year ago I would have said “and spatial awareness”, but this is progressing fast and I am no longer sure).

> They cannot recognise in any way whether the rules by which they operate or the data they are supplied with are or are not reasonable.

Quite a lot of humans fit that description. For example, consider how much angry disagreement the following questions get: “is climate change real?”, “Brexit, yes or no?”, and “does god exist?” Also consider how many people (angrily!) refuse to believe these questions result in any angry arguments.


I am so glad you posted this, because for the last couple of years I've been missing old-school Godwin's Law - someone making an inapt comparison to Hitler or Nazis when Nazis weren't actually involved. Of late, discussions about how society should handle edge cases in freedom and robustness have involved actual Nazis with actual swastika armbands talking about the actual "Jewish question," and there were no inapt comparisons, just (confusingly) genuine discussions of how much place Nazis have in polite society.

But your comparison is inapt: yes, we should train AI to systematically ignore negative sentiments it associates with people named "Adolf" or "Hitler" who are not in fact the Adolf Hitler who died in Berlin in 1945. Humans have difficulty in doing so, of course, but this is a bug in human cognition which is essentially the vulnerability that bigotry takes hold of: a justifiable negative impression of one person as an individual is imputed to other people who seem superficially similar. We see one person of a minority breach a social norm or even a law, and we think that others of that minority must be prone to doing similar, simply because their minority status is salient. We fail to realize subconsciously that membership in the same minority is not actually meaningfully correlated with this behavior, and when we see someone from the majority do the exact same thing, their majority status is less salient, and we don't impute the negative impression to the majority.

A good AI should be able to distinguish dictator Adolf Hitler from, say, saxophone inventor Adolph Sax or chemist William Patrick Hitler (the nephew of Adolf Hitler), and not cast aspersions on the latter two - even though human biases forced William Patrick to change his last name to Stuart-Houston. It should even be able to understand that Indian politician Adolf Lu Hitler Marak is a separate person who merely had parents with questionable taste, and the man is not on account of his name more likely to become a genocidal dictator than any of his political rivals.

And since our justifiable negative association with the Nazi leader is, fundamentally, that he weaponized this vulnerability in human cognition, it is one way of acting on our dislike for this Hitler to make sure that the AIs we build are not susceptible to the same vulnerability.


>And since our justifiable negative association with the Nazi leader is, fundamentally, that he weaponized this vulnerability in human cognition, it is one way of acting on our dislike for this Hitler to make sure that the AIs we build are not susceptible to the same vulnerability.

Isn't that statement in itself an invocation of Godwin's Law?


You could argue that, I suppose. I think comparisons to Hitler are inapt when someone attempts to tar something irrelevant to Hitler's particular crimes (e.g., his vegetarianism) or particularly when someone attempts to compare their ideological opponent to Hitler on specious grounds. But I think there is room for apt comparison to Hitler: it would be a mistake for humanity not to learn from the experience of letting him rise to power.

Godwin himself wrote a little bit about it: https://web.archive.org/web/20170209163428/https://www.washi...

(And I am totally open to criticism that this particular comparison is inapt.)


Sorry, but this is actually a real world situation that happened - I have a Jewish ancestor who was named Adolf before WWII but after Hitler's rise unsurprisingly chose to go by his middle name instead.

You can ascribe it to "Godwin's law" as much as you like, I just find it a more realistic example than some hypothetically disadvantaged "Bob".


Ah! That's fair - but I think the spirit of Godwin's law is that comparisons to Hitler in particular, when the subject matter is more general than Hitler, yield unnecessarily hyperbolic responses (that - among other things - are bad at actually addressing the causes that led to the rise of Hitler and therefore weaken the goal of making sure that such a thing never again happens).

We should make sure that an AI, who is probably making decisions on things like legal documents / public records and not just the middle name someone goes by, will not consider it a negative that someone is named "Adolf" if they aren't Adolf Hitler specifically. And we should for the same reason make sure that an AI will not consider it a negative that someone is named "Bob" if it has a newly-acquired specific negative impression of some other person named Bob. There isn't a difference in the cases.

"Hitler" shouldn't be a special case because the rise of Hitler wasn't as much of a one-time event as we'd like to believe. When the next genocidal dictator with a somewhat rare first name gains control of a country, there will be people from the victim population who share that first name, and they should not suffer the same indignity at the hands of an AI, either. And on the flip side, when this genocidal dictator rises to power, the AI shouldn't be taught that Hitler was the only evil man who ever lived or will live; if it has the data to conclude that some actual individual (not a name) is as bad as Hitler, it should be able to conclude that.


The "Bob" example is already a real-world example, though...


Yes. Why should someone be punished just because they happen to have a name that is similar to someone horrible? Humans will have trouble avoiding being influenced by that coincidence, but we should demand better from an AI.

If you are referring to the actual well-known Adolf Hitler, then there should be no need to use the name for the purposes of making decisions. You shouldn't need to use a person's name as a proxy for whether or not they are a genocidal dictator, just make the decision based on whether or not they actually are a genocidal dictator.


On the bright side, the software is probably easier to fix. Once we figure out how to build unbiased models, they can be deployed widely. Try that with humans.


"To err is human. To really foul things up requires a computer."

Or DevOps Borat: Devops is screwing things up at web-scale.


Theoretically, the software may be easier to fix. In practice, it only happens if there is a profit to be made from fixing it. If there is a profit to be made, then there are no guarantees that biases will be removed. It then depends on the political bent of those pursuing the fixes.


That should start with good bias tests that can be run over all public-facing AI. It's a problem for sociologists to identify all these cases.


The fix is also very much not simple. Imagine that development of AI systems proceeds as it has thus far... when an AI system turns out biased, racist, sexist, etc - we shut it down. As AIs get more capable, any which do harm, act deceptively, etc - we shut them down.

Eventually, we will have AIs which are better than us in every respect. They will embody our idea of perfection. This is stupendously dangerous. Humanity has a long history of adapting to technology 'taking away' things that everyone thought were 'fundamentally human', like the ability to do work or make things or lay train tracks or whatnot.... it's not hopeful. We deal very poorly with this.

Consider the story of John Henry. He's a folk hero. For killing himself. Because he killed himself in defiance of the machine outperforming him. So this stuff is all-caps Important. What is the likely response from humanity when there is a perfect, not machine, but mind? My bet? Humanity will identify with its worst aspects. It will enshrine hatred, irrationality, mean-spirited spite, violence, self-destruction, and all of the things we built AIs to never stray into. Those will become "what it means to be human."

AI may be a philosophical crisis unlike anything humanity has ever faced. Not in kind, but simply in degree.


And I thought getting a customer support person who says "the system won't let me do that" was bad enough!


The solution for identity is easy: just don't rely on unverified name-association data. I don't get why it's so hard to avoid that. Just don't do it. There are plenty of ways to train models without using that kind of data, both in the training phase and in active use.
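
In practice that mostly means scrubbing person names before text ever reaches the model. A rough sketch using spaCy's named-entity recognizer (assuming spaCy and its small English model are installed; NER will miss some names, so this is mitigation, not a guarantee):

    # Rough sketch: replace person names with a neutral token before text is
    # used for training or scoring, so the model never sees "Bob" at all.
    import spacy

    nlp = spacy.load("en_core_web_sm")

    def mask_person_names(text: str, token: str = "[NAME]") -> str:
        doc = nlp(text)
        out = text
        for ent in reversed(doc.ents):          # replace from the end so offsets stay valid
            if ent.label_ == "PERSON":
                out = out[:ent.start_char] + token + out[ent.end_char:]
        return out

    print(mask_person_names("Bob applied for a loan and Bob pays his bills on time."))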


Should have gone with Eve or Mallory :)

They're from the same set of names as Bob (and Alice), but are actually affected by this issue, although to a lesser extent than what you describe.

It's not certain how likely the kinds of texts that use them are to be used as training data, though.


Why is someone using fiction and references to fiction to train an AI that will make decisions about real people’s livelihood?


This is a pretty disingenuous question. If you've spent any time in software development, you're well aware that tools of all sorts get used to do tasks for which they were never intended, and crazy unintended consequences result. There's no reason to think that it will be any different for pre-trained machine learning models for things like sentiment analysis.

Here's one example of how it could happen. Someone publishes a high-performing model to gauge the tone of a writing sample. This model includes the anti-Bob bias described above, such that the appearance of the word Bob is tantamount to including a curse word, and greatly biases the model toward negative sentiment. Because of its high overall performance, companies of all sorts incorporate this model into their workflows for things like grant applications, loan applications, online support forums, and so on. For example, they might use it to detect when someone is using their help form to send an angry rant rather than a legitimate request for support. Now, any time someone named Bob wants support, or a loan, or a grant, or whatever, there's an increased chance that their request will be flagged as an angry or abusive rant and denied simply because it contains their name, Bob.
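
At minimum, that failure mode is testable before deployment: score the same text with different names substituted and see whether the output moves. A sketch; `sentiment_score` is a stand-in for whatever model is being evaluated:

    # Sketch of a counterfactual name-swap test for a sentiment/tone model.
    # `sentiment_score` is a placeholder callable: text -> float.
    TEMPLATE = "Hi, my name is {name}. My last invoice was billed twice, please fix it."
    NAMES = ["Bob", "Alice", "Jamal", "Mohammed", "Adolf", "Emily"]

    def name_bias_report(sentiment_score, template=TEMPLATE, names=NAMES):
        scores = {name: sentiment_score(template.format(name=name)) for name in names}
        for name, score in sorted(scores.items(), key=lambda kv: kv[1]):
            print(f"{name:10s} {score:+.3f}")
        spread = max(scores.values()) - min(scores.values())
        print(f"spread across names: {spread:.3f}  (ideally ~0)")
        return scores

    # Usage: name_bias_report(my_model.score)   # any callable that scores text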

In fact, we can remove the layer of indirection and note that some people have names that are spelled the same as a curse word, and already have similar issues with today's software, making it literally impossible for them to enter their real name into many forms. This example doesn't involve machine learning, since profanity filters are typically implemented as a pre-defined blacklist. But there's no reason to think that a sentiment analysis model would fail to pick up on the negative associations of profanity.


It is not a disingenuous question, it actually comes down to a question of ethics. Why would anyone with any sense of accuracy allow fiction to influence a decision being made about a person’s life?

The more talk there is about how people are building models, the more I want people to take these black boxes to court to force developers to explain how decisions are made.

Refusing to give someone a loan because someone trained a model with 50 Shades of Grey is unethical and insane.


People don’t usually look that deeply into the consequences of their choices. Is it ethical to invest in land mines? I’d say not, but when I last put money into a high interest savings account, I didn’t know if my bank had done that for me and it didn’t occur to me to ask.


Of course it's unethical and insane. But the point is that people are going to look at the performance numbers for the model, and if they look good, they're not going to ask how it was trained. So the fact that fiction writing was used to train the model will never come up in the discussion about whether to use it.


The other question is what the market will be for people who do not fare well in most models. Will we get a freshly made new underclass?

Just take a look at people called "Null", then multiply the problem a thousand times across various systems with no central appeal.


> In fact, we can remove the layer of indirection and note that some people have names that are spelled the same as a curse word, and already have similar issues with today's software, making it literally impossible for them to enter their real name into many forms.

Ironically, this is also an example of a system behavior that was driven by users' desires not to see certain things. Seen in a certain light, it bears a resemblance to the idea of filtering out certain associations because a user considers them distasteful.


First names like Gay and Dong, or surnames like Fuk, for instance.


Or the classic example - Dick. Both first and surname and old slang for a private investigator as well as a profane name for a reproductive organ.


Because AIs rely on real humans' judgments, and real humans are biased by fictional stories.


Humans better not be biased by fictional stories when judging a loan application or uber driver application.


Ahahahahahaha. Ok.


Do you have no sympathy for an ordinary fairly harmless person named "Donald" or "Bashar"?



