It literally blocks "Asian restaurants near me." The fact that someone at Apple made a simple word-based filter and didn't consider the possibility that words have different meanings in different contexts is absolutely astounding to me.
You’re assuming a lot about the implementation. In fact I strongly doubt that somebody at Apple literally put “asian” as a blanket filter. It’s more likely an opaque machine learning feature gone awry.
Take a list of the top 1000 search terms and run it by human reviewers. Take a list of the top 1000 search terms that are flagged and run it by human reviewers. Take a list of 1000 sensitive terms (pay a few interns to come up with them) and run it by a human reviewer. Then review the top false positives and random false positives.
Apple probably spent more money on the UI components for this feature than all of that would have cost.
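The review loop described above is simple enough to sketch. Everything here is hypothetical: the function names, the toy query log, and the stand-in filter are illustrative, not anything Apple actually ships.

```python
import random

def build_review_queue(search_counts, content_filter, top_n=1000, random_n=100):
    """Sketch of the QA loop described above: take the most popular
    queries, keep the ones the filter blocks, and add a random sample
    of other blocked queries so reviewers see both the high-traffic
    and the long-tail false positives."""
    # Most popular queries, then the subset the filter flags.
    top = sorted(search_counts, key=search_counts.get, reverse=True)[:top_n]
    top_flagged = [q for q in top if content_filter(q)]

    # Random sample of flagged queries outside the top list.
    tail_flagged = [q for q in search_counts
                    if content_filter(q) and q not in top_flagged]
    random_flagged = random.sample(tail_flagged, min(random_n, len(tail_flagged)))

    return top_flagged + random_flagged

# Toy data standing in for a real query log:
counts = {"asian restaurants near me": 9000, "weather": 8000,
          "safe sex": 500, "scunthorpe hotels": 10}
naive_filter = lambda q: any(t in q for t in ("asian", "sex", "cunt"))
queue = build_review_queue(counts, naive_filter, top_n=2)
# "asian restaurants near me" lands in the review queue immediately,
# which is exactly how a human reviewer would catch this before shipping.
```

Even at a generous reviewer rate, labeling a few thousand queries is an intern-week of work, which is the point the parent comment is making about cost.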
Aren’t we supposed to be an industry that prides itself on tackling hard problems? We can’t build high-tech pattern recognition systems and then just throw our hands up when there’s a false positive.
I ran my own test and found one search term including the word Asian that was allowed (the search was "The History of Asian Nations," which is a book title). I think you're right, it must be more complex than a blocklist.
I tried over a hundred other terms (such as "Asian cuisine" and "college with best Asian American studies program") and none of them were allowed.
Machine learning is almost magical in that way: complicated, contextual evaluations of things can be delegated to a black box, and since there's no longer a human people can get angry at for a contested decision, emotions aren't as high and the company can just shrug its shoulders at that silly little computer. It's a cheap way to abdicate moral and social responsibility.
It's one reason that AI ethics is a broader field than people worrying about the existential risk of AGI breakout.
And what's extra depressing is that they're usually not lying. For nontrivial AI, the developers can neither sufficiently answer why it does X nor why it doesn't do Y.
I wouldn't be surprised if they trained a model on text scraped from somewhere like pornhub and this was the result. It's only surprising they didn't accidentally catch more ethnic terms.
I'm implying that Apple likely looked for that term specifically and unblocked it, while they didn't care to check for the term Asian. But I wonder why?
I implied the same, and also kind of gave away the reason: because it's trendy for companies to pretend to care about LGBTQ issues now, and censoring the term in 2021 would be a faux pas. (Yes, Cook is himself gay, but that's not it, because he never was one for causes. Despite being ultra-powerful and rich, he only "went for the cause" for himself and Apple after it became Disney-safe to do so and every company did it, unlike the gay pioneers of the '60s through '90s who did it in really hostile environments.)
If you're telling your friend about your dinner last night, you probably say "Chinese restaurant" but if you're trying to discover new restaurants through search you probably use "Asian restaurant".
> any search tangentially related to “Asian” (i.e., Asian-American, Asian food, Southeast Asian, Asian restaurants near me, etc) will return a message reading “You cannot browse this page at ‘google.com’ because it is restricted,” or, “The URL was blocked by a content filter.”
> Other racial or ethnic search terms such as “Black,” “white,” “Korean,” “Arab” or “French” don’t seem to be impacted by the filter.
How is this not racism against Asians? It's pathetic that a trillion dollar company's racist content filter would even pass through quality checks in the first place.
The author of the article bends over backwards to make an incredibly weak and muddled argument for racism, before using a term that many people, including people who are the demographic focus of the term, find racist in any context. It's an absolutely terrible article.
The filter is racist because the term being blocked is the result of an underlying bias in the data that has racist origins: the fetishization of Asian women. (That block may have a real impact on people's lives. A young child searching for 'Asian history'; or a teen searching for 'Asian takeout' won't be able to find what they're looking for, which could impact Asians directly. I don't know specifically, and a real journalist would have found some real life examples, tried to get data to measure impact, etc.)
I don't agree with the author that this is 'insulting' just because it happened; or that Apple (or anyone) need necessarily hand curate the block list. (Maybe they can automate a filter list for their filter list...) I think the problem is that Apple was informed of this problem over a year ago and has neither discussed nor resolved it. That problem is, at best, only hinted at in the article.
I mean, why do you think my statement has anything to do with that? It is widely accepted among the people who talk about "systemic racism" that, say, the existence of black cops doesn't mean the department isn't racist against black people.
I think a more charitable interpretation of the situation would be "Apple didn't expect the number of Adult Content Filter users to be high enough to dedicate more efforts to testing it before shipping. So they had no idea until now that there were issues like this in the first place."
This situation has nothing to do with Asians or redheads not being "important enough". It has everything to do with the adult content filter feature not being "important enough" (until the news about the bug broke, I assume).
Because there is no quality check. It's a bad implementation not being reviewed by a human - it's not racist, it's just banning stuff related to porn, and inconsistently at that.
No one is saying it's an excuse or acceptable, it's just an explanation for why the state of things is the way it is. People are too lazy or incompetent to fix software. Calling it racism is to cheapen the label into meaninglessness.
as an asian american, uh... I guess my feeling is that this isn't meaningfully discrimination. It's both an understandable mistake, and unacceptable only when not fixed with celerity.
I don't consider it an understandable or excusable mistake.
Not that they failed to specially protect Asians when they made their content filter, that they even made a content filter in the first place.
Because this kind of error is not only predictable and inevitable, it's not even fixable.
Even now, even if someone at Apple directs this to be fixed, the fix can only possibly be stupid, and the same error will still happen to countless other benign searches and content. Even actual humans get this wrong.
Since 2018, when they introduced the filter, it has been known to block searches for "safe sex" or anything with "sex" in the string, while allowing searches for joining terrorist organizations.
The fundamental problem is that words are not deeds nor intents. Words, any words, are just words and have no good or bad value. Even good people doing good things still have to use words that refer to bad things. In order to say "War is Hell", you have to say both war and hell. It's patently insane to think there is some way around such a basic thing.
If you wanted to respond to my argument and say, yeah, but what about "Japanese school girl"? You had to say "Japanese school girl", and the discussion would be impossible or a farce if both our comments kept getting blocked, or even if just the "bad" parts were.
You just explained the core problem, so definitionally at least you must agree that it is understandable. I guess we'll have to agree to disagree that it's acceptable. It seems you're coming from a core premise that apple shouldn't block anything, ever, so yeah, you will fundamentally never find any blocking to be acceptable.
I can sympathize with that premise, as I myself don't personally like blocking. However, I think it's apple's right to do so, and given that, I think the "racial" aspect of the blocking is excusable (if they fix it quickly).
My premise is not that I want it or don't want it. It's that it's fundamentally impossible and so it's irrelevant whether anyone likes it or not, and that this makes it non-sensical to even try.
I don't mean to say it's pointless to try to solve problems. I mean to say if there is a problem, no good solutions to it lie in that direction.
Someone non-ignorant somewhere should have said, and continue to say every time an ignorant parent or school board tries to ask for the non-sensical and ultimately counter-productive, that it is non-sensical and counter-productive.
Of course this wish is itself a fantasy, impossible and thus non-sensical to expect, which is why I don't actually expect it.
I'm not saying it is. If Apple spun a wheel of all english words with the commitment to ban the one it landed on, and it was 'asian', it wouldn't be racist, but as you say it wouldn't be acceptable - both in terms of how they did it and the outcome.
> Shen also told the Independent that he had filed a report to Apple in December 2019 pointing out the issue, but that he never received a response.
So Apple ignored the issue for more than a year. This is the real problem. If it were "black" then do you expect people will react the same and say it's just an algorithm problem or a systematic bias?
I could have sworn I've seen a headline like this repeatedly over the years. Maybe its sporadic recurrence reveals that people don't really use the Adult Content Filter, because this is such an obviously nonsensical thing to have slip through, especially in the Bay Area, where these guys have their HQ.
Like there are so many reasons you'd use the word Asian here.
It's called the Scunthorpe problem. The name comes from an early incident where AOL filters prevented users from Scunthorpe, England from creating accounts (because of "cunt" in "Scunthorpe", if that wasn't obvious).
A few years before the AOL incident, I recall watching a news story about internet filters (in schools, I believe) blocking information on Superbowl 30, which is stylized with Roman numerals as Superbowl XXX. I remember this 25 years later because it was so amusing to me. Still is.
(XXX is sometimes used to denote sexual content.)
Oh, hey! Wikipedia actually provides a citation for the Superbowl XXX incident!!
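For anyone who hasn't run into it, here's a minimal sketch of the kind of naive substring filter that produces the Scunthorpe problem. The blocklist entries are illustrative guesses, not AOL's or Apple's actual lists:

```python
# Hypothetical blocklist; a real filter's list would be much longer.
BLOCKLIST = {"cunt", "sex", "xxx"}

def is_blocked(query: str) -> bool:
    """Block any query containing a blocklisted substring,
    with no awareness of word boundaries or context."""
    q = query.lower()
    return any(term in q for term in BLOCKLIST)

# Legitimate queries caught as collateral damage:
print(is_blocked("Scunthorpe"))       # True: "cunt" hides inside the town name
print(is_blocked("Superbowl XXX"))    # True: Roman numerals read as porn shorthand
print(is_blocked("safe sex"))         # True: the whole point of the search is safety
print(is_blocked("Essex tourism"))    # True: "sex" inside "Essex"
print(is_blocked("history of Asia"))  # False: no blocklisted substring
```

Even adding word-boundary checks only fixes the substring cases; "safe sex" is still blocked, which is why context-free blocklists can never fully solve this.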
sort of related - I recently had an incident with a school app where children can upload and showcase work, leave messages for parents, etc.
a message was signed by a seven-year-old as "love from {child's name}.xxx" - three kisses with no whitespace after the full stop
interestingly, certain operating systems decided to treat this as a hyperlink with typical styling. remarkably, since .xxx is a valid TLD and {child's name} is a popular name, the link resolved to an active domain which forwards the user to a homonymous adult performer's Brazzers profile
100 years ago when I went to create my first AOL account, a typical way to construct the username was first initial last name, "bwhite" in my case.
It was not allowed, and not because it was taken: I then tried a lot of names and triggered the different denial messages for different types of denials, like already taken, invalid characters or length, swear words, etc. They each had their own verbiage, and they were all different from the message for "bwhite". Also, I think there was some pretty easy way to test for the existence of a username in the app, like just trying to message them, so I knew the name was not in use.
That must have been '95 or so, and nothing has much changed.
No, it is the Scunthorpe problem. There's no rule that the "banned" term has to be a substring; the filter just blocks it because in a different context it would violate the content policy.
>The Scunthorpe problem is the unintentional blocking of websites, e-mails, forum posts or search results by a spam filter or search engine because their text contains a string (or substring) of letters that appear to have an obscene or otherwise unacceptable meaning.
They go on to give examples such as a profanity filter blocking the word "bone" at a paleontology conference and a user banned for saying "great tits" when referring to the bird.
> The Scunthorpe problem is the unintentional blocking of websites, e-mails, forum posts or search results by a spam filter or search engine because their text contains a string (or substring) of letters that appear to have an obscene or otherwise unacceptable meaning. Names, abbreviations, and technical terms are most often cited as being affected by the issue.
Specifically
> because their text contains a string (or substring) of letters that appear to have an obscene or otherwise unacceptable meaning.
Personally I use content filters to block me from going on yahoo.com (don’t ask). As a byproduct it blocks a lot of other stuff - including this article. Too ironic.
I used to have it enabled to block certain social media websites. It is unfortunate you can't just block websites without also enabling the adult content filter, especially since the adult content filter has many false positives in my experience.
Because there are many similarities and if you like 4 of the listed ones, searching for an Asian restaurant gives you more options to browse without searching for a specific thing.
I'd call that "you went to a restaurant offering Asian Fusion cuisine and confused it with a Korean restaurant". I've been in Korean restaurants where I would have leapt at the chance to order ramen; it's not something you can do. (I loathe and despise Korean food, but I was working for a Korean guy...)
We can also note that you seem to be including a lot of European food, like risotto, in the category of "Asian food".
You can acknowledge the similarity of risotto dishes to curry over rice and at the same time say they're more different from each other than Indian and Japanese curry are. Let's not kill all the nuance.
This white boy from the Midwest has never searched for "Asian food" in Cupertino or anywhere else. I have, though, frequently searched for Thai, Indian, Japanese, Chinese...
(And I see that I've been...not exactly ninja'ed, I guess, if I switch back to answer email before finishing my HN reply.)
What ripdog said. There's another comment upthread suggesting that searching for "Asian restaurants near me" would be a logical thing for someone to do, and I can't for the life of me figure out why. "Asian restaurants" are a category no more useful than "restaurants".
You assume that if there were more Asians working at Apple this wouldn't have happened. Do you think Asians spend all day searching for "Asian" on Google? How many times did you search for your race on Google last year?
It's important, of note, news, that the word 'Asian' is blocked (why else are you bothering to comment here?). Whether or not it's blocked for a semi-automated reason, this is indeed what discrimination looks like. It's the sexual fetishisation of a whole race of people bleeding into every-day life.
Since the useful information is in the headline, I don't need to click - that's the opposite of click-bait. So I disagree.
You concur that there may be a case for "Asians" being fetishized.
So Gizmodo has a totally valid case for mentioning Asians in the title, by your own admission?
Plus the title is factually true.
What was your complaint about in the first place, regarding "race-baiting" and "clickbait"?
How can something so subjective as the quality and intent of writing be judged on whether it's "logically consistent" (which, even in a more appropriate context, is a uselessly low bar)? And "intelligent"? What does such a broad complement even refer to in this case?
I'm not sure where to begin here, but towards achieving an appreciation for logic, you might enjoy something that intersects with "mathematics", such as Gödel's "Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme". I also recommend reading Bertrand Russell's "On Denoting". I'm quite curious to know, once you've digested them, whether you would change your comment. At the least, can you clarify if you truly meant "complement", or did you mean "compliment"?
If you understand logic, at the level which you claim, then you should know that "logically consistent" has no correspondence with "correct".
An "intelligent, logically consistent argument", as you say, can be plainly wrong or irrelevant.
The underlying claim that "Asian" and "redhead" are exactly analogous seems dubious to me and I disagree with that, although I can agree that it is also unfortunate that the word "redhead" is banned by this filter.
If blondes and brunettes have their lives affected in a similar manner where they are prevented from searching about themselves, I would make the same argument.
The point is that these terms that describe our identities should not be filtered out.
Just because you don't mean to be racist doesn't mean the outcome of your behavior cannot bear a pungent racial smell. When people train facial recognition models that perform worse on Black faces, we don't assume the researchers hate Black people.
However, when you're making decisions for a lot of people, it would be deeply incompetent to not understand how you damaged an experience for the majority of customers. In other words, if the majority of your customers were black, then you'd notice that "black" was a banned keyword.
When you can't have everything you want, then how you prioritize says something about you.
>Do you realize how much damage that would do if my children cannot even research things related to themselves?
I think a more likely reason they won't be able to research things related to themselves is because the term "teen" is also banned by that filter (assuming they are around that age, of course, which could be a completely wrong guess on my end).
> they will be unable to search for anything Asian
They will be unable to search for things with "Asian" as a term. They will be able to find lots of Asian things and things related to more specific locations / names / ideas.