Shirt Without Stripes (github.com)
1676 points by elsamuko 37 days ago | 617 comments



This problem is known as "attribution" - you have a "no" or "without" in the sentence, but you don't know where it belongs. One could (and one does) argue that the problem cannot be solved with statistical methods (ML), especially not in any domain where accuracy is required, such as medical record analysis: "no evidence of cancer" and "evidence of no cancer" are very different things.

Zooming out, the language field breaks into several subfields:

- A large group of Chomsky followers in academia who are all about logical rules, but offer very little in the way of algorithmic applicability, or even interest in it.

- A large and well-funded group of ML practitioners, with a lot of algorithmic applicability, but an arguably very shallow model of the language that fails in cases like attribution. Neural networks might yet show improvement, but apparently haven't in this case.

- A small and poorly funded group of "comp ling", attempting to create formalisms (e.g. HPSG) that are still machine-verifiable, and even generative. My girlfriend is doing a PhD in this area, in particular dealing with modeling WH questions, so I get a glimpse into it; it's a pity the field is not seeing more interest (and funding).


The #1 Google result for “Shirt without stripes” is this very HN post:

https://www.google.com/search?q=%22Shirt+without+Stripes%22&


I guess, even in the original results, the problem is not really that Google search did not understand the meaning of the search term (which is possible with today's models). Rather, it was a bit confused about what you are really searching for here. Maybe a comparison of shirts with and without stripes? The query was just unusual, and it is not unreasonable to guess that it was not meant literally - at least that is a valid possibility. So it is reasonable to return some results which might be related to the query, and those will include shirts with stripes.

If you argue this is bad behavior: maybe we need a search mode which really does take the query literally. Putting the query in quotes will not quite have this effect in Google. Maybe some other syntax?


> the problem is not really that Google search did not understand the meaning of the search term (...) Rather, it was a bit confused about what you are really searching for here. Maybe a comparison of shirts with and without stripes?

In this very specific case I don't buy it. Sure, it probably applies to other queries, but if you approach a salesperson and ask for "shirts without stripes", it's pretty clear what you want, and they won't bring you any piece with stripes on it.


Agreed, it's not that weird of a search. Other, similar queries like "shirt without buttons/collar/pocket" seem to work just fine.

The only difference is that those are all physical properties of a shirt while stripes is a type of pattern.


Shirt without paisley - fails

shirt without buttons - pretty much fails.

shirt without red button - as expected, shirts with red buttons


To be fair, the first and last query would also confuse me if someone asked me for that item. "Shirt without paisley" feels a bit like "cereal without elephants." You don't usually explicitly exclude an element that is relatively uncommon.


It's not just the way this is phrased, it's that there is no English-language formulation that works at all. Swap "shirt" for "tie", because paisley is quite common on ties. Now, try:

* tie without paisley

* tie not paisley

* non-paisley ties

* ties that aren't paisley

* ties other than paisley

You guessed it, in each case, at least half of the results are paisley ties. The only way to actually get what you want -- the set described by X, minus the set described by Y -- is to use the exclusion operator in the search, "ties -paisley".

This is great, and makes intuitive sense to somebody with multiple computer science degrees. But not only is it hard to explain to an outsider, it's actually quite hard to get them to think in a way that accommodates this capability, that is, in terms of set theory.


I don't agree. Paisley is a well-known pattern type and some people really dislike it. "Shirt without paisley" is not a common request but its meaning is clear (if you do an image search for paisley you get lots of fabric images, so it's not like search engines don't know it.) I'd say the same for "shirt without red buttons." In general the pattern <article of clothing> without <feature> shouldn't be that difficult for search engines---especially since many are tuned for consumers.


This is way overthinking it. The search engines aren't semantically analyzing the images. They are just matching nearby text.


I agree the parent is overthinking it, but that's underthinking it. It's been a long time since search engines were mere text matchers.


I’m uncomfortable with your use of “mere.” It might connote that what Google Search does now is an improvement.


And that's exactly what I mean to connote: text matching makes for appalling search.

I have the same reservations about Google as anyone, but rewriting history is never the right move. Moving beyond text matching was what made search truly useful.


    > Moving beyond text matching was 
    > what made search truly useful. 
PageRank is what made Google more useful than its competition. They had it since the beginning, and I like it.

What I don't like is to search for the band "Chrisma" and get results for "fruitcake sale!" because Google corrected my spelling to "Christmas", decided to look for related concepts, and then boost whichever result is the most mercantile.


> They had it since the beginning

Yes, they did. Read what I said again; I don't dispute this. What Google does now is an improvement over text matchers. I never claimed that what they do now is better than Google circa 2000 (though I don't care to register an opinion either way on that).

Whatever they've done since, their product remains better than text matchers. Mercantile search is better than terrible search.


now they are product matchers


Observations: firstly, one can assume that Google search can do no wrong. Secondly, Google search is made for noobs. Thirdly, you broke it because you are not a noob. Fourthly, this will not be fixed because of said non-noob status.


Google is in the business of producing quick results to sell ads while keeping costs low. If anyone does better, they will likely do it using a costlier algorithm, and if there is more margin in reselling tangible products than in ads, then Amazon is incentivized to use costlier algorithms that are more accurate. But I think the margin in both cases does not justify the use of algorithms that may be more accurate in some small percentage of cases but a lot more costly.


animalCrax0rx is in the business of producing quick pithy posts to earn more karma while keeping understanding low and making claims that are not evidence based...

See how that works? That's not really what's going on. Sure, G. is incentivized to include pages quickly, but they are also incentivized to produce them accurately, and as the above poster indicates, this is quite a hard problem to solve generally.

A is also incentivized to sell items.

In many cases different algorithms will lead to quantifiably different results. The algorithm changes that work better for the measurement set will be kept and those which don't will be discarded. And both A and G do that within different constraints.


My cursor was hovering over the downvote arrow while reading your first sentence, before I realized what was going on in your post. Thank you for pleasantly surprising me!


Google is aware of this problem in their search approach. It's a business problem, not a technical one. You're saying the same thing in suggesting they base their decisions on some measurement set. If solving the problem adds complexity, which it certainly will, and there is not enough improvement in accuracy for the majority of cases in their measurement set, why bother? You sound like the kind of person that attacks people for their opinion. So weird, dude.


Your ad hominem on the other poster is unnecessary, violates the HN guidelines, and is not an apt comparison.

Pointing out the obvious: Google is an advertising company. If the cost of producing an accurate result outweighs the advertising income on a given term, there is no incentive for Google to produce better results.


This would predict that a query that has no advertising income will return no results, which is clearly not the case.

Having a search engine that people go to whenever they want to search for things is incredibly valuable, because they will come to you when they want to buy things and you can sell ads. But unless you consistently give the best results for all queries, people will go wherever does. It's worth investing strongly in all queries, not just highly monetizable ones.

(Disclosure: I work for Google, speaking only for myself)


> This would predict that a query that has no advertising income will return no results, which is clearly not the case.

I was about to say there are no such queries, but then I remembered having to type a captcha for seemingly automated queries. The captcha page has no results on it, obviously. This is because automated queries do not produce advertising revenue; you have to buy them.

I've typed an insane number of queries since the beginning. A decade ago I used to be able to find truly exotic articles; I could find every obscure blog posting on every blog with 3 readers, and I was pretty sure Google delivered all of it. The tiny communities that came with the super niche topics rarely produced a link I didn't already find. If they did, it was new and I hadn't googled for a while.

Today Google feels like a pre-ordered list from which it removes the least matching articles. Only if the match is truly shit will it be moved slightly down the page. The most convincing evidence of this is typing first name + last name queries in image search and getting celebrities who only have the first or the last name.

People won't go; it has to get much worse before they do.

edit:

With humans and pets a good slap over the head or a firm NO! will usually do the trick.


> I was about to say there are no such queries

There are very clearly many queries with no advertising revenue, because there are many queries that show no ads. Trying some searches off the top of my head that I expected wouldn't have ads, I don't get any ads on [cabbage], [who is the president], [3+5], or [why is the sky blue]. On the other hand, if I search for a highly commercial query like [mesothelioma] the first four results are ads.

> A decade ago I used to be able to find truly exotic articles; I could find every obscure blog posting on every blog with 3 readers

My model of what happened is that SEO got a lot better. When Google first came out it was amazing because PageRank was able to identify implicit ranking information in pages. Once it's valuable to have lots of backlinks, though, this gets heavily gamed. Staying ahead of efforts to game the algorithm is really hard, and I think a lot of times people's experience of a better search engine comes from a time when SEO was much less sophisticated.

> The most convincing evidence of this is typing first name + last name queries in image search and getting celebrities who only have the first or the last name.

This hasn't been my experience, so I tried an image search for [tom cruise], curious if I would get other Toms. The first 45 responses were all of the celebrity, and image 46 was of Glen Powell in https://helenair.com/people/tom-cruise-helps-glen-powell-lea... which is a different kind of mistake. Do you remember what query you were seeing this on?


> This hasn't been my experience, so I tried an image search for [tom cruise]

I believe what he means is that searching for first name + last name of someone who isn’t a celebrity gets you celebrities who match either the first name or last name.

Searching for Tim Neeson gets you a wall of photos of Liam Neeson: https://www.google.com/search?q=tim+neeson&tbm=isch

Searching for Tim Cruise blankets you with pictures of Tom Cruise, but it at least says “Showing results for tom cruise“ so you know it did an autocorrect. When I tried other first names + Cruise, the effect is less pronounced than with the Neeson example. Maybe it’s because cruise is a more common name as well as an English word.


Thanks for clearly articulating what many people on HN seem to fail to grasp. It's not that Google got worse over the years at surfacing the obscure content they used to so easily find. That obscure content has gotten completely buried under the mountain of content being published every day, the cat and mouse game of SEO has evolved rapidly, and the problem space of generalized search is so much harder these days than it was 10-15 years ago. Not to mention the broader user base that they have to serve as well.


It is the same thing! The mistake is obvious: for-profit content is prioritized. Google is the driving force behind the for-profit internet, but sadly for Google, you can't do organic ranking on commerce. The index is now a [very limited] snapshot of the glory days.

You don't have to bother creating anything new unless you have something to sell and are willing to invest (big).

Facebook is actually a pretty pathetic implementation where we can still find content created by normal people. If people made traditional websites instead of Facebook groups and Facebook pages, NO ONE would be able to find it.

We've witnessed the great obliteration of what was once a nice place, and now we have to hear Google was not to blame?? The death by a thousand cuts is actually well documented.

We tell you what your site must look like or we'll gut it:

https://en.wikipedia.org/wiki/Google_Penguin Google Penguin is a codename[1] for a Google algorithm update that was first announced on April 24, 2012. The update was aimed at decreasing search engine rankings of websites that violate Google's Webmaster Guidelines[2]

There, this is what the entire internet must look like. We went from indexing to engineering here.

https://support.google.com/webmasters/answer/96569?hl=en

  rel="ugc"  
We recommend marking user-generated content (UGC) links, such as comments and forum posts, as ugc.

If you want to recognize and reward trustworthy contributors, you might remove this attribute from links posted by members or users who have consistently made high-quality contributions over time. Read more about avoiding comment spam.

Before this, those contributing elaborately got actual credit for it. Do you think you got a choice in it? Google clearly demands you ban credit for comments. OR ELSE!

  rel="nofollow"  
Use the nofollow value when other values don't apply, and you'd rather Google not associate your site with, or crawl the linked page from, your site. (For links within your own site, use robots.txt, as described below.)

Whoa, association! How did we go from linking-to to association? It was important enough for readers, but be careful to hide it from Google. Such little unimportant websites simply shouldn't exist in our index. We command you to help keep our index clean of such filth!

Then the magical: we won't actually tell you what is wrong with your website! Ferengi Rule of Acquisition 39: "Don't tell customers more than they need to know." Get a budget and hire someone to do SEO. Deal with it, we don't care. No, you don't get any feedback.


> There are very clearly many queries with no advertising revenue, because there are many queries that show no ads.

Queries without ads do produce revenue. They are an essential part of the formula.

Think of people standing around in bars. We can't argue that just standing there doesn't produce revenue.

The flowers on the table in a restaurant produce revenue.

Free parking produces revenue.

If queries without ads didn't produce revenue they wouldn't exist. Often enough it doesn't even take an extra query; the ads will sit behind the links.


I don't think we disagree? Above I wrote: "Having a search engine that people go to whenever they want to search for things is incredibly valuable, because they will come to you when they want to buy things and you can sell ads."


> This would predict that a query that has no advertising income will return no results, which is clearly not the case.

No, it would predict that a query that has no advertising income will return poor results. You can determine on your own whether that is the case.


This is pointless overcomplication. I might agree if the example were slightly more interesting, but "without stripes" isn't even "absence of <stripes>"; it is essentially a colour/pattern and can be attributed to a range of things exactly the same way "green" can be. Google Translate correctly associates much more dubious and abstract concepts than that, and does it with statistical methods, i.e. associating word combinations with a location in a vector space. The fact that all major search engines fail to do it here is just shameful. Especially Amazon, where it is pretty much a primary search function.


Google translate doesn't understand what it translates either. It just relies on a very large corpus of parallel texts.


can you formalize/quantify "understand" please


Something like: build a representation of the text that's language-independent and integrated with other knowledge domains, and predict the receiver's reaction to a verbalization of a representation.

But it isn't necessary to formalize any of it. At the current level of sophistication, our informal common ground on words like "understanding" suffices for a discussion. It's obvious Google Translate doesn't resemble human language processing.


> but you don't know where it belongs

Yes, the English grammatical rules make it unambiguous where it belongs. This is solvable.


A shirt is an item of clothing. Items of clothing are made from material. Some materials have patterns. One type of pattern is stripes.

Seems like a matter for logical inference. At which point it becomes fairly easy to find shirts made from material where that materials pattern is not stripes.

But yes, no AI I have seen works reliably on even basic queries like this.


What you've described is not simple logical inference, it is logical inference from common sense knowledge. This is an extremely hard problem, much harder than solving attribution in such simple cases.

Most likely, common sense reasoning will be required to get full natural language processing, since human communication relies extremely often on such reasoning. But building a knowledge base of common sense facts will be one of the hardest challenges ever attempted in machine learning/artificial intelligence.


As a postdoc in computational linguistics, my go-to example for talks is asking Siri not to show me the weather.


Have any of these talks been published?


... Siri doesn't show you the weather by default. I know your point is about shallow parsing, but there's a reason it still kinda works.


For "Hey Siri, don't show me the weather!" I get "Here's the forecast for today"


Exactly. But it was already not showing you the weather before you said anything. The query stymies shallow parses NLP assistants do, but isn't one you would actually give in real life.


> This problem is known as "attribution" - you have a "no" or "without" in the sentence, but you don't know where it belongs. One could (and one does) argue that the problem cannot be solved with statistical methods (ML), especially not in any domain where accuracy is required, such as medical record analysis: "no evidence of cancer" and "evidence of no cancer" are very different things.

Couldn't you just parse the sentence into a dependency tree and look at the relationships to figure that out? CoreNLP got both of your examples right (try it at http://nlp.stanford.edu:8080/corenlp/process, can't link the result directly).
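For illustration, here is a minimal sketch of that idea using spaCy rather than CoreNLP (my assumption: it needs the en_core_web_sm model installed, and parses vary by model, so treat it as a sketch rather than a robust negation-scope detector):

  import spacy

  nlp = spacy.load("en_core_web_sm")

  def negated_noun(text):
      """Guess which noun a 'no'/'not'/'without' attaches to."""
      doc = nlp(text)
      for token in doc:
          if token.lower_ in ("no", "not", "without"):
              # Determiners ('no cancer') hang off the noun they negate.
              if token.dep_ == "det":
                  return token.head.text
              # Prepositions ('without stripes') carry their object as a child.
              for child in token.children:
                  if child.dep_ == "pobj":
                      return child.text
      return None

  print(negated_noun("no evidence of cancer"))   # evidence
  print(negated_noun("evidence of no cancer"))   # cancer
  print(negated_noun("shirt without stripes"))   # stripes

The parse tree distinguishes the two medical sentences precisely because the negator attaches to different heads.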


It seems like the attribution problem is an English problem. The query doesn't have to be English.

https://www.google.com/search?q=shirt+-"stripes"


You've solved the problem of accuracy, but you've now introduced a huge recall problem. Your query will never find a page advertising 'the best shirts without stripes', as it contains the word 'stripes'.

To be useful, Google must solve natural language problems. You can't solve natural language problems by using formal language in some bits of the problem, at least not until we have a full Chomsky-style understanding of the whole of human language.


English is fine; googling for "stripeless shirt" works well enough. The question "shirt without stripes" is begging us to ask whether we might want to reevaluate projections of imminent AI takeover.


Just being silly here but the same issue happens if you do the search in Spanish :)

https://www.google.cl/search?q=polera+sin+rayas


It does, if you're running a search engine for a general audience. The `-` operator is useful, but it's almost an admission of defeat: what most users want is to be able to describe what they're looking for in reasonable English and get relevant results back. Having `-` for advanced users is useful, but it's not friendly to the majority of users.


> you have a "no" or "without" in the sentence, but you don't know where it belongs

Well, one could argue that it belongs exactly where anyone entering the query put it: before "stripes".

The problem is often that search engines try to be too clever while not offering any kind of "exactly those words in this order" switch, and that is just a bad user interface.


If this were a problem of attribution, I would expect to see results that are either shirts without stripes or not-shirts with stripes.

If it just disregards the word without, well, that's pretty bad.


I am a little surprised by this result. When I worked on similar products we constantly looked at our query stream, sorted the high-volume queries, and manually intervened to present better results to our users.

I would not be surprised if millions of dollars per year are being lost because of substandard query results like this.


I think these hyper-companies are really attached to doing things "at scale" and sometimes, or even often, they get carried away with that.


If the problem is that it didn't know where to apply the "without", then why does it show me results from only a single entity? I would prefer to see an interleaved set of results containing all ambiguous entities.


I think that given the complexity of the problem they don't even try to parse the sentence and do attribution - they just shotgun with ML and hope for the best.


There is this, too, from 4 years ago, which seems reasonable in my not-very-well-informed opinion (speaking of which, I'm not sure the work referenced here can deal with hyphenated negation, but it should be simple to include):

https://www.aclweb.org/anthology/P14-1007.pdf

code: https://github.com/ffancellu/NegNN


Thank you, I knew my effort was not for naught!


Yes, you have points, but they break down here:

“Shirt -stripes” is unambiguous to a system, yet the first result on Amazon(.ca) is a striped shirt, and the 3rd is sweatpants.


As someone else said, you don't know that's wrong for what Amazon is optimising. If they find people [with your background profile] who buy shirts are susceptible to buying sweatpants, they might also find that if they seed you with "sweatpants" as an idea up front, the repeated presentation of sweatpants in "people who bought X also bought Y" sections is more effective.

That's the sort of thing I'd expect Amazon to be doing?


Like, when was the last time this was considered ideal?

“Yes, I would like an unstriped dress shirt please”

“How about this striped shirt?”

“No thank you, I would like an unstriped dress shirt please”

“I have some lovely jogging pants”

“Ok, I need to be clear here, I would like a dress shirt that has no stripes”

“Can I interest you in a white undershirt? People who buy dress shirts usually buy undershirts”

....


I think if you were visiting a personal tailor, or perhaps talking to a shop assistant, you might get something akin to that.

T: I think pink would look good on you, and it's very fashionable right now.

You: Just bring me some yellow shirts to try.

T: Oh, I got these, and brought this pink one anyway; try it!

But, of course Google isn't making fashion suggestions. But then, ... the tailor might also be just trying to shift excess stock or be on a bonus for selling that particular high-cost shirt.


Lot of mental gymnastics in this thread to defend some godawful AI.


To be blunt: My personal tailor’s first response should be to do what I asked.

They can certainly also bring some stock to shift, or offer suggestions while I’m trying something on, but if they aren’t listening when I make a direct request or when I clearly say no, then they aren’t really there for me, their customer.


Tbh, I feel like the underpinning “problem” is that more and more these marketplaces are optimizing for what they want to sell, and seemingly ignoring blatant requests.

I’m an odd one in that I already know specifically what I want to buy before I search for it, but I’m certainly not the only one (and I think everyone has done that at least once).


it works on google pretty well for me


This doesn't seem like something you need fancy linguistics ML to fix. You take 90s-era search engine tech, add a database of attributes about every kind of product there is, and take a guess at whether "without" is a search term, a search result modifier, or a filter on an attribute of a product. When you display the results, simply ask the user if that was right; if it wasn't, ask them if they preferred the other filter method. Use those responses as a corpus to train the algorithm.

I mean, context is key, right? You're on Amazon and your first search term is "shirts". Unless there is a band called "shirts without stripes", the user wants shirts. The rest of the query is probably some filter on that product. You know shirts sometimes have stripes. It's not a one-size-fits-all algorithm, but it's simple enough that the user should end up with the results they wanted.
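A rough sketch of that heuristic (the attribute table and names are all made up for illustration):

  # If "without <known product attribute>" appears, treat it as a
  # negative filter instead of a literal search term.
  SHIRT_ATTRIBUTES = {"stripes", "buttons", "collar", "pocket", "paisley"}

  def parse_query(query):
      tokens = query.lower().split()
      terms, excluded = [], []
      i = 0
      while i < len(tokens):
          if (tokens[i] in ("without", "no") and i + 1 < len(tokens)
                  and tokens[i + 1] in SHIRT_ATTRIBUTES):
              excluded.append(tokens[i + 1])  # attribute filter
              i += 2
          else:
              terms.append(tokens[i])         # ordinary search term
              i += 1
      return terms, excluded

  print(parse_query("shirt without stripes"))  # (['shirt'], ['stripes'])

The "ask the user if that was right" step would then feed back into which interpretation wins for ambiguous queries.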


I think you’re mystifying a lot of people in this thread! Can you add to your explanation why it’s hard for ML to associate the negation with “stripes”? It seems easy: language => English; in English, “without” modifies the following phrase, not the preceding.


Less than a layperson here, but in your example

> "no evidence of cancer" and "evidence of no cancer" are very different things.

Why is it not as simple as "no belongs to the word it precedes"? Like a unary operator, ! (not), in typical computer languages.


- evidence of no liver cancer

- no textbook evidence of cancer

Statements have structure; parsing them with simple rules like this is akin to parsing C++ with regular expressions.



You need to actually build up noun comprehension though, because "evidence of no remaining cancer" or other qualifier words can greatly confuse the situation.

You'd also have quite a bit of fun trying to parse the phrase "no means no" or other usages where "no" is being used as a noun... And for bonus points, folks talk to search engines in broken English all the time, so "shirts no single striped" is a totally reasonable query to submit to a server and expect to be parseable.


The broken English aspect also happens because we know search engines don't understand English anyway.


I don't know the usage stats on AskJeeves, but that search engine specifically purported to do well when asked a question in English. The tech at the time wasn't anywhere near being able to support it, but their advertising targeted it. That all said, I never typed in "Who is Derek Jeter?" and I don't know anyone who did.

I think the basic issue is that people just don't respect machines and want to minimize the amount of effort spent on communicating with them - I don't say "Alexa, please bring up songs by Death Grips if you don't mind", I shout "Alexa! Play! Death Grips!" and then yell at it when it misunderstands.


Because there's a big difference between English and computer/query languages. You can deduce a lot from a query language, but not necessarily from English.


Reminds me of the confusion with negatives across languages:

-Vill du inte ha glass? [Don't you want ice cream?]

-はい [Yes]

Does she want ice cream? Answer: no, she doesn't. I added a "not", so she's reversing the answer, as Japanese speakers do.

The number of times I've been dumbstruck by this is larger than I'd like to admit, and I'm a coder.


English has similar confusion even for plain, non negated questions.

Q: "Do you mind if I sit here?"

A1: "Not at all!"

A2: "Sure!"

Both are valid answers and mean the same thing, the person asking is welcome to sit there. This has always amused me.


That's an excellent example of why statistical NLP outperforms discrete parsing using logical rules. That A1 and A2 mean the same thing is clear to us from the context. This is something a continuous vector space model can capture and a discrete rule-based model cannot.


I'm not sure I understand: "mind" means "object to", so they are asking "Do you [object to] me sitting here?"

"Not at all" == "I do Not [object to you sitting here] at all"


Definitely what you say is the accepted meaning. But wouldn’t you say A1 is more correct than A2? Possibly relevant, there’s also

A3: “Sure I do, last time you sat next to me you wouldn’t shut up.”


I don't think this is about a lack of interest.

There have been some lengthy discussions on HN about vertical search and how Google doesn't always buy up a small company; they litigate.


The latest embeddings/networks like BERT can handle encoding this logic. They take the surrounding words into account as context when they're encoded.
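A quick sketch of what that means in practice, assuming the HuggingFace transformers library (the exact number will vary; the point is only that context shifts the vector):

  import torch
  from transformers import AutoModel, AutoTokenizer

  tok = AutoTokenizer.from_pretrained("bert-base-uncased")
  model = AutoModel.from_pretrained("bert-base-uncased")

  def embed(sentence, word):
      # Contextual embedding of `word` as it appears in `sentence`.
      # Assumes `word` is a single wordpiece in BERT's vocabulary.
      enc = tok(sentence, return_tensors="pt")
      with torch.no_grad():
          hidden = model(**enc).last_hidden_state[0]
      idx = enc.input_ids[0].tolist().index(tok.convert_tokens_to_ids(word))
      return hidden[idx]

  a = embed("shirt with stripes", "shirt")
  b = embed("shirt without stripes", "shirt")
  print(torch.cosine_similarity(a, b, dim=0).item())  # < 1.0: "without" changed the encoding

Whether a downstream ranker actually uses that distinction is a different question, as the screenshots show.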


While this is indeed an example of the attribution problem, I'd argue that this particular query will never be solved. I don't search for a "shirt without stripes"; I search for a "solid <insert color here> shirt" or a "<insert color here> hawai'ian shirt".

I'd be curious to see how many sentences with attribution problems actually have other structural issues. If I want to write clearly and without ambiguity, I rewrite sentences that have these problems. Why wouldn't I do the same for search queries?


I don't think this particular problem is related to the language model. "[item] without [attribute]" is trivial to understand even without a sophisticated language model.

The bad results are because they're not positively indexing the absence of the feature by deeply analyzing the images or products beyond the descriptions. "Shirt with stripes" yields almost exclusively striped shirts. Exclude those results from all "shirts" and there are still a lot of striped shirts that the search algorithm doesn't know enough to exclude.
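A toy illustration of that gap (all data made up): a set difference over the index only removes shirts whose descriptions mention stripes, not shirts that merely look striped.

  all_shirts = {"plain-tee", "striped-oxford", "beach-shirt"}
  indexed_as_striped = {"striped-oxford"}               # description says "stripes"
  actually_striped = {"striped-oxford", "beach-shirt"}  # what the photos show

  results = all_shirts - indexed_as_striped
  print(results)  # {'plain-tee', 'beach-shirt'} - a striped shirt slips through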


Because there aren't two kinds of shirt, striped or solid coloured. You even acknowledge that by mentioning Hawai'ian shirts. You might want a small check shirt, a mostly plain shirt but not mind logos so not completely monocolour, a plaid shirt, a large check lumberjack style, spotted or dotted or diamond pattern, flowery pattern, you might not even know what the other available options are to search for them one by one - you just know that you don't want stripes.

There is no ambiguity in "not stripes", you can't invert it and write it in the positive form of what you want; the neatest way to describe the category of what you want to browse is "things which are not stripey".

A particular personal bugbear is car websites where you can filter on "petrol engine" or "diesel engine", but there is no support for negative filtering, so you can't choose "not LPG". In so many search-and-filter options you can't exclude your dealbreakers, and it's much more likely that I have a single dealbreaker which rejects a choice overriding all other considerations than that I have a single dealmaker which makes a choice overriding all else.


Is it that odd though? I'm not a native English speaker, but I would say "shirt without patterns", not "solid". Or, for example, "without visible logo".


Consider the query, "non-glass skyscrapers", which suffers from the same problem.

What do you call a skyscraper like that if you want to refer to it? They exist, but you can't find them using that search term on Google.



You missed the Seattle Tower Building. It has windows, but very little in the way of visible glass.

https://www.emporis.com/buildings/119453/seattle-tower-seatt...

Windowless is a superset of glassless.


To be clear (for the benefit of anyone else who reads this), the windows in the Seattle Tower are made of glass, but the exterior of the building is not that modern all-glass-and-steel look[1]. This is a third interpretation of non-glass I hadn't thought of and it took me a minute to figure it out.

"non-glass skyscraper":

1. No glass used in the exterior construction at all -> implying no windows

2. No glass used in the exterior construction at all -> implying the windows are made out of something other than glass

3. A skyscraper in which glass is not a prominent architectural feature, but the building does contain features like windows and doors that contain glass. (This comment)

[1] https://en.wikipedia.org/wiki/Wilshire_Grand_Center


It comes down to what is considered a "window". When the whole external material is glass, glass panes aren't just windows anymore.

That's why the full-glass buildings are returned in your windowless query.


You seem to think that queries of this form are ambiguous, but they are not. Given the set of shirts, and that the subset of 'shirts with stripes' is well-defined (which you apparently accept, given that you seem to think 'shirts with stripes' is an unambiguous query (which it is)), its complement is well-defined; there is no 'Russell's Barber' paradox here, as there is no self-reference.


>If I want to write clearly and without ambiguity, I rewrite sentences that have these problems. Why wouldn't I do the same for search queries?

Ok, so imagine one online retailer follows your advice and expects users to write clear and unambiguous queries, while another retailer puts extra effort into attribution.

Which one will make more money?


Great question, but with potentially nonintuitive results.

A sales gimmick furniture stores would use in the past was to offer customers a free gallon of ice cream for visiting the store. The value was to the store offering the promotion: shoppers would be drawn to the "free" gift, but on receiving the ice cream -- too much to eat directly -- would then have to go home to put the dessert in the freezer, and have less time to comparison shop at competing merchants' stores. Given limited shopping time (usually a weekend activity), this is an effective resource exhaustion attack.

Similar tricks to tie up time, patience, or cognitive reserve are common in sales. For a dominant vendor, tweaking the hassle factor of a site so long as defection rates are low could well be a net positive, if it makes the likelihood of a visitor going to other sites lower.


That's a fascinating story, thank you.

Still, I insist that the business serving up more relevant search results for loosely phrased queries will make more money than the one relying on the user to formulate perfect queries.

That's my story and I'm sticking to it.


I'd love to believe that myself. I absolutely hate ineffective & irrelevant search results.

See Scott Adams, "Confusopoly" (2011): https://www.scottadamssays.com/2011/12/07/online-confusopoly...

I've touched on this: https://old.reddit.com/r/dredmorbius/comments/243in1/privacy...

The antipattern is sufficiently widely adopted that I've been looking for possible dark-pattern justifications.


The original "confusopoly" link talked almost exclusively about pricing, and for good reason: pricing is based on numbers, which humans are bad at but computers are very good at, and every product in the catalog has a price, so it's easy to take the same tactic and apply it to all of the products.

I'm not sure trying to confuse people about whether a shirt has stripes on it would make as much sense. The purchaser seems likely to give up on picking an ideal shirt and just go with the cheapest result.


I thought I'd written on a more comparable gripe, similar to the "shirt without stripes" problem in online commerce; the confusopoly item was the closest I could find readily. (The other is likely among my G+ take-out.)

Both though have the same essence: a manifestly confusing and annoying interface may be serving the merchant's interests.

See also Ling's Cars, possibly explaining awful Web design:

https://ello.co/dredmorbius/post/7tojtidef_l4r_sdbringw (HN discussion: https://news.ycombinator.com/item?id=16921212)


Fascinating, do you have any links to papers about machine-verifiable formalisms?



The point of the OP is that they claim they understand everything. Example: https://www.blog.google/products/search/search-language-unde...


Just because they've implemented something that helps in a certain area doesn't mean they'll always get it right.

I think you'd struggle to find anywhere Google claims to "understand everything", making your assertion a strawman.

Literally in the article you're quoting from Google:

> But you’ll still stump Google from time to time. Even with BERT, we don’t always get it right. If you search for “what state is south of Nebraska,” BERT’s best guess is a community called “South Nebraska.” (If you've got a feeling it's not in Kansas, you're right.)


From that link:

"So that’s a lot of technical details, but what does it all mean for you? Well, by applying BERT models to both ranking and featured snippets in Search, we’re able to do a much better job helping you find useful information. In fact, when it comes to ranking results, BERT will help Search better understand one in 10 searches in the U.S. in English, and we’ll bring this to more languages and locales over time.

"Particularly for longer, more conversational queries, or searches where prepositions like “for” and “to” matter a lot to the meaning, Search will be able to understand the context of the words in your query. You can search in a way that feels natural for you.

...

"No matter what you’re looking for, or what language you speak, we hope you’re able to let go of some of your keyword-ese and search in a way that feels natural for you. But you’ll still stump Google from time to time. Even with BERT, we don’t always get it right. If you search for “what state is south of Nebraska,” BERT’s best guess is a community called “South Nebraska.” (If you've got a feeling it's not in Kansas, you're right.)

"Language understanding remains an ongoing challenge, and it keeps us motivated to continue to improve Search. We’re always getting better and working to find the meaning in-- and most helpful information for-- every query you send our way."


OK, but what if an Amazon algorithm has actually learned that people who search for "shirt without stripes" are more likely to buy more things if the first image they see is a picture of a striped shirt?


Isn't this just a version of "God works in mysterious ways" applied to AI?

   "The AI works in mysterious ways. Trust it."


It's the efficient markets hypothesis applied to AI. "If the AI could make more money by showing something else, it would be showing something else".


The original version of which talks of an "invisible hand", and if that's not a metaphor for divine intervention hidden by a chaotic system, I don't know what is.

It's rather surprising how often almost all complex systems theories, be it AI, cosmology or economics, have aspects where even the theorists resort to "because it is".

Sometimes those statements are based on measured data, but it's not always easy or possible to do so accurately for a highly interconnected system, or worse, a system where you have actors reacting to a theoretical model in a way that changes how the system behaves.


In that case the applied AI stops being a search tool (as was the purpose of the search bar) and becomes a new ad tool. And this masquerading is not a great thing at all, for the same reason people don't like bots pretending to be humans during phone calls.


But it’s not a search tool. It’s a make-Amazon-money tool, as all their tools are. I think you misunderstand why Amazon builds these tools if you think they are there to make your life easier in trying to locate things to buy on their site. That’s a happy coincidence. They build them to make money.


I don't think it violates the efficient markets hypothesis. I think that entails "if it was possible for someone to create an AI that would make more money by showing something else, the AI would be showing something else." Roughly, efficiency means that the most profitable solution will be the one that wins out, but "profitability" includes the costs of building such an advanced system, assuming that it's possible at all.


I think it's more likely that they've A/B tested faulty algorithms and picked the one that's the least faulty.

I don't have proof, but I strongly believe that a search algorithm that returns what a customer is actually searching for will drive more sales. I suppose it's possible that with time, consistently bad results will beat a customer into submission and drive more sales of stuff the customer doesn't want. But I don't believe that's true, and this would only be the case if the customer accepts that the thing they want doesn't exist. If the customer is pretty sure that solid color shirts exist, they'll just shop elsewhere until they find it.


It's a tricky balance between sales and relevance. If you don't watch for relevance you will end up showing only booze and porn in the commerce search results, border-line-porn in video search (true story), and so on.

edit: fixed typo born -> porn


Could you explain the border-line-born bit? I don't get it.


It's a typo, he meant porn


Then the algorithm would not be acting in the customer's interests.

Presumably this would be after the algo devalued people who clicked on "Next Page" until they came to a page that had stripeless shirts on it, or who, after the search, only ever clicked on stripeless shirts. "Deeds not words," dontchaknow.


But the algorithm wasn’t programmed to act in the customer’s interest. It was programmed to act in Amazon’s.


If that's true, shouldn't we see the same kind of results when searching for "plain shirts", then?

Which is not the case: searching for "plain shirts" does not give results similar to searching for "shirts without stripes".


OK, what if not? See, your comment makes no sense


Why do people make false claims while linking directly to a primary source that unequivocally and repeatedly contradicts their claim?

> sometimes still don’t quite get it right,

> Even with BERT, we don’t always get it right.

And nothing in the blog is about image search.


The fact that they put some disclaimer after bragging doesn't discredit the author's point that those companies brag about AI but still fail to fulfil basic queries.


The point that the author is making, in a very understated way, is that all three companies have PR websites that breathlessly describe their advanced AI capabilities, yet they cannot understand a very simple query that young children can.


That point is akin to stating: these three companies have not solved the hard problem of common sense [1], so they are not allowed to advertise their AI without looking silly.

Nobody has solved the common sense knowledge problem yet. A solution for that would qualify as Artificial General Intelligence and pass the Turing Test.

But search engines have come a long way. I even suspect that if search engines placed too much logical or embedding relevance on stop words such as "without", the relevance metrics would, on average, go down. The word is not completely ignored, as "shirt with stripes" surfaces more striped shirts than "shirt without stripes" does. "shirt -stripes" does what you want it to do.

Searching for "white family USA" shows a lot of interracial families. Here "white" is likely not ignored as much, and thus it surfaces pages with images where that word is explicitly mentioned, which is likely happening when describing race.

You can use Google to find Tori Amos when searching for "redhead female singer sings about rape". Bing surfaces porn sites. DDG surfaces list-type results (top 100 female singers). The Wikipedia page that Google surfaces does not even contain the word "redhead", yet Google falls back to list-style results when you remove "redhead" from the query, suggesting "redhead" and "Tori Amos" are close in its semantic space. That's impressive progress over 10-20 years back.

[1] https://en.wikipedia.org/wiki/Commonsense_knowledge_(artific...


At least this is relatively innocuous. Until recently if you did a Google Image Search for "person" or "people", it only showed white men.


One can play this game a lot and most results will return the expected culturally biased results. A "kind person" is apparently a white girl. A "good person", a white woman. A "bad person", white men. An "evil person", white men. An "honest person", an equal mix of white women and white men. A "dishonest person", white men in suits. A "generous person", hands of white women. A "happy person", women of color. An "unhappy person", old white men. A "criminal person", Hispanic men. An "insane person", white men. A "sane person", white women.

Is it surprising that very few of the results surprise me?


Downvoted because this is just a lie.

"Kind person" - pictures of men women, children, of all ages and colors.

"good person" - Mostly pictures of two hands holding. No clear bias towards women at all. If anything, more of the hands look "male".

"Bad person" - Nearly 100% cartoon characters

Absolutely ridiculous that you would take the time to write up such fake nonsense.


Google searches are not reproducible, different users can get different results on the same query.


Yes. If I had the energy and time to do a properly researched data set, I would have a bot search through the top 100 common words associated with either warmth (sociability and morality) or competence, and then use a facial recognition system to go through the first 100 images of each to determine the distribution of gender, age and skin color.

Following the stereotype content model theory, I would likely get a pretty decent prediction of what kind of culture and group perspective produced the data. You could also rerun the experiment in different locations to see if it differs.


FWIW, this is most likely not a bias of the search engine, but just a reflection of its sources (mostly stock image platforms, I suppose). So if most stock images of blue trolls were labelled with "politician", you'd eventually find blue trolls when searching for "politician".


Did you google all of them?


Yes. I thought about words people use in priming studies, usually in order to trigger a behavior, and just typed the word with a space and "person" appended.

I did use images.google.se in order to tell Google which country I wanted my bias from, since that is the culture and demographics I am most familiar with. I also only looked at photos of a person and ignored emojis.

I have also seen, here on HN, links to websites that have captured screenshots of word associations from Google Images and published them, so you could click a word and see the screenshot. They tend to follow the same lines as above, but with some subtle differences, and I suspect that is the country's culture being just a bit different from mine.


You really should link to screenshots of your results so people can judge for themselves.

I just submitted all your searches to google.com from Australia, and the results were nothing like what you described; all the results were very diverse.

This is to be expected, as Google has been criticised for years for reinforcing stereotypes in image search results, and has gone to great effort to adjust the algorithms to reduce this effect.


I usually don't spend time producing evidence since no one else does; nor did the parent comment, or you for that matter. It also tends to derail discussions onto details and arguments over word definitions.

But here, not that I think it will help: https://www.recompile.se/~belorn/happyvscriminal.png

First is happy person. Out of 20 we have 14 women, 4 guys, 2 children.

Second is criminal person. The contrast to the first image should be obvious enough that I don't need to type it.

If I type in "person" only I get the following persons in the first row in following order: Pierre Person (male) Greta Thunberg (female) Greta Thunberg (female) Unnamed man (male) Unnamed woman (female) Mark zuckerberg (male) Keanu Reeves (male) Greta Thunberg (female) Trump (male) Read Terry (male) Unnamed man (male) Greta Thunberg (female) Greta Thunberg (female) Unnamed woman (female) Unnamed woman (female)

Resulting in 8 pictures of females and 8 of males, which I must say is very balanced (I don't care to take a screenshot, format, and upload, so if you don't trust the result then don't).

Typing in "doctor", as someone suggested in another thread, I get in order (f=female, m=male): fffmffmmmmfmmfffmfmfmmmff

and Nurse: fffmffmfmmffmffmfffmffmffff

Interestingly, the first 5 images have the same order of genders and both are primarily female, though doctor tends to equalize a bit more later while nurse tends to remain a bit more female dominated.


Thanks for the screenshot. It helps (and by the way, yes, the onus is on you to provide evidence, as you're the one making the original claim).

Your initial comment said "Happy person", women of color.

But your screenshot showed several white people, several men, and a diversity of ages. Yes, more women, which is probably reflective of the frequency of photos with that search term/description in stock photo libraries and articles/blog posts featuring them. No big deal.

You also said "Criminal person", Hispanic men

But the screenshot contains more photos of India's prime minister than it does of Hispanic men. In fact I can't see any obviously-Hispanic men, and the biggest category in that set seems to be white men (though some are ambiguous).

The doctor and nurse searches suggest Google is making some effort to de-bias the results against the stereotype.

To me the biggest takeaway is that image search results still aren't very good at all, for generic searches like this.

Indeed it's likely that they can't be, as it's so hard to discern the user's true intent (for something as broad as "happy person"), compared to something more specific like "roger federer" or "eiffel tower".


I couldn't quite believe your comment when I read it so I did a Google image search for "person" and the results weren't a lot better than you'd suggested. Mostly white men, a few white women, a very few black women, a handful of Asians, and multiple instances of Terry Crews.

The net result of that Google search, combined with the "Shirt Without Stripes" repo, leaves me even more unimpressed with the capabilities of our AI overlords.


I think the skewing of results lessening your impressed-ness is the wrong takeaway. If anything, the AI is a more perfect mirror of the society it learned from than you expected. Perhaps the right way to look at it is that we are capable of producing things that we don't understand, that are more sophisticated than we realize.


You may be right. It's been bugging me since I posted earlier, so I fired up a VPN with an endpoint in Japan, along with a private browsing session in Firefox, to see if I got different results. As it happens the results were interesting:

- If I entered "person" I'd see a mix of images substantially similar to what I saw using google.co.uk up to and including Terry Crews, which was frankly a little weird, and otherwise mostly white

- If I entered "人", which Google Translate reliably informs me is Japanese for "person", I'd see a few white faces, but a substantial majority of Japanese people

So it seems possible that Google's trying to be smart in showing me images that reflect the ethnic makeup I might expect based on my language and location. I mean, it's doing a pretty imperfect job of it (men are overrepresented, for one), but viewed charitably it's possible that's what's going on.

Is the case for woke outrage against Google Image Search overstated? Possibly; possibly not. After these experiments I honestly don't feel like I have enough data to come to a conclusion either way, although it does seem like they may at least be trying to do a half decent job.


This seems like you're attributing motive to Google here, but I don't believe that's right. For example, Terry Crews appears in the query "person" because his "TIME Person of the Year 2017 Interview" article was very popular online. I get a lot of Greta Thunberg because she was TIME Person of the Year 2019 and received similar online attention because of Donald Trump.

The TL;DR of it is that google crawls the internet for photos, associates those photos with text content pulled from the caption or from the surrounding page, and gives them a popularity score based on the popularity of the page/image. There are some cleverer bits trying to label objects in the images, but it's primarily a reflection of how frequently that image is accessed and how well the text content on the page matches your query. There's some additional localization, anti-spam, and freshness rating that influences the results too.

The majority of pages with "人" and a photo on them that has a machine-labeled person image would be a photo of a Japanese/Chinese person, and if you're being localized to Japan with a VPN, that would be even more true.

Google doesn't "know" what you're trying to search. It's a giant pattern matching game that slices and dices and rearranges text to find the closest match.


> Google doesn't "know" what you're trying to search. It's a giant pattern matching game that slices and dices and rearranges text to find the closest match.

I'm not disputing that, and it certainly explains why it's "good enough" for some search queries whilst being totally gimpy for others.

My understanding was that Google does prioritise what it's classified as local search results though, on the basis that they're likely to be more relevant.


This is the problem though: all those companies are advertising fantastical results. They aren't saying "Hey! We spent billions of dollars so our algorithm could be as racist as your uncle Steve!" Oh, and by the way, Steve is now right - because all the crimes he ever finds out about are by black people, because that's what Google has decided he wants to see. So it's no longer him seeking out ways of justifying his latent racist tendencies; no, he's outsourced that to Google.


Bing results, "person" shows stick figure drawings, Pearson Education logos, Person of the Year, people named Person, etc.

"Person without stripes" shows several zebras, tigers, a horse painted like a zebra, and a bunch of people with stripes.


> "Person without stripes"

Interestingly, DuckDuckGo shows me, as the second result, an albino tiger with, you guessed it, no stripes. The page title has "[...] with NO stripes [...]" in it, so I assume that helped the algo a bit.

EDIT: I also got the painted horse (it looks spray-painted, if you ask me) and I must admit it's quite funny to look at


If you really want to be disappointed, search for [doctor] and [nurse].

Unless things have really changed, [doctor] will be mostly white men and [nurse] will be mostly white and Filipino women.

But don't blame the AI. The AI has no morality. It simply reflects and amplifies the morality of the data it was given.

And in this case the data is the entirety of human knowledge that Google knows about.

So really you can't blame anyone but society for having such deeply engrained biases.

The question to ask is does the programmer of the AI have a moral obligation to change the answer, and if so, guided by whose morality?


Those look almost entirely like stock photos or part of advertisements. It's probably just reflecting the biases of what photos other businesses like, which get the label of "doctor" or "nurse".

Any sort of image search is going to tend to be biased toward stock photos, because those images are well labeled, and often created to match things people search for.


> The AI has no morality. It simply reflects and amplifies the morality of the data it was given.

Key point right there. Unless Google is deliberately injecting racial and/or gender bias into their code, which seems extremely far fetched (to put it kindly), the real fault lies with us humans and what we choose to publish on the web.


For me, all the young doctors in the results are women; overall it's 13 women to 12 men.

Nurses it's 34 women to 5 men. Proportions of skin tones are what I'd expect to see in a city in my country.


What does the color of people's skin in search results have to do with morality? I was raised not to see color, now we have this "progressive" movement hell bent on manipulating search results to disproportionately represent minorities. If you want to filter your search results based on the color of skin you can do that easily.


What bias? Who is biased? Quick duckduckgoing indicates there are far more male than female doctors in the US. So statistically, it would be correct to return mostly male doctors in an image search. If you want a photo of a specifically gendered doctor, it's not hard to specify. Not really seeing a problem here.


> What bias? Who is biased?

I would contend that society is biased. There is no evidence that men are better doctors than women; in fact, what little study there has been suggests that women make better doctors than men (which is reflected in the more recent med school graduating classes, which are majority women).

So it's a question of what you are asking for when you search for [doctor]. Are you asking for a statistical sampling or are you asking for a set of exemplars?

> So statistically, it would be correct to return mostly male doctors in an image search.

And that's exactly it. The AI has no morality. It's doing exactly what it should, and is amplifying our existing biases.


> So really you can't blame anyone but society for having such deeply engrained biases.

You can blame statistics for that. Beyond that, you can blame genetics for slightly skewing the gender ratios of certain fields and human social behavior to amplify this gap to an extreme degree.


Honestly, I don't think morality is the issue here; it is objectively inaccurate to show only white men for the search string "doctor" when not all doctors in the U.S. are white men, and most doctors in the world are not white men. This would be like showing only canoes if someone searched "boat"--we would rightly consider that an error to be corrected.

IMO, wrapping it in a concept like "morality" because the pictures have people in them just serves to excuse the problem and obscure its (otherwise obvious) solution.


I tried this as well in an incognito window on Firefox and got the results you mentioned. I notice, however, that virtually all of the results have associated text containing the word person. It seems likely that Google image search featurizes photographs to include surrounding document context.

(That's how I would do it if I wanted more accurate rather than more general results.)


I don’t understand why AI or a search engine had to meet your or anyone’s expectations for diversity. If I searched for “shirt” and didn’t get shirt pictures in the color I wanted I would just tune my query instead.


I just did a google image search for "person". The first 5 images were of Greta Thunberg. She must be the most representative person ever.

The next few images contained Donald Trump, Terry Crews, Bill Gates and a French politician named Pierre Person.

After that, it was actually quite a varied mix of men/women and people of color/white people.

I am still not very impressed with Google's search engine in this aspect, but it is not biased in the way you suggest.

At least it is not biased that way for me. As far as I am aware, and I might be completely wrong here, Google, in part, bases its search results on your prior search history and other stored profile information. It is entirely possible that your search results say more about your online profile than about Google's engine :)


> The first 5 images were of Greta Thunberg. She must be the most representative person ever.

Well, she was the 2019 Time Person of the Year.

Likewise, Trump was the 2016 choice, and Crews and Gates have been featured as part of a group Person of the Year (“The Silence Breakers” and “The Good Samaritans” respectively).


AI can't fix society's problems. AI merely reflects them back.


4 of my top 7 images (the top line) are Greta Thunberg in a search for "person". The first viewport is 11 men, 11 women, and 1 stick person, of which 4 are Thunberg, 4 Trump, and 2 Crews. People seem to be included if they got major "person" awards like "most powerful person" or "person of the year".

There's not much diversity: assuming Terry Crews is from the USA, the whole first viewport of images is Western people; except for Ms Thunberg, they're all from the USA AFAICT [I'm in the UK].

The first non-Western person would be a Polish dude called Andrzej Person (the second person called Person in my list, after a USA dancer/actress), then Xi Jinping a few lines down. The population in my UK city is such that about 5 in 30 of my kids' classmates, in both primary and secondary school, have recent Asian (Indian/Pakistani) heritage. So, relative to our population, the results show more black people, far fewer people of Indian-subcontinent heritage, and no obviously local people.

Interesting for me is there are no boys. I see girls, men and women of various ages but no boys. 7 viewports down there's an anonymous boy in an image for "national short person day". The only other boys in the top 10 pages are [sexual and violent] crime victims.

The adjectives with thumbnails across the top are interesting too - beautiful, fake, anime, attractive, kawaii are women; short, skinny, obese, big [a hugely obese person on a scooter], cute, business are men.


Most of the person results appear to be 'Time Person of the Year' related. Another result is a guy with the last name Person. The results don't seem to be related to the definition of the word 'person'.


For me it shows all newsworthy people and articles. It shows the titles of the pages, and they are all stuff like "11 signs you are a good person". So it seems clear that there is no kind of AI bias here, but simply that high-ranking articles with the word "person" more often than not choose white men as their stock image.

Most of the very top results seem to be of trump and greta thunberg.


You've raised an entirely unrelated problem. Showing shirts with stripes when you search for "shirts without stripes" is just plain wrong. Showing only a single demographic of person when you search for "person" is correct; it just doesn't have the level of diversity you seem to want. Nothing about diversity is implied in the query, and so your observation is completely unrelated to the case of a plainly incorrect result.


On the other hand, the bias in the results means they're somewhat incorrect: there is more than one demographic of person, showing only one in response to a query that doesn't ask for a particular one is incorrect.

If you were unfamiliar with them and searched "widgets" to find out more and got widgets of a single colour and form, it would not be an unreasonable assumption that widgets are mostly (if not entirely) that shape and colour, especially if there was nothing to indicate that this was a subset of potential widgets.

It's not so much "demand for diversity" as it is "more accurate and correct representation".


A former coworker had the last name "Person". They once received a letter (snail mail) addressed to "TheirFirstName Human".

I never figured out what kind of mistake could have led to that.


Maybe a veterinarian's customer database? They would have to distinguish pet names from humans, but keep a record of all.


Yeah, that's one plausible explanation. (I don't remember the nature of the letter.)

Relatedly, one time I picked up a prescription for a cat. The cat's name was listed as CatFirstName MyLastName. They had another (human) client with that same first and last name. It turned out that on my previous visit they had "corrected" that client's record to indicate that he was a cat.


I think search algorithms still have a long way to go to really understand intention. Try image searches for "white person", "black person", "asian person", "white inventors", "black inventors", "asian inventors". They don't quite deliver what you'd expect.


Huh, I tried that with 'people' and the first result that was all white was #15; the first result that was 100% men was #8.

If I search for 'person' it's a mixed-race woman, then a white woman (Greta Thunberg), then a white man.


More than racism on the part of google[1], I would attribute that to it being a hard problem with too many dimensions. About three years ago, if you searched "white actors", google would give two full pages of only black people (I have no idea whether the actor part was correct).

Many interpreted this along tribal lines, but likely it is that there is constant tuning and lots of complex constraints.

[1] not to say that you implied the reason was racism, but often it is attributed to something along those lines


The inverse: a favorite trope of the American far right is that GIS for "american family" will show you photos of... mixed race families. (Something the far right has strong opinions on, and which makes up a tiny minority of all marriages in the US.)

Something of a corollary to Brooksian egg-manning: with an infinite number of possible searches, you can find at least one whose results do not exactly match the current demographics of the state from which you place the search.


Did they manually skew the results of the algorithm once this started making bad PR?


And when I search for "men without hats" I see men from Men Without Hats with hats. Language is hard.


DDG does pretty good for "person" or "people"


Wouldn't that be a reflection of the world's bias rather than Google's bias?


What is your point?

The google image search you did, unlike the OP's, did not provide incorrect answers.


Joke's on you. Not having diversity is now considered incorrect, even if it wasn't stated. AI needs to learn to keep up with the craving for relevance the rest of Silicon Valley has by ensuring all results comply with whatever equal opportunity mantra is now in vogue. The next time I search for "CSS color chart" I expect the preselected color to be black.


I hate comments like this that only exist to create drama.


Google American Inventors and you'll get 95% black men.


“I’m sorry, Monsieur, we are all out of cream — how about with no milk?”


In Zizek's words, white coffee without milk isn't black coffee.


Google has for years put out puff pieces talking about high accuracy on image tagging. It’s only within the last few months that searching my Photos library for “cat” returned something other than pictures of my dog.

There's a nuanced argument that practitioners know how dependent ML is on training data and how sharply accuracy tails off, but that nuance tends to be removed from anything selling to potential customers, which has not been a great way to keep them, in my experience.


I'd assume pets are hard as there are so many varieties (potentially even harder than humans). For the last few years Google Photos has correctly returned photos for a search of "Lamborghini" in my albums. I'd expect "shirt stripes" to fall into that category.


Sure, I’m not saying it’s an easy problem — just that the marketing is once again setting the field up for failure by giving the impression of human-level performance but delivering results only in very narrow scenarios.


I think a huge Chinese room type parser with a bunch of heuristics bolted on probably provides much better bang for the buck than trying to implement actual NLP (in every possible language, or even just in English). So that's probably what nearly everybody is doing.
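
A minimal sketch of such a heuristic bolt-on (Python; the negator list and the rewrite rule are my own guesses at what a heuristic might look like, not anyone's actual implementation): spot a negation word and turn the noun after it into an exclusion term, much like the `-term` operator search engines already expose.

    # Hypothetical heuristic: "shirt without stripes" -> require "shirt", exclude "stripes"
    NEGATORS = {"without", "no", "minus", "excluding"}

    def rewrite(query: str):
        include, exclude = [], []
        negate = False
        for word in query.lower().split():
            if word in NEGATORS:
                negate = True
            elif negate:
                exclude.append(word)
                negate = False
            else:
                include.append(word)
        return include, exclude

    print(rewrite("shirt without stripes"))  # (['shirt'], ['stripes'])
    print(rewrite("men without hats"))       # (['men'], ['hats']) -- wrong for the band

Cheap, language-specific, and easy to fool, but it would fix the literal case in the OP.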


Searching for "now one with stripes this time please" yielded similarly disappointing results :)

Edit: "stripes" not "stripped" ugh


Google's result is noticeably better though. :)


What's better about it? I didn't notice anything good.


I think he/she's being sarcastic - there's a general tendency to regard google's version of "ai" as better?


You mean more stylish?!


He probably meant "more spicier". The second image for "now one with stripped this time please" yielded an image linked to an article about deep-fake nudes.


Well yeah, if you spell 'striped' like that it might lead to some spicy results. :)


Does Amazon pretend to do AI? They are just offering a platform for you to do your own machine learning. I don't think they ever said their search engine was doing anything smart.

EDIT: scrap that, I didn't mean Alexa, which is doing AI obviously, but the search engine of Amazon's retail website.

Anyway, NLP is hard and everyone sucks at it. Think about it: just building something that could work with any <N1> <preposition> <N2>, or any other way to express the same request, would mean understanding the relationships of every possible combination of N1 and N2. It means building a generalized world model, which is quite different from simply applying ML to a narrow use case. Cracking that would more or less mean solving general AI, which probably won't happen soon.
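
To make that concrete, here's a toy illustration (Python; the pairs and interpretations are examples I made up): the same surface pattern needs per-pair world knowledge before a system can act on it, and the table of pairs is effectively unbounded.

    # Made-up examples: the same "<N1> without <N2>" pattern means different things.
    KNOWLEDGE = {
        ("shirt", "stripes"): "exclude a visual pattern from product attributes",
        ("coffee", "milk"):   "exclude an ingredient",
        ("men", "hats"):      "ambiguous: could be the band Men Without Hats",
    }

    def interpret(n1: str, n2: str) -> str:
        return KNOWLEDGE.get((n1, n2), "unknown pair: a real world model is needed")

    print(interpret("shirt", "stripes"))
    print(interpret("sky", "clouds"))  # -> unknown pair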


alexa, downvote this.


ML and AI are the same thing

You're right that NLP is hard, but not everyone sucks at it.


> ML and AI are the same thing

Not actually true. ML is one area of study within the field of AI. Thanks to marketing departments and slightly shoddy journalism these two things are now casually treated as equivalents, but they're really not: ML is still very much a subset of AI.


I disagree, "shirt without stripes" is an unusual word choice, not one that our ML models would be optimized for. Try "solid color shirt" and you'll see how much better the results are - at least on Google.


"Shirt without stripes" may (or may not) be an unusual word choice to enter into a search engine, but it's definitely one that a child would understand.

Additionally, "shirt without stripes" is not the same as "solid color shirt"; as an example, take a look at:

https://www.google.com/search?q=tie+dye+shirt


Quite so. "Shirt without stripes" can include shirts in plenty of patterns other than solid colours (paisley, polka dot, checked, battenberg, floral print, etc.).


Yes, that exact sequence of words isn't particularly common. And yet a child, even one who has never been exposed to it, has no problem understanding what it means.

Whereas all these services seem to be processing the input in such a superficial way that they give the searcher results that aren't just inaccurate but are the opposite of what was asked for.


> "shirt without stripes" is an unusual word choice

Lol what? These are words a toddler would understand.


>I disagree, who says "shirt without stripes"? That's an unusual word choice, not one that our ML algorithms would be optimized for.

If your "ML algorithm" doesn't understand straightforward language, how is it any better than a couple if-then statements?

Beyond that, I'm unsure how you think "<something> without <something>" is at all unusual or difficult to decipher.


Every human understands that phrase and yet the AI doesn't. That's the gap that has to be fixed.


You have to realize that search is not AI. It's pattern matching. And the string "shirt without stripes" matches really well with "shirt with stripes". The Levenshtein distance is 3.

If vendors used the term "shirt without stripes", it would match great, but they call it a "plain shirt".
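
For what it's worth, that distance checks out; here's a standard dynamic-programming Levenshtein implementation (a textbook Python version, not anything from the comment itself):

    def levenshtein(a: str, b: str) -> int:
        # prev[j] = edits needed to turn the processed prefix of a into b[:j]
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            curr = [i]
            for j, cb in enumerate(b, 1):
                curr.append(min(prev[j] + 1,                 # deletion
                                curr[j - 1] + 1,             # insertion
                                prev[j - 1] + (ca != cb)))   # substitution
            prev = curr
        return prev[-1]

    print(levenshtein("shirt with stripes", "shirt without stripes"))  # 3

The 3 edits are just inserting "o", "u", "t", which is exactly why naive string similarity pulls the two opposite queries together.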


> You have to realize that search is not AI.

Google advertises using BERT natural language models

https://blog.google/products/search/search-language-understa...

> ... but they call it "plain shirt".

Or polka dotted :)


I don't want to teach myself how to talk to a machine. I want the machine to understand what I am saying.


“Chicken without head”, “men without pants”, “sky without clouds” only work because the users uploading the images tended to tag them as such... (in that case the users do the hard coding of meaning)


I searched "plain dress shirt" and similar terms on Amazon and Google, and they returned plenty of shirts with stripes and checkered patterns.

How am I supposed to explicitly search for a shirt without stripes, then?


Just searched the same on google and the first non-plain shirt was in the second hundred. Duckduckgo was similar. Considering that they classify images according to surrounding text, that seems like a pretty good result.


I would go into a store and search there. Not everything has to be solved by tech and AI. This is a prime example of a problem that requires an insane amount of work and yet provides absolutely no value to the world.

People still think we will have self-driving cars "in two years", yet here we are talking about dumb shirts. AI winter is coming.


I have noticed in the past few years google results have become noticeably worse for similar reasons. Google used to _surprise_ me with how well it was able to understand what I was really looking for, even when I put in vague terms. I remember being shocked on several occasions when putting in half-remembered sentences, lyrics, or expressions from something I had heard years ago and it being the first! result. I almost never have this experience anymore. Instead it seems to almost always return the "dumb" result, i.e. the things I was not looking for, even when I try to avoid using clever search terms. It's almost like it is only doing basic word matching or something now. Also, usually the first page is all blogspam SEO garbage now.


Google was good at launch because it was harvesting data from webrings and directories to provide it with "high quality" link-ranking data. However, they didn't thank or credit the sites whose human curation helped their results become so impressive, or share any revenue with them. Seeing that Google search was effective, most human curators stopped curating directories and webrings. The SEO industry picked up the slack and began curating "blogs" that are junk links to junk products. This pair of outcomes led to the gradual and ongoing decay of Google's result quality.

Google has not yet discovered how to automate "is this a quality link?" evaluation, since they can't tell the difference between "an amateur who's put in 20 years and just writes haphazardly" and "an SEO professional who uses Markov-generated text to juice links". They have started to promote selected "human-curated" sources of knowledge above search results, which has resulted in incidents such as a political party's search results showing a parody image. They simply cannot evaluate trust without the data they initially harvested to make their billions, and without curation their algorithm will continue to fail.


> Google has not yet discovered how to automate "is this a quality link?"

Google has so much more data than just the keywords and searches people make; it seems like this should be a problem they could solve.

Through tracking cookies (e.g. Google Analytics) they should be able to follow a single user's session from start to finish, and they also should be able to 'rank' users in some vague way where they'd learn which users very rarely fall for ads or spend time on the sites that they know are BS. Those sites that are showing up on page 5 or 6 of the search results, but still get far more attention than others on the first few pages, could get ranked higher.

But I don't think many of Google's problems these days are technical in nature. They're caused by the MBAs now having more power at Google than the techies, and thus increasing revenue is more important than accuracy.


Theoretically, they could do a lot of things, but plenty of those would get them in hot water from a regulatory standpoint.

Also, don't underestimate the adversaries. Ranking well on Google means earning a lot of money. So much so that I'd argue the SEO people are making significantly more money than Google loses by having spammy SERPs. They will happily throw money at the problem and work around the filters. I don't think you can really select for quality by statistical measures. Google tried, and massively threw "trust" at traditional media companies and "brands". The SEO people responded by simply paying the media companies to host their content, and now they rank top 3, pay less than they did by buying links previously, and never get penalties.


Nope, all they could do there would be to group people together based on their "search behavior graph". The problem of finding BS sites is in itself a "shirt without stripes"-level hard problem. That's why being able to rely on user curation is (was?) so important for Google. People didn't curate the internet at first for money; they did it because they were interested in the subject their web ring covered.


SEO would just pay people to surf bad sites on the web, in order to feed the data into Google’s engine.

They already do this today for any venue where they can link “traffic volume” to “ranking increase without human review”.


I disagree with your first point but agree with your second. Google obsoleted most webrings/directories because PageRank was a better way of calculating a website's popularity. Then websites figured out how to game PageRank, and it's been a gradual decline ever since.
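
For anyone unfamiliar, the core of PageRank is just a power iteration over the link graph; here's a toy version (Python with numpy; the three-page graph is made up, and real PageRank adds many refinements on top):

    import numpy as np

    links = {0: [1, 2], 1: [2], 2: [0]}  # hypothetical tiny web: page -> outlinks
    n, d = len(links), 0.85              # d = damping factor

    M = np.zeros((n, n))                 # column-stochastic link matrix
    for src, outs in links.items():
        for dst in outs:
            M[dst, src] = 1 / len(outs)

    rank = np.full(n, 1 / n)
    for _ in range(50):                  # power iteration to convergence
        rank = (1 - d) / n + d * M @ rank

    print(rank.round(3))  # page 2, linked by both other pages, ranks highest

The gaming problem sits exactly here: anything that manufactures inlinks inflates the score.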


HN keeps mentioning dark SEO and companies gaming the ranking.

That might explain a lot, but I don't think it does.

Just look at how they are messing up simple searches because of a basic lack of quality controls:

- Why don't double quotes work anymore? Not because of dark SEO but because nobody cares.

- Same goes for the verbatim option.

- The last Android phone I liked was the Samsung SII, and last year I finally gave up and got the cheapest new iPhone I could get, an XR. My iPhone XR reliably does something my S3, S4, S7 Edge and at least one Samsung Note couldn't do: it just works as expected, without unreasonable delays.

- Ads. They seem to be optimized to fleece advertisers for pay-per-views, because a good number of the ads I've seen are ridiculous, especially given that I had reported those ads a number of times. I wonder what certain customers, who probably paid a lot for those impressions, would say if they knew that I had specifically tried to opt out of those ads and wasn't in the target group anyway.


The point is that the web rings and directories were an important source of good PageRank input. By killing them off, Google basically clearcut all their resources and did nothing to replant, and now the web is becoming a desert where nothing can grow.


Google was good because they used to optimise for giving the best search results; that was their aim. Now their aim is best profit, it seems, and their results appear to correspond (keep you on site longer).


An interview with the founders of Google from 1999 offers a more nuanced view on this. https://www.kalemm.com/words/googles-first-steps/

Google's aim was to replace other sources of information with Google:

> People make decisions based on information they find on the Web. So companies that are in-between people and their information are in a very powerful position

Profit was on their minds from the very beginning:

> There are a lot of benefits for us, aside from potential financial success.

Revenue, however, was not urgent back then, to them or to their VCs:

> Right now, we’re thinking about generating some revenue. We have a number of ways to doing that. One thing is we can put up some advertising.

So over the past two decades, they executed a two-pronged approach: Become indispensable and Become profitable. But now they're trying to pivot from being good at web search to being good at assisting human beings, and that's a much more difficult problem when their approach to "Become profitable" was to use algorithms rather than human beings.

Here's a useful litmus test for whether Google has succeeded at that pivot:

If you were in a foreign city and you suddenly wanted to propose marriage to your partner, would you trust Google Assistant to help you find a ring, make a dinner reservation, and ensure that the staff support the mood you want (Quiet or Loud, Private or Public)?

If so, then Google's pivot has been successful.


Your search for "skiing Norway" mostly returns results for skiing in the French Alps, because those pages have much higher visit rates.

Google is a dumbass nowadays, and regularly ignores half your search terms to present you with absolutely irrelevant results that have gotten lots of visits in the past.


Is that actually true for you? I just tried it (logged out, and with adblock) and everything on the page seems relevant.


I've noticed this too, and frequently wonder why there aren't new and better search startups...


There are, like DuckDuckGo. But the first complaint is usually "their results aren't as good as Google" and that's because Google in reality still gives better results because of their (lack of) privacy.

People want better results but don't want to be tracked, and those things are in opposition to each other.


I think the only time Google still reliably gives better results than Duckduckgo is for non-English languages (which Qwant is good for), or when Google has something indexed and Bing/DDG does not.

But taking it as a given that Google's results are better, is that really because of lack of privacy, or just because Google has been pouring more money and talent into the problem longer than anyone else? Because I'm not convinced that personal data is particularly useful for generating search results. The example they always give is determining whether a search for "jaguar" means the cat or the car. But that always seemed silly to me, because most searches are going to give extra context to disambiguate ("jaguar habitat"), and even if they don't, the user is smart enough to type "jaguar car" if they're not getting the right results. Further, Google doesn't actually know whether I'm more interested in cars or cats; it just knows that I'm a woman in college, so it guesses that I'm less interested in cars. Is that really so useful?

Does searching Google through Tor give noticeably worse results than searching google while logged in? I would be genuinely surprised if it did.


It took me something like 6 years, but I've gone over to DDG. Their results were poorer than Google's for me, so when I tried to switch I used to end up repeating and adding !g to every search. I don't think DDG got better, but Google results are bad enough that I think they're equal in quality now (for me). I don't login to Google, have tracking disabled, use uBlock and pihole; FF/Brave.


> I don't login to Google, have tracking disabled, use uBlock and pihole; FF/Brave.

I mean, that's probably why they are equivalent for you. You've chosen privacy over better results (which is a totally legit choice to make!).


Well it's hard to tell objectively but it seems to have got worse without me changing the privacy settings. I guess that's their quid pro quo though.


I concur that both the UI/UX has gotten worse and the results themselves feel less reliable.

Have you tried viewing pages past the first? Oftentimes they're just filled with what look like foreign hacker scam websites.


Yeah I guess it comes down to monetization strategy and how loyal/sympathetic the early adopters are.

It's funny because it's frequently mentioned how Google's tracking is what enables it to give such personalized search results, but often I question how effective that really is.

For instance, I question whether Google has some profile on me and shows results they _think_ I will want to see (e.g. news related), thus leaving out other results. If it works that way, then I'm frequently seeing the same websites in my results and effectively being siloed and shielded from other results that I may find interesting.

Their new strategy of adding snippets for everything has truly gone insane. I searched "covid us deaths" today and had to scroll about 3 viewport lengths down to even see the first result.

What happened to just a plain list of blue links?

From a marketing perspective, I feel like DDG needs to change its name or use a shortened alias. "Google" is an incredible word: it's easy to spell, easy to remember, and short. Interestingly, they own "duck.com"...


> There are, like DuckDuckGo. But the first complaint is usually "their results aren't as good as Google" and that's because Google in reality still gives better results because of their (lack of) privacy.

Alternative hypothesis: people only have had Google as reference for years, which means that Google represents "reality" to them. Anything that looks even slightly different is therefore worse.


I don't know about you, but 4 out of 5 times, when I re-run my DDG query (one that did not produce a single relevant result on the first page) on Google, I get a good result at #1... Maybe I search Google-optimized.

Still though: this is not evidence for Google's search quality. I, too, feel like the results have gotten worse over the last years.

Also: AFAIK, DDG uses Bing under the hood, so it's not what I would call a "search startup" in the sense of revolutionizing search quality.


IME DuckDuckGo has all the same problems people are talking about here. I use DDG as my primary search engine.


The results are correct for me. You just wanted to write a snarky comment. Did that make you feel better, and did you gloat on Twitter about how you called Google a dumbass?


I have also found that search results are getting frustratingly worse. Often even when I put in explicit search terms and quotes, and filter out words that I don't want, Google will return results that don't adhere to what I am looking for or just return no results at all. I remember when I would search for something and find much more relevant information. Now the first 5 or 6 search results are ad-sponsored and aren't relevant, but I have to go to the 3rd or 4th page to find something that matches. I also often have to search for things that were posted in the last year or less because the older postings are increasingly irrelevant.


Page 1: About 189,000 results (0.35 seconds)

Page 2: Page 2 of about 86 results (0.36 seconds)

It seems they're really just trimming the web.


I suspect their job has gotten way harder. It's easy to forget that they aren't just passively indexing. The web is basically google's adversary, with every page trying to be top ranked regardless of whether it "should" be.


I think it's disingenuous to say "the web is basically google's adversary" when Google AdWords is the reason so many pages fight for top ranking.


Well sure, but I was hoping my narrow meaning would be clear: the search team operates in an adversarial environment.


The Search team isn't some helpless independent group adrift on currents outside of their control.


If it wasn't AdWords, it'd be something else. People build websites because they want their content to be seen. That means competing for search result ranking.


That was when people built websites to deliver content. Now people build websites to get highly ranked in Google. No matter how good google's algos are, they can't win when the underlying content is just SEO'd garbage.


SEO'd garbage often contains ads, including Google ads. There's no incentive for Google to fix this problem.

Google's job is not to give you great search results, it's to keep you clicking on ads. Ideally it would be the ads on the search results page directly, but if that doesn't work then a blogspam website with Google ads is the next best thing.

If Google was a paid service this problem would be solved the next day. Oh, and Pinterest would completely disappear from Google too. :)


> If Google was a paid service this problem would be solved the next day.

Nope. Cable television was introduced with the promise of no ads. That didn't last long.


Cable television could get away with it because for a long time there was no alternative, so they could renege on the promise of no ads and still keep making money (though now people have alternatives in the form of streaming services and cable television is circling the drain).

Search engines are a relatively competitive market. A paid Google with no extra perks will not fly when the majority of people will just flee to Bing. For a paid Google to be successful it has to provide additional value such as filtering out ads, blogspam, Pinterest and other wastes of time.


Spotify & YouTube Premium run without ads just fine, so why wouldn't it work for search engines? Text is way more compressible then music and video. So at least on serving data it is way more cheaper than the aforementioned services


Because an MBA will realize that they're "leaving money on the table" and not "maximizing shareholder value" by selling ad free subscription services.

Subscription based services also require you to be authenticated and that enables fine grained invasive tracking. Something traditional media couldn't do.

If delivery costs were a factor then I shouldn't be charged $15 for an ebook with near zero distribution costs when a paperback was $5 before ebooks came onto the scene and introduced a new incentive for price gouging.


Paperbacks were never $5 unless you were looking in the bargain bin.


Pulp novels were. Even in this millennium.


I would gladly pay 7.50 euros a month for a search engine which would serve me good results without the SEO blogspam. Sadly, I think that no new pages are written without SEO bullcrap. The next best thing I think is to use the search functionality of sites themselves (like Stackoverflow, Reddit, HackerNews etc.)


Google has gotten worse for me BECAUSE of the stuff you're talking about: It used to search everything and find the words that I cared about.

Today, it will silently guess at what I want, and rewrite the query. If they have indexed pages that contain the words I put in, but don't meet their freshness/recency/goodness criteria, they will return OTHER pages with content that contains vaguely related words. "Oh, he couldn't have meant that, it's from 6 months ago, and it's niche!"

They'll even show this off by bolding the words I didn't want to search for.

So, if I'm looking for something that isn't popular -- duckduckgo it is. It doesn't do this kind of rewriting, so my queries still work.


I'm not sure what has changed, but of late I've experienced DDG engaging in a similar style of result padding and query ignoring, which has been publicly brought up in Reddit feedback (and I've certainly complained about it using their own feedback form). I've observed even double-quoted strings being ignored, seemingly arbitrarily, which is not how it was even six months ago.

I still continue to use it, though, since, as some here have already mentioned, Google's results became worse a few years ago and DDG was lean and good enough to switch. I do hope they'd consider more such feedback.


I agree with search getting worse, now I always have to add quotes around my keywords: yes, I really wanted to search for this exact thing. I also end up adding "reddit" in my query just so I can reach some genuine useful questions/responses instead of top 10 blog posts written just for the Amazon affiliate links.


Same! Or hackernews search when applicable, and sometimes even twitter is fruitful.


Imagine if it were somehow possible to pick any past release of the search engine and search with that.

