Is Google Burying Firefox With User Agent Strings? (thepowerbase.com)
149 points by adito on May 6, 2012 | 65 comments

I've been using Ubuntu for years with both Chrome and Firefox and I've never seen issues like Firefox getting higher connection resets. I'm happy to ask people at Google to dig into this report, but I have to say this sounds completely off base to me. Google wants Google.com to work well with Firefox.

Right, it would make absolutely no sense for Google to try to make its services work less well on Firefox. Everybody using Google on Firefox is just as much a win for Google as everybody using Google on Chrome; the only thing they get from their own web browser is not having to pay Mozilla to make Google the default search engine.

A long time ago, I went through a period where I was getting a lot of resets. But now, I'm just fine.

Perhaps it's just a problem with the particular machines he's connecting to? I use Firefox almost exclusively and I haven't seen any resets for ages now.

I've been using Firefox and Chrome on Linux for 5 years already, and I haven't noticed such an error. I hope, and don't expect, that Google would ever pull such a trick on Firefox.

AFAIK, the chrome user agent switcher extension only changes the agent on a JS level. You can't change what is being sent to the server due to limitations in the extension API.

This explains the AdSense warning, because the server-side part of the framework used (GWT) is seeing a different user agent than what the client part is seeing.

Granted, UA sniffing is bad practice on either client or server side, but if you do it and send content tailored for browser A and then see that, strangely, the client is actually browser B, then you are probably allowed to be confused and complain (better than failing in strange ways).

Also, knowing that, it's unlikely that something on the server side is causing these connection reset issues because, as I just said, the server still sees a Chrome user agent, and producing a connection reset error (RST packet) requires involvement at the connection level (the server or something in between, but never the client browser, barring bugs).

In general: be very careful about which error messages you're seeing: the AdSense error is different from the Gmail error, which in turn is (likely) different from the connection reset issue.

Overall there is too much conflicting information to attribute malice or even just intent to Google here.

If I were in that user's position, I'd check my firewall and/or proxy configuration (and try disabling HTTP pipelining if it's active in Firefox - it's disabled by default for a reason), as the problem is much more likely somewhere over there.

It's possible that it hasn't been updated yet, but extensions can now change the user agent sent to the server.
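The core of what such an extension does can be sketched as a pure function over the outgoing headers. This is a minimal sketch, not real extension code: in practice the function would run inside a `chrome.webRequest.onBeforeSendHeaders` listener, and the `FAKE_UA` string and function name are illustrative.

```javascript
// Illustrative UA string; not the poster's exact one.
const FAKE_UA =
  'Mozilla/5.0 (X11; Linux x86_64; rv:12.0) Gecko/20100101 Firefox/12.0';

// The webRequest API hands the listener an array of {name, value} header
// pairs; returning a modified copy changes what actually goes on the wire.
function rewriteUserAgent(requestHeaders) {
  return requestHeaders.map(function (header) {
    return header.name.toLowerCase() === 'user-agent'
      ? { name: header.name, value: FAKE_UA }
      : header;
  });
}
```

Note that only the User-Agent header is touched; everything else the browser sends (Accept headers, cookies, and so on) still looks like Chrome, which is one reason this kind of spoofing is detectable server-side.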


Note, however, that client-side UA sniffing could result in different requests getting sent to the server, from which subsequent wackiness might ensue...

Having used GWT in the past, the GWT error makes sense. When loading a GWT application, GWT first sniffs the user agent and locale. It then downloads a different javascript file, optimized for the given user agent and locale.

For example, the javascript file served to a French user using Firefox will contain only French labels, and won't contain any code to deal with IE, Chrome...

User agent sniffing is not always good practice, but, in this case, it results in a nice performance improvement :) You can even use it to have a "production" and a "debug" version of your app. By adding "?debug=true" to your URL, GWT would load the "debug" version, which could for example contain code to log information.
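The selection step described above can be sketched roughly like this. It's a hedged sketch only: the permutation file names are invented for illustration, and real GWT deferred binding is considerably more involved.

```javascript
// Rough sketch of GWT-style deferred-binding selection: sniff the UA once
// at bootstrap, then load only the script permutation compiled for that
// browser. File names here are made up for illustration.
function pickPermutation(userAgent) {
  // Firefox UAs contain a "Gecko/<date>" token; WebKit browsers only say
  // "like Gecko" (no slash), so this check doesn't match them.
  if (/Gecko\//.test(userAgent) && /Firefox/.test(userAgent)) {
    return 'app.gecko.cache.js';
  }
  if (/WebKit/.test(userAgent)) {
    return 'app.safari.cache.js';
  }
  if (/MSIE/.test(userAgent)) {
    return 'app.ie.cache.js';
  }
  return 'app.generic.cache.js';
}
```

Swap the client's UA string while the server-side view stays unchanged and the two halves disagree, which is exactly the kind of mismatch that produces the AdSense warning mentioned upthread.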


I would be surprised to see an internet company as big as Google screw up pipelining. How likely is that?

Not likely. But any proxy server in between might very well have, including a transparent proxy on the ISP's side that the original poster knows nothing about.

Could it be that this is an innocent mistake, or a bug in the way Google's servers are sending HTTP? Chrome wouldn't be affected, as it will be using SPDY for all Google services. As soon as Chrome was switched to the Firefox user agent it started using plain HTTP again and hit the same bug.

Besides, this just doesn't make sense. If this was an attempt to make Firefox look bad then it's a dreadful one. This just serves to make Google services look faulty as Firefox will still work for everything else. Because of this I doubt there is any malice behind this and it is just a bug.

Firefox has SPDY support as of a version or two ago and uses SPDY on Google sites. I seem to recall that feature being on by default, even.

SPDY isn't enabled by default until Firefox 13 (currently on the beta channel, scheduled for release in four weeks).

I wonder if that's what this is, though: Ubuntu LTS having modified the FF build they ship to use SPDY by default (given the LTS is supposed to be supported for 5 years, they may have chosen to jump the gun on that feature), and the SPDY support in the bundled FF isn't complete?

I've been pretty consistent in my Firefox use the past few years (probably >80% of my browsing is in Firefox), and I've never encountered this. Ever. I'm doing nearly all of this from OSX, with occasional browsing from FF and IE in a Windows VM or FF in a Linux VM.

Perhaps this is more to do with a Linux string being picked up vs the Firefox aspect?

I've almost exclusively been using Linux on Firefox as my main browser for the last 5 years. I also use pretty much every Google service under the sun, and I've never seen any of these problems.

I use FF on Linux for almost everything and I've never encountered this either.

I've also used nothing but Firefox for the past few years, and I'm a Linux user. I've never experienced anything like the author describes.

I use firefox and chrome and experience these problems intermittently but only under firefox.

Another possibility:

Ubuntu Firefox is Mozilla's Firefox coupled with arbitrary modifications by the Ubuntu developers. In principle I respect Ubuntu's position and desire to make things right by their users.

But in practice, having been in the same relationship as an author of Chrome for Linux I can tell you that it's always dangerous to have people who aren't browser developers make modifications to a browser. More than once the Ubuntu Chromium packager made changes to Chrome that were harmful for users because they didn't understand the consequences of their changes.

Maintaining Chromium packages on Linux is a nightmare. Here's the line count for different browser packages on Gentoo:

  408 chromium/chromium-20.0.1123.4.ebuild
  349 firefox/firefox-12.0.ebuild
   42 rekonq/rekonq-0.9.2.ebuild
   98 midori/midori-0.4.5.ebuild
   90 epiphany/epiphany-3.4.1.ebuild
   61 conkeror/conkeror-1.0_pre20120223.ebuild
   64 dillo/dillo-3.0.2.ebuild

The interesting question isn't the raw number, it's what those 408 lines are doing - it's entirely possible that half of that could go upstream.

For some years, lots of vendors had patches to mess around with perl's default @INC order, all of which were distro-specific and not really generalisable, because nobody had bothered to submit the upstream change that would have got rid of that whole class of patches.

Now, I suspect the reasons for that may have been to do with perl's then rather slower pace of releases, so people didn't see the point when perl5 version 10 was apparently not getting any closer, but just because that reason doesn't apply to chromium doesn't mean there isn't a similarly convincing one for local patches.

It also doesn't mean there -is-; I've always been fond of local patches to software being accompanied by a distro bug, and preferably an upstream bug explaining the rationale. It may not avoid the need for the patch, but it at least means the conversation that led to it is both visible and involved the right people.

Looking through the ebuild it looks like a decent amount of it is related to keeping it from using bundled libraries instead of the system ones when building. After that it looks like most of the rest of the complexity is related to getting things installed to the proper places.

So, it's of the same order of difficulty as Firefox? Is this surprising?

For a modern codebase, yes.

I think this is precisely it. I've had this same problem and others on Linux Mint. I wasn't able to fix them until I built FF from source.

I would also suspect there could be something amiss in his ISP or network path to affected sites.

Perhaps changing the User-Agent is making a difference, but not in the way he expects. For example, his 'Firefox' User-Agent is 77 characters long, while his 'Chrome' User-Agent is 106. That difference might make some packets more likely to hit a size that triggers a problem somewhere on the path. (Or, the string or its size might be triggering different handling in some transparent proxy.)
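To make the size point concrete with era-appropriate strings (these are representative 2012-era UAs, not the poster's exact ones, so the lengths differ slightly from the 77/106 quoted):

```javascript
// Representative 2012-era UA strings; not the poster's exact ones.
const firefoxUA =
  'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:12.0) Gecko/20100101 Firefox/12.0';
const chromeUA =
  'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.19 (KHTML, like Gecko) ' +
  'Chrome/18.0.1025.168 Safari/535.19';

// The Chrome string is roughly 30 bytes longer, which shifts the size of
// every request that carries it.
console.log(firefoxUA.length, chromeUA.length);
```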

It could just be connection problems/wifi playing up/whatever, badly timed to coincide with the tests. I'm not sure this is much evidence really... Google previously have been all for an open and competitive browser market (part of the reason they started Chrome), they're not Microsoft in that regard, and I doubt they have some code somewhere along the lines of "if (!chrome) redirect_to_the_pentium_ii_in_the_basement();"

> if (!chrome) redirect_to_the_pentium_ii_in_the_basement();

I believe that code might go something more like

> if (!SPDY-capable) redirect_to_regular_somewhat_overloaded_http_load_balancers_without_resetting_client_ttl_value();

Even without this bug it's a stretch to think that Google would want to bury Firefox. Google has a vested interest in Firefox's success. That's why they pay Mozilla $300 million a year.


I'm not on the side of thinking Google is doing anything malicious here, but it's worth pointing out that their paying Mozilla doesn't necessarily mean they wouldn't want to see Firefox die.

They pay that money not as a charitable donation but because the deal makes them more in advertising than it costs. From a monetary point of view, of course it would be better for them if all that advertising came from Chrome, so they still make the money but they don't have to pay $300m because they already own the browser.

I never get any errors with Firefox on Windows, so this was an interesting article. I'll have to try it out some more on my Linux box.

However, I got loads of "Connection was reset" when trying to use the powerbase site where this article was. Server seems very very slow or something is wrong with it.

I get errors with Firefox on Ubuntu, but never on Windows...

Off topic: the pagination with blogs/news sites articles is total nonsense...

> pagination with blogs/news sites articles is total nonsense

Furthering the derail: Increases ad-space. Makes perfect simple sense to me, though I may not agree with it all the time (or ever).

I used to get the gmail message all the time with firefox on linux. My solution was to use chromium for gmail and firefox for the rest.

I can't even get to it. Interested to know what this is about - for the past week I haven't been able to get to pages via my iPad 2 Safari search - it goes straight back to the original search listing. I switch to "classic" mode and it's all OK.

if (document.querySelector('h1').innerHTML.match(/^Is .*\?$/)) { console.log('Probably not.') }

TypeError: Cannot read property 'innerHTML' of null

You should use UA sniffing to fix this bug.

Harr. :)


if (document.querySelector('.title a').innerHTML.match(/^Is .*\?$/)) { console.log('Probably not.') }

(HN's markup is terrible, it doesn't use headers)

Packet capture or it didn't happen.

(OK, that's the cranky developer-speak for "If you can reproduce the problem while taking a packet capture I'd be glad to help troubleshoot".)

I don't work at Google or on Chromium but I can take a look at it and pass it along if it looks like something out of the ordinary. My email is in my profile.

I ran into similar problems at work, where I'm behind a proxy and my connection would time out to sites like Gmail, etc. I think it had something to do with cookies. When I revisited a site that had timed out, but in incognito mode, I was able to see it instantly.

I see similar errors very often. I use Google Chrome on OS X.

I also have a mobile broadband dongle (UK - T Mobile) which does weird and unpleasant things to the connection. (All images are proxied with poor quality versions, javascript is inserted into the page asking for key combos to improve image quality; all alt tags are re-worded, etc.)

I blame any sub-optimality on the shitty broadband from T Mobile and the weird proxies; then on overload from HN, then on errors I've made.

For the record, the article poses a simple question and makes light of the fact that the results are inconclusive. It should be taken with a grain of salt, as the author intended.

Mozilla turn a significant profit through Google referrals, last time I looked. The 'splintered' browser market masks a fairly comfortable arrangement for both companies, as well as Opera. Don't assume a conspiracy where incompetence or poor fortune provides a better explanation - chances are the Gecko-optimised version you were loading had a minor bug, or there's a problem with your system. Perhaps you encountered some A/B testing gone wrong?

This article, with its black text on dark grey background, is incredibly hard for me to read. It's almost like the author didn't want it to be read.

Doubt it. Tiny results set, huge accusations.

Works fine for me, Chrome & FF on Windows.

I wonder if the problem goes away if SPDY is enabled on firefox (it ships off by default in v12)?

Note that not all servers are equal.

What does that mean? Well, depending on your region, ISP, and a bit of luck, you'll hit different Google servers, at different places, etc.

Some of them have different things, some of them have new updates others don't, etc.

Which may be an explanation for the author having issues (then again it's just ONE possibility).

Specifically, it could be that regular HTTP was failing and not SPDY, for example.

Personally I haven't had that either.

I've had the same problem while loading Gmail in Debian, but I blamed it on the nightly build without checking.

However, Firefox nightly builds on Windows work without a hitch. I am suspecting some firefox+linux combination messing things up. My two cents.

Try it on Windows; it might give more leads...

On a related note, do not use user agent sniffing. Use feature detection. Sniffing is fragile and will be immediately out of date when browsers upgrade / add new features.

You shouldn't care which browser it is, you should care what features it supports (or doesn't).
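As a minimal sketch of the difference (the helper names and the objects passed in are made up for illustration):

```javascript
// Feature detection: ask whether the capability exists, not which browser
// claims to be running. Works for any document-shaped object.
function supportsQuerySelector(doc) {
  return typeof doc.querySelector === 'function';
}

// UA sniffing, by contrast, breaks as soon as a new browser ships the
// feature - or an extension rewrites the UA string, as in this very story.
function sniffsAsFirefox(userAgent) {
  return /Firefox\//.test(userAgent);
}
```

The feature check keeps working no matter what the User-Agent header says; the regex only tells you what the browser claims to be.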

What if you change Chrome's UA string to Firefox's?

He did, on the second page, and problems started happening.

Whoops, sorry, I must have missed that part.

That question is answered in the second part. Changing Chrome's UA string to anything other than default causes errors, so the browsing speed can't be tested.

Oops, didn't notice the second page myself.

TL;DR: No.

Google search often breaks for me when behind a proxy, maybe for similar reasons (sending different content based on the user agent). I don't know whether proxies cache documents separately based on the User-Agent, but I regularly get a broken instant search (no results at all, even after pressing enter).

It's not all the time and I haven't investigated to find out what the problem is... Bing search always works so I just use that.

Does this mean that Google has given up completely on net neutrality? This is a very bad notion.

"Does this mean that Google has given up completely on net neutrality? This is a very bad notion."

Wow, break out the tinfoil hats.

This article has nothing to do with net neutrality.

I think it has a lot to do with it. Maybe not directly, but at the heart of it, net neutrality is (in my opinion) about dividing resources equally.

Net neutrality is about ISPs.

Point taken.
