
User agent overrides for top Japanese Sites - asyncwords
https://bugzilla.mozilla.org/show_bug.cgi?id=1177298
======
holygoat
Fennec dev here. Figured I'd address some of these comments.

UA overrides are not the only tool in our toolbox. They're deployed in concert
with outreach from webcompat.

UA overrides are a way of _breaking_ a cycle, not preserving it.

They make it feasible for Japanese users to use Firefox without manually
messing around with UA overrides (e.g., using Phony). And that gives us
leverage. And with leverage our excellent webcompat team can get sites to make
changes.

------
jrockway
This is certainly unmaintainable. There are two aspects I find especially
interesting.

The first is how many different ways sites determine the browser from the
user agent. You know they're all based on "guess and test": come up with an
idea for an if condition, open up the page in the 5 browsers you care about,
and see if it works. It almost reminds me of the output from a fuzzer: it's a
valid answer, but you can tell that a random number generator and a lot of
tries is what got you there.
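
That "guess and test" style might look something like this (a hypothetical sketch, not taken from any real site; the variant names are made up):

```typescript
// Hypothetical sketch of ad-hoc "guess and test" UA sniffing. Each condition
// was presumably added after eyeballing one browser, with no real model of
// how UA strings relate to each other.
function pickSiteVariant(ua: string): string {
  if (ua.indexOf("iPhone") !== -1) return "mobile";
  if (ua.indexOf("Android") !== -1 && ua.indexOf("Mobile") !== -1) return "mobile";
  if (ua.indexOf("MSIE") !== -1 || ua.indexOf("Trident") !== -1) return "legacy-ie";
  if (ua.indexOf("Chrome") !== -1) return "modern"; // also matches Edge, Opera...
  if (ua.indexOf("Firefox") !== -1) return "modern";
  return "desktop-fallback"; // everything the author never tested
}
```

Note that desktop Safari (which contains neither "Chrome" nor "Firefox") falls through to the untested fallback, which is exactly the kind of accidental behaviour this approach produces.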

The second is how many web developers seem fine writing the same website many
times: once for mobile, once for IE, once for Chrome, once for Firefox. I've
always taken the approach of doing exactly one site, and using whatever
features are available in the worst browser the client wants supported. If
extensive workarounds are needed to make the feature work in every browser, I
say skip it. (I was always happy with what IE6 could do.) Of course, when I
did web development, it was mostly boring corporate applications, not public
websites that face pressure from competitors that are willing to write a
codebase from scratch for every browser back to NCSA Mosaic. I consider myself
very lucky.

------
MichaelGG
I guess this then contributes to a feedback loop. People will check their
stats, see that Firefox isn't used, and pay less attention. Sucky position to
be in.

------
tracker1
I really wish that all phone/tablet/mobile OSes would simply include an
X-Screen-Size (Xpx, Xpx, Xcm) and X-Touch-Enabled header.

I've used detectmobilebrowsers.com as a baseline in the past, tweaked slightly
so that user agents matching "phone" get "xs" (extra small), other common
mobile OSes get "sm" (small), and desktops get "md" (medium). If you use
adaptive CSS and JS you can (and should) of course adapt when the
pre-rendered environment doesn't match.
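
A minimal sketch of that fallback classification (the patterns and size-class names here are illustrative stand-ins, not the actual detectmobilebrowsers.com regex):

```typescript
// Illustrative sketch of the xs/sm/md server-side fallback described above.
// The real detectmobilebrowsers.com regex is far longer; these patterns are
// simplified stand-ins.
type SizeClass = "xs" | "sm" | "md";

function fallbackSizeClass(ua: string): SizeClass {
  // "Phone"-ish UAs get extra-small.
  if (/iPhone|Android.+Mobile|Windows Phone/i.test(ua)) return "xs";
  // Other mobile OSes (tablets, etc.) get small.
  if (/Android|iPad|Mobile/i.test(ua)) return "sm";
  // Everything else is presumed desktop.
  return "md";
}
```

Client-side JS would then correct the class when the pre-rendered guess doesn't match the actual viewport.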

There are other modules written to predetermine a lot more, but it's all a
bit sad.

------
realusername
There are a lot of websites like this, unfortunately: Gmail, Google Maps,
YouTube, Office 365... Just browse a bit with Firefox for Android and you
will see...

------
ShaneWilton
Things like Modernizr and feature detection are great, but I'm excited to see
what happens when coeffects [1] have some more research behind them, and end
up being supported by a mainstream programming language.

The idea is that you're able to encode information about the execution
environment into the type system, so that you can do things like write
functions that depend on having access to GPS coordinates, or an audio output
device, and so on.

The theory is that this will make it easier to target a wide variety of
platforms, or devices which may restrict access to information through
permissions systems, like those offered by Android or iOS. If, at compile
time, you know you've handled all of the cases of an environmental constraint
either being met, or not being met, then you have a much stronger guarantee
that your code isn't going to unexpectedly fail spectacularly on a platform
you haven't tested against.

[1] [http://tomasp.net/blog/2014/why-coeffects-matter/](http://tomasp.net/blog/2014/why-coeffects-matter/)
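
Mainstream type systems can already approximate a small slice of this idea. As a rough TypeScript sketch (not real coeffects, just a discriminated union standing in for the concept), the compiler can force you to handle both the "capability present" and "capability absent" cases:

```typescript
// Rough stand-in for the coeffects idea: the type forces callers to handle
// both the granted and denied cases of an environmental capability. Real
// coeffect systems track this in the function's type itself; this is only
// an approximation using a discriminated union.
type Gps =
  | { kind: "granted"; coords: { lat: number; lon: number } }
  | { kind: "denied"; reason: string };

function describeLocation(gps: Gps): string {
  switch (gps.kind) {
    case "granted":
      return `at ${gps.coords.lat},${gps.coords.lon}`;
    case "denied":
      return `location unavailable (${gps.reason})`;
    // No default: the switch is exhaustive over the union, so adding a new
    // variant makes the compiler flag every unhandled site.
  }
}
```

The point is the same one the comment makes: if the compiler has checked every case of "constraint met / not met", the code can't unexpectedly blow up on an untested platform.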

------
whoopdedo
UA overrides are enabling the poor web design that necessitates them. If web
sites aren't punished for doing the wrong thing, they'll keep doing it,
requiring more overrides, which hide the bugs, and so on.

Stop this feedback loop. The client is not responsible for the server's bugs.

~~~
miketaylr
Hi, I disagree (not just because I authored the patch linked here). Web sites
are never punished. Our users are.

------
asyncwords
It's an unfortunate state of affairs when a modern browser has to spoof
another browser just to get the right content. The user agent in Microsoft's
new browser does this by default [1][2]. As a user, I'd love to see a day when
browsers don't reveal their user agents and web developers rely on feature
detection instead. I admit, though, that I haven't fully considered the
collateral damage that might come with that.

[1]: [http://stackoverflow.com/a/31279980](http://stackoverflow.com/a/31279980)

[2]: [http://blogs.windows.com/msedgedev/2015/06/17/building-a-mor...](http://blogs.windows.com/msedgedev/2015/06/17/building-a-more-interoperable-web-with-microsoft-edge/)

~~~
err4nt
Sniffing the User-Agent has been considered poor practice since 2011, in
favour of feature detection using a library like Modernizr.

I used to advocate using something like Modernizr and then writing your code
against the results, but now I think an even more straightforward approach is
to do the feature detection directly in your code. There's no sense in
loading a library and testing for things you don't need, only to have those
tests totally decoupled from the parts of your codebase that depend on the
feature detection.

It makes more sense to do only the feature detection you need right in your
codebase adjacent to the code which relies on the results.
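
A minimal sketch of that inline approach (the `supports` helper and feature names here are hypothetical illustrations, not a standard API):

```typescript
// Minimal sketch of inline feature detection: test only for the features
// this code actually uses, right next to the code that uses them.
// `supports` is a hypothetical helper, not part of any library.
function supports(
  name: string,
  scope: Record<string, unknown> = globalThis as any
): boolean {
  return typeof scope[name] !== "undefined";
}

// Example: choose a code path based on a single capability check, adjacent
// to the code that depends on it.
function smoothScrollTo(el: { scrollIntoView: (opts?: unknown) => void }): void {
  if (supports("IntersectionObserver")) {
    // Modern path: browsers with IntersectionObserver tend to support
    // smooth scrolling options too (an assumption for illustration).
    el.scrollIntoView({ behavior: "smooth" });
  } else {
    // Fallback: plain jump.
    el.scrollIntoView();
  }
}
```

The check lives next to its consumer, so removing the feature removes the test with it.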

~~~
endgame
Testing for features rather than platforms has been known to be a good idea
since the early days of autoconf (1991-1992, according to the history in its
info file). Why did the web folk take so long to figure this out?

~~~
err4nt
For the most part it didn't _matter_ until recently.

Netscape versus IE (late 90's) - features didn't matter; they just rendered
HTML and CSS differently.

Firefox versus IE (early 00's) - Firefox added a bunch of great CSS support
and things like rounded corners and PNG transparency, so once we could use
those we could just supply a polyfill for the specific IE version that needed
it. Opera could handle it already, and Netscape _was_ Firefox rebranded. Only
IE (which announced itself as IE) needed extra help.

Mobile versus Desktop (late 00's) - iPhone! Android! Tablets! This is where
things start to get a little crazy. IE will still be IE, but a bigger concern
is the separation between tiny touchscreen devices and massive desktop
computers browsing the same website.

Mobile versus Mobile (early 10's) - IE never says it's IE; we have smart
watches, phones, phablets, tablets, netbooks, notebooks, and still desktops.
There are Firefox and Chrome, which run on Mac, Windows, Linux, iOS, and
Android; there's Safari, which runs on OS X, Windows (an old version), and
iOS; there's IE, of which versions 8, 9, 10, 11 and the new ones are in
circulation; and a handful of other browsers, like the Android browser, that
kind of gave up a long time ago but are still used.

I'm sure backend software was rife with feature detection for the OSes it ran
on (Red Hat versus CentOS, special support for IIS, etc.), but until things
exploded after the iPhone was released in 2007, the web had very predictable
deficiencies that could be addressed more directly than by feature detection.

I can remember, as a Linux user, there were plenty of websites that would
only let 'approved' User-Agents in, because they would rather you NOT see
their site than see it in a browser they didn't support. On Linux I often
didn't have access to IE or Netscape, so I would use Firefox or Konqueror to
spoof a different User-Agent. Nintendo.com used to be like this, among
others.

