User-agent string changes (microsoft.com)
88 points by anonymfus on Nov 22, 2014 | 85 comments



UA strings need to die a horrible, horrible death. "Everyone" should just agree on a standard string and completely stop updating it with anything else (and eventually remove the standard string, in theory).

Feature detection is a major step forward. HOWEVER, sometimes a feature is "supported" but has different, incompatible behaviour between browser A and browser B. I feel like that is less common today than it was even four years ago, thanks in part to a better working relationship between browser vendors but also due to more comprehensive test suites and people spending time testing new functionality.

If they absolutely insist on keeping UA strings then let's wrap them in a JavaScript structure. Parsing them is insanely convoluted at this point (which is why many just use .Contains(), which has its own issues). Just have e.g. UA.Agent (.Name, .Version, .OS) and UA.CompatibleWith[] (.Name, .Version, .OS).
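
A minimal sketch of what that could look like if exposed to scripts (entirely hypothetical; the property names just follow the shape suggested above, nothing like this actually exists):

    // Hypothetical structured user-agent object, per the shape suggested above.
    var UA = {
      Agent: { Name: "Edge", Version: "12.0", OS: "Windows NT 10.0" },
      CompatibleWith: [
        { Name: "Chrome", Version: "36.0", OS: "Windows NT 10.0" },
        { Name: "Safari", Version: "537.36", OS: "Windows NT 10.0" }
      ]
    };

    // Consumers could then test compatibility without any string parsing:
    var claimsChrome = UA.CompatibleWith.some(function (a) {
      return a.Name === "Chrome";
    });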


I work on PageSpeed, which makes heavy use of UserAgent string parsing. I would love to stop using the UA and just leave everything up to the browser, but you can't do that without slowing things down. The key problem is that we need to know on the server, at request time, which features the browser supports.

For example, only some browsers support WebP images. We'd like to send WebP to browsers that support it, and JPEG to the rest. If you wanted to do it in the browser you'd have something like:

    <img id="hero" src="">
    <script>
       var img = document.getElementById("hero");
       if (SupportsWebP()) {  // feature-detection helper, defined elsewhere
          img.src = "/image.webp";
       } else {
          img.src = "/image.jpg";
       }
    </script>
But this is slow. Why? Because the lookahead parser can't find the image URL, so the download can't start until the regular parser gets there and has run the JavaScript. Instead we want to issue a simple:

   <img src="/image.webp">
to supporting browsers and

   <img src="/image.jpg">
otherwise. You would think this would be what the "Accept:" header would be for. Browsers that support WebP would send "Accept: image/webp". And this almost, almost works. But there are problems: Opera started sending the header when it supported lossy WebPs but before it added support for WebP lossless, Chrome v36 on iOS sent the header but didn't support WebP in data urls [1], etc.
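
A minimal sketch of what Accept-based negotiation looks like on the server (not PageSpeed's actual code; the file names are made up, and it trusts the header, which is exactly where the edge cases above bite):

    // Node.js sketch of Accept-header negotiation for WebP.
    // Assumes image.webp and image.jpg sit next to this script.
    var http = require("http");
    var fs = require("fs");

    http.createServer(function (req, res) {
      var accept = req.headers["accept"] || "";
      var wantsWebP = accept.indexOf("image/webp") !== -1;
      res.setHeader("Content-Type", wantsWebP ? "image/webp" : "image/jpeg");
      res.setHeader("Vary", "Accept"); // caches must key on the Accept header
      fs.createReadStream(wantsWebP ? "image.webp" : "image.jpg").pipe(res);
    }).listen(8080);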

WebP support is just an example; this applies to many of the optimizations we want to make that are dependent on browser support. If we want to be fast we need to maintain a mapping from UserAgent to supported features. Doesn't mean I like it, and yes it's ugly, but we do need to keep doing this.
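
A toy version of such a mapping might look like the following (the patterns and feature flags here are illustrative only; the real table, linked below, is far more involved):

    // Illustrative only: map UA patterns to the features we believe they support.
    var uaFeatures = [
      { pattern: /Chrome\/(?:2[3-9]|[3-9]\d|\d{3})/, features: { webp: true,  webpLossless: true  } },
      { pattern: /Opera\/9\.80/,                     features: { webp: true,  webpLossless: false } },
      { pattern: /MSIE [6-9]\./,                     features: { webp: false, webpLossless: false } }
    ];

    function featuresFor(ua) {
      for (var i = 0; i < uaFeatures.length; i++) {
        if (uaFeatures[i].pattern.test(ua)) return uaFeatures[i].features;
      }
      return { webp: false, webpLossless: false }; // unknown browser: safe default
    }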

(The gory details: https://code.google.com/p/modpagespeed/source/browse/trunk/s...)

[1] http://crbug.com/402514


The problem seems to be that everybody wants to avoid breaking anything. You should just serve webp to browsers that send an Accept:webp, and if those browsers get broken images that's their problem, and not the rest of the internet's problem. Website operators keep compensating for the brokenness of browsers, and then the browser vendors have no motivation to fix their shit.


This was what people said about Windows before Vista, too. "Broken apps should just break! It'll push the developers of broken apps to fix their shit!"

Then Microsoft listened to them and did that with UAC, breaking tons of apps that assumed (stupidly) that they would be running with administrator privileges all the time.

Who did the users blame for the breakage? Not the app developers. They blamed Microsoft. "It worked fine, until I installed Vista!"

The point: "just let things break" is one of those ideas that sounds 100% reasonable, until you try it.


People screw up. The people working on Chrome for iOS didn't mean to break WebP support, and when they discovered this they fixed it in the next revision. They're already plenty motivated and on top of things.

It doesn't make sense to punish visitors for the mistakes of browser makers, even if browser makers need more pressure, which they don't.

(And the situation with Opera is even trickier. They did support basic WebP and no one was even doing Accept-based detection yet. More communication would have been helpful between Chrome and Opera to coordinate what "Accept: webp" would mean, but it's not a situation punishment would improve.)


Yeah, sorry, I didn't mean to say that the browser engineers don't have other motivations to not break shit, or to fix it when it's broken, but knowing that all the websites aren't going to roll over and accept whatever the browser vendors break is a good motivation.


I would encourage our competitors to support only modern browsers and to use pure feature detection, because their solution will then work poorly or just not work at all (about 20% of our users are on IE8).

User-agent sniffing is a necessary evil, even with careful use of feature detection. I use pure JavaScript rendering on the client side, and feature detection is not enough.

Feature detection cannot detect many browser quirks: quirks with <input> elements; quirks with viewports; quirks with keyboards; quirks with focusing; quirks with position:fixed; etc.

Examples:

* We detect Samsung phones because you can't type a decimal point into a numeric input on a Samsung. Users need to input decimal numbers fast (show the numeric keyboard on focus), but we show the alpha-numeric keyboard to Samsung phones (sketched below).

* Android Chrome doesn't tell you if the backspace key was pressed (or what key was pressed in general, for good design reasons).

* Firefox has event quirks when the Escape key is pressed in an input.

* WebKit has quirks with selecting text and focus.

* IE10 (Metro mode only) displays borders weirdly with disabled inputs.
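
For the Samsung case, the workaround ends up looking something like this (a sketch; the sniffing pattern is only indicative, real device detection is messier):

    // Sketch of the Samsung decimal-input workaround described above.
    var ua = navigator.userAgent;
    var isSamsungAndroid = /Android/.test(ua) && /(SAMSUNG|SM-|GT-)/i.test(ua);

    // Assumes an <input class="amount"> on the page.
    var amount = document.querySelector("input.amount");
    if (isSamsungAndroid) {
      // Samsung's numeric keyboard omits the decimal separator,
      // so fall back to the alpha-numeric keyboard there.
      amount.type = "text";
    } else {
      amount.type = "number";
      amount.step = "any";
    }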

I admit that browser sniffing is a bad code smell. Most of the time a better solution exists that doesn't need sniffing, or the design can be changed to avoid the problem... But that is not always possible (e.g. some keyboard bugs), not always desirable (e.g. critical high-usage areas that we don't wish to compromise), and not always practical (I don't always have the free time it takes to discover the "perfect" solution).

Fundamentally, we try not to tell our customers what browser they "should" use. When they ask "we use X, will your solution work?" we try to be able to say yes, even if it makes us work harder... X was IE6 a few years ago (competitors' developers couldn't hack it), or X could be Android, or iOS4 on iPad1, or Windows Mobile (a large customer was happy to hear that we had an app in the mobile store).


Spoken like an engineer but not a business owner.

That only works until your customer spits in your face yelling "I don't want your excuses and just make it work."


> "I don't want your excuses and just make it work."

"OK, how much will you pay me? How much lost revenue does your anger represent, and is it more than the value of the time I'd lose in fixing it?"

People spitting in my face get a substantial penalty on those scores, BTW. They can make it up, but they'd better start well on the good side if they expect to.


Agreed. I'd love to use feature detection for everything, but I just can't - particularly on mobile. For instance, the iPhone has a weird caching bug with audio files - there's no feature detection for it, my only option is to sniff what browser we're on and change behaviour. Edge cases like this aren't going to disappear any time soon.


So, all the people that agree on the new "standard string" update their browsers to send that, and find themselves unable to use all the websites that are still checking for some variant of "Mozilla/..." because their administrators haven't updated them yet (i.e., the entire internet). Nice idea.


Browsers can do "broken website" detection much better than websites can do "broken browser" detection. (Few browsers with frequent updates vs. many sites with most "dead", etc.)

Internet Explorer (of all things) already had the right idea with its "previous IE" mode: if the renderer emits standards errors while parsing the site, offer to retry with an older renderer (and send that renderer's original UA.)

The newest Mobile Safari has a similar idea: if a site seems to force certain mobile viewport constraints, then offer to redo the request with a desktop UA instead of expecting the site to know how to set a sticky-session parameter of "desktopness."

In fact, Chrome's "Translate this Page" popdown might be the best example: you get a page you might not understand, along with an offer to magically fix the issue (which is nevertheless optional, because the user might speak that language after all.)

To wit: let old UA-detecting websites break. And then notice that they did, and offer to send a "compatibility UA" only in those cases. Turn a cost browsers were eating into a light punishment (but not impairment!) of the websites responsible. A carbon tax on archaic coal-powered websites, in a sense.

(And note that this wouldn't have to be work repeated by every user, or even seen by many at all. Google could build a perfect pre-seed database for this just by doing comparative search indexing using Googlebots that send differing UAs.)


    Internet Explorer (of all things) already had the right
    idea with its "previous IE" mode: if the renderer emits
    standards errors while parsing the site, offer to retry
    with an older renderer (and send that renderer's original
    UA.)
IE dropping into various modes to interpret your document can be wonderful when it works, but can also be a huge pain. I work on PageSpeed, an open source optimizing proxy, and this IE behavior keeps us from making a lot of optimizations on IE. Consider this code:

    <link rel=stylesheet href="main.css">
    <!--[if IE 7]>
       <link rel=stylesheet href="ie.css">
    <![endif]-->
    <link rel=stylesheet href="more.css">
Combining multiple CSS files into one is a great way to speed up page load times because most of the time in loading these resources is waiting for the round trip. We would like to issue:

    IE7:
      <link rel=stylesheet href="main+ie+more.css">

    Other browsers:
      <link rel=stylesheet href="main+more.css">
But IE's impersonation keeps us from doing this. When we process a page requested with an IE UA, we don't know which version of IE it will decide to be when interpreting the page. Yes, it claims to be IE10, but if it sees something to drop it into IE7 mode then we shouldn't have removed ie.css.

My favorite part of this announcement is the link [1] to where MS says this detection is going away:

    The next major version of IE will still honor document
    modes served by intranet sites, sites on the Compatibility
    View list, and when used with Enterprise Mode only. ...
    Public Internet sites will be rendered with the new Edge
    mode platform.
[1] http://blogs.msdn.com/b/ie/archive/2014/11/11/living-on-the-...


That would be the standard string. It would just be an ugly legacy string that tries to be all classic UA strings. You just stop updating it past that.

The main goal isn't to break backwards compatibility, it is to make the point that there won't be FORWARDS compatibility. That UA string is static forever.


That's what "Mozilla/..." already is and always has been, and it obviously doesn't work. All you need is for one browser manufacturer to break the rule and advertise their new feature via something new at the end, and then everyone else who continues to play by the "standard string" rules suffers.

Don't get me wrong, I am not advocating UA strings - but to get rid of them, there needs to be an approach where there isn't a punishment for following the rules.


Hell, I'd be happy if browsers sent content and platform dimension headers... so that I at least know the screen size and, if it's in a window, the content size that will be filled.... It would make it really nice to be able to do adaptive rendering of the markup.


Even removing"Mozilla/" could break UA detection that uses hardcoded string indexes to version numberz. :(


Opera from 9.0 up to 12.x had a user-agent without Mozilla. I still use Opera 12.17, and mostly only Google's sites are broken.


Interesting! I did not know that, though I think many users would complain if "only" Google's sites were broken in browser X but not Y. :)


So, the one main use case that can't be handled by feature detection or a JavaScript API is returning a mobile or desktop site without extra round-trips.

If a server can detect that you have a mobile user-agent, it can directly deliver an optimized mobile site that avoids sending lots of extra data that is only relevant for the desktop version, and avoids extra slow round-trips to fetch data later once JavaScript-based detection has had a chance to run.
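
A rough sketch of that server-side branch (the mobile regex here is deliberately crude, just to show the shape of the decision):

    // Node.js sketch: pick markup by user-agent before anything is sent.
    var http = require("http");

    function renderMobilePage()  { return "<html><!-- lean mobile markup --></html>"; }
    function renderDesktopPage() { return "<html><!-- full desktop markup --></html>"; }

    http.createServer(function (req, res) {
      var ua = req.headers["user-agent"] || "";
      var isMobile = /Mobile|Android|iPhone|iPad|Windows Phone/i.test(ua);
      res.setHeader("Vary", "User-Agent"); // keep shared caches honest
      res.setHeader("Content-Type", "text/html");
      res.end(isMobile ? renderMobilePage() : renderDesktopPage());
    }).listen(8080);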

UA strings are awful, but I haven't seen any other good solution for this problem.


Why are desktop and mobile sites so different? Why do they need to be different? Can't we just make responsive designs?

Having separate desktop and mobile sites is an anti-pattern.


I disagree, there are instances where having a completely different website for a smaller form factor is beneficial to the user.


It is! And I wish more sites would make it accessible like it used to be by just going to m.domain.tld rather than user-agent sniffing, because I often prefer the simpler mobile version I get when I spoof my user agent from the desktop.


Which is great.. until I do a google search for my closest movie theater on my phone to pull up the listings... the link to that theater's page is right at the top... "great" ... now I click the link and the site redirects me to m.theater.com/ (no path after the slash) ... now I'm stuck slogging through several pages to get to what I wanted.

This happens a lot.. there's a page on the desktop site that doesn't correspond. Responsive design still shoves a lot of extra resources at a client that the smaller devices especially don't need. Responsive rendering then loading an adaptive client in the browser is a better option... browser sniffing is the only way to do this.


A phone-optimized site will usually have less content, so why send the extra data to phones that don't need it, especially when those phones are running at lower bandwidth and are often billed per gigabyte?


I usually find it frustrating when a phone site doesn't let me see what the desktop site does. If there's content only for one form-factor, load it with AJAX.


If you are trying to serve a separate mobile website, you are doing it wrong (as of ~2 years ago). Responsive design is the modern standard. (I am a frontend developer)


It would be great if we had a header that tells the screen size.

And it would actually help solve the problem, instead of just placing another layer of band-aids above it.


Absolutely. I use a filtering proxy which among other things enables modifying request headers, and I've experimented with a ton of different UA headers over the years; and it's astounding how many sites will fail in mysterious ways if your UA doesn't match one of the ones they're expecting (perhaps this has decreased a bit now with all the diversity in browsers.)

In particular, I've noticed that having absolutely no UA header produces the most horrible results: "500 Internal Server Error"s, "we think you are a bot so you have been denied access" (403s are common), and even IP bans.

UA strings were intended only as a form of branding/advisory, but I think they're now being used in an unfairly discriminatory way. Denying access because of a "foreign" UA is almost like racial profiling.


I don't think your analogy works.

Racial profiling is when law enforcement decides to target someone based on their ethnicity (an innate characteristic) instead of their behavior.

Whether or not an HTTP client sends a user-agent header is a part of its behavior when making a request. It's fair to observe that along with other aspects of the client's behavior when deciding how to handle it.

You're probably being blocked by sites running WAFs like ModSecurity. There's a lot of crap out there, and I don't blame people for setting a low threshold for blocking unusual requests. Here's the rules which would catch a missing user-agent: https://github.com/SpiderLabs/owasp-modsecurity-crs/blob/mas...


UA strings are increasingly atrocious, but they're still very useful for targeting platforms and reducing the amount of content sent. E.g., if an iPhone browser comes to my site, the FAQ shouldn't show S40 content by default. And when an S40 user comes, the download section shouldn't link to iTunes (and their browser probably doesn't support JavaScript in a meaningful way).

It's also helpful to put a splash page up when a specific browser release breaks your site and users need to downgrade or upgrade. (Users who tweak their UA may be wrongly targeted in this case, but whatever.)


I agree with cbr, there are too many cases where feature detection either isn't possible or hinders the performance of the application.

Does anyone know if there is an authoritative database of user agent strings, their associated browsers and the features of those browsers? If this resource existed, it would be trivial to wrap it in an API that web developers could use in their apps.


The real problem is that IE & Safari don't update frequently enough, and so sometimes we have to create hacks that target certain browsers - it's unfortunate, but those browser vendors need to do a better job moving to a continually updating browser model.


IE updates every patch Tuesday. I know that in F12 tools we've shipped new features on an arbitrary Tuesday.


"UA strings need to die... just agree on a standard string... and eventually remove the standard string"


  Mozilla/5.0 (Windows NT 6.4; WOW64)
  AppleWebKit/537.36 (KHTML, like Gecko)
  Chrome/36.0.1985.143 Safari/537.36 Edge/12.0
This is getting ridiculous. Every browser is now pretending to be all browsers at the same time, and a simple version change apparently warrants an entire article.

Not sending the UA string would avoid this madness. But on the other hand, a UA string is great for analytics. Is it possible to come up with some method that discloses information about the user agent which can be used for analytical purposes but cannot be used to change the behaviour of a web page?


> Is it possible to come up with some method that discloses information about the user agent which can be used for analytical purposes but cannot be used to change the behaviour of a web page?

It's called "browser fingerprinting": https://panopticlick.eff.org/


It's useful to know the OS if you want to offer different binaries for each OS. Maybe just "BrowserName/Version (OSName/Version)" so

    Firefox/35 (Mac OS X/10.10.1)


One approach would be for browsers to send a string that is along the lines of a uuid generated at compile time, or an MD5 of the executable.

That way, analytics vendors or libraries could maintain a database that maps IDs to specific browser releases, platforms, features, etc. if they care about those things enough, but short-sighted web admins would have no hope of doing anything with primitive regex matching of the UA string.
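
Roughly the idea (the header name, IDs and lookup table here are invented purely for illustration):

    // Invented example: the browser sends an opaque build ID, e.g.
    //   User-Agent-Id: 3f2a9c0e
    // and analytics code resolves it against a maintained database.
    var buildDatabase = {
      "3f2a9c0e": { browser: "Firefox", version: "35", os: "Mac OS X 10.10", features: ["webgl"] },
      "81d04b77": { browser: "IE",      version: "11", os: "Windows 8.1",    features: ["webgl"] }
    };

    function lookupBuild(id) {
      return buildDatabase[id] || null; // unknown build: record it, don't guess
    }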


You're welcome to do just that.. md5(useragent) and keep your own definitions... be mindful that in IE in particular plugins affect the string itself. :-(


You mean a super cookie?

Not only would this violate tracking laws in some areas, but who would ever want to use such a thing?


No. Generated at the time the browser executable is built (by the vendor, linux distro, etc), and hence shared by all the people using that browser executable. So, only good for identifying the browser, platform, OS and features, and much less so the users.


Isn't this a trademark infringement? They're using AppleWebKit, Chrome, Safari to identify their product when those terms are protected from use via trademark law.

I know this isn't a new thing per se but this seems to be an especially egregious misrepresentation.


No, because this is not in any way going to cause consumer confusion, but instead is simply used for compatibility purposes.


Confusion is the measure used when trademarks aren't registered and is much harder to prove. When considering registered trademarks it's not necessary for confusion to be actually present at all - 100% of customers can know that your product isn't an iPad but if you advertise it as an iPad then you're infringing: it doesn't say "AppleWebKit compatible" in that string and the string is not required to make the browser function at all.

If, for example, sites offer alternative services to people with an Apple UA string (BBC iPlayer did this), then you're impersonating such tech use via the trademark - it might now be, for example, that people need to turn off specific iPad enhancements because they break IE11 - that's commercial interference using a trademark. The trademark isn't needed in any technical way for IE to operate as it should, so it's not a compatibility issue.

Website content owners can choose to cut out certain browsers if they wish, and impersonating another browser is fine .. but if you do that as a company utilising someone else's trademark to do so .. it just has a feeling, for me, of not sitting quite right with what I know of TM law.


I would think the courts would recognize a difference between advertising to the end user for the purpose of purchase/consumption, and advertising to 3rd parties as part of a compatibility protocol.

The UA string is not visible to the consumer, and wouldn't affect the number of people who download and install the browser.

Its only purpose is to ensure compatibility with the server after the browser has already been installed.

Also I wouldn't say the browser is actually impersonating another browser. If it was trying to impersonate, the UA string would need to match the other browser exactly and it doesn't attempt to do that. The string is still uniquely identifiable as an IE11 UA.


Is it any more of a trademark infringement than 'IBM PC Compatible'?


Sega and/or Nintendo tried to require games to display their trademarks before their system would run them; courts ruled that third parties could use the trademarks to overcome such systems.


Will anyone be tricked? No. So not infringement.


Infringement, as far as I know, is a civil matter. Who is going to sue when everyone is doing it?


I hope not. The last thing we need is a new round of lawsuits. Maybe everyone can agree on a set of rules for communicating this data.


I think the more interesting bit is that the new UA value contains all the alternative browser strings as well.

  Mozilla/5.0 (Windows NT 10.0; WOW64)
  AppleWebKit/537.36 (KHTML, like Gecko)
  Chrome/36.0.1985.143 Safari/537.36 Edge/12.0


It also reads like a history of web browsers: first there was Mozilla, then Apple made WebKit (actually KHTML), then Google created Chrome, and ... there will always be a bleeding Edge for you to cut yourself on :-)


Actually, KDE made KHTML (for the Konqueror web browser), which was then used by Apple to create and evolve WebKit, and you know the story from there. But KHTML wasn't just a code name for WebKit, it actually came from outside Apple.


But KHTML always referred to itself as 'KHTML, like Gecko' to ensure people sniffing for Gecko would be displaying the same HTML for KHTML as Gecko. This is why 'Gecko' remains in all major browsers.

Edit: Apparently Internet Explorer also has 'like Gecko'.


Actually, first there was NCSA Mosaic which came out almost a full two years before Netscape Navigator. The first 6 versions of IE were also based on Mosaic.


There were web browsers before NCSA Mosaic. I think a lot of people get the history wrong due to what are essentially marketing claims.

http://en.wikipedia.org/wiki/Web_browser

>''The first web browser was invented in 1990 by Tim Berners-Lee. It was called WorldWideWeb and was later renamed Nexus.''

>''In 1993, browser software was further innovated by Marc Andreessen with the release of Mosaic, "the world's first popular browser"''

Note the word "popular" in the claim. It was mostly true due to Mosaic being popular, but it was also intentionally misleading.


But what UA did Mosaic use?


Here's what the current Mosaic-CK (http://www.floodgap.com/retrotech/machten/mosaic/) uses:

NCSA_Mosaic/2.7ck9 (X11;Darwin 14.0.0 x86_64) libwww/2.12 modified

Homebrew users should be able to install it on OS X easily:

brew install mistydemeo/formulae/mosaic-ck

(https://github.com/mistydemeo/homebrew-formulae/blob/master/...)


I actually have a working binary here…

User-agent: NCSA_Mosaic/2.7b6 (X11;Linux 3.2.0-4-686-pae i686) libwww/2.12 modified


"NCSA Mosaic/#.# (platform)" or "NCSA_Mosaic/#.# (platform)", apparently.


Looks like they used something along the lines of "NCSA Mosaic V# (OS/Machine info)".

I'm wondering if the last one listed on this page is correct though:

    NCSA Mosaic/1.0 (compatible; NCSA Mosaic; AmigaOS 5.9; FuckYou ver.0.9.3)
From http://www.webuseragents.com/browser/ncsa-mosaic/ncsa-mosaic...


I think "FuckYou ver.0.9.3" is some obscure (possibly joke) browser claiming NCSA Mosaic/1.0 compatibility, note the "compatible;", like IE pretends to be Mozilla. That specific string is only found on webuseragents.com:

https://encrypted.google.com/search?hl=en&q=FuckYou%200.9.3#...


Not sure if you're aware but all browsers do this.

http://www.useragentstring.com/pages/Browserlist/


We'd all be better off if they just stopped sending the UA string altogether


So much this. The UA string is flawed in itself; it shouldn't even be used anymore. The fact that browser manufacturers have to include all sorts of stuff is proof that this system doesn't work.


These strings make the parser guy in me just twitch. Do they expect many, many web developers to reliably extract accurate meaning from them?


Most developers would find and use a library rather than write their own parser. They pretty reliably extract the browser/version/platform/etc from the hundreds of thousands of UA strings you'll see in the wild.

https://github.com/tobie/ua-parser

https://github.com/browscap/browscap

A user agent parser is built in to the PHP language as well.

http://php.net/manual/en/function.get-browser.php


I think it's the other way around. This happens when a field that no-one ever defined a syntax for suddenly gets important and everyone tries to extract meaning from it using arbitrary contains() and substring() acrobatics...


So it's organic and ill-defined . . . and people are parsing it. Lovely :-)


I'll post here the same thing I posted on another HN post about user agent strings [0], because it's still relevant. I'll also add that the MS article link artfully dodges the whole "we suck at UA sniffing" issue their own products have.

Asp.Net 2.0 (and up to 3.5, which is still framework 2.0) sites use UA sniffing to determine which markup to emit for many built-in controls, like Asp:Menu. When Yosemite / iOS 8 / Safari 8 was released, anyone with a SharePoint 2007/2010 site had their navigation get all screwy. I know first hand how infuriating someone's badly designed UA sniffing can be; it happened to me, and I wrote up a fix for it [1].

Why did it happen in the first place? Where did the UA sniffing fall apart?

A poorly designed RegExp looking for "60" in Safari's UA without a word boundary or EOL check. The new Safari UA includes "600" in the relevant section, and suddenly SharePoint sites were sending markup intended for (now) ancient browsers - browsers that weren't even on the compatibility list for SharePoint in the first place.
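
An illustrative version of that failure mode (not the literal ASP.NET browser definition):

    // Meant to match an ancient Safari build number starting with 60:
    var sloppy = /Safari\/60/;                        // no boundary or end-of-number check
    sloppy.test("Mozilla/5.0 ... Safari/60.3");       // true, as intended
    sloppy.test("Mozilla/5.0 ... Safari/600.1.4");    // also true: wrongly matches Safari 8

    // Anchoring the end of the number fixes it:
    var strict = /Safari\/60(\.|\s|$)/;
    strict.test("Mozilla/5.0 ... Safari/600.1.4");    // false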

UA sniffing does need to go away for determining what structure your markup sends to the user agent.

Edit: linked to earlier comment mentioned at beginning of this one

[0]: https://news.ycombinator.com/item?id=8647749

[1]: http://blog.sharepointexperience.com/2014/10/did-safari-or-i...


I always like to link to this blog post when UAs come up:

History of the browser user-agent string - http://webaim.org/blog/user-agent-string-history/


I assume this also means the internal version will be 10.0? Although they're deprecating that API anyway, and telling you to use a bunch of helper functions instead, so maybe it's not too important.


This isn't just about the UA string.

The NT kernel version in the latest builds of Windows 10 has changed from 6.4 to 10.0 (as first revealed by Chinese leaks), and this article basically confirms it.

Make of that what you will.


Before mods renamed the title to "User-agent string changes" it was "...Windows NT value in the UA string will change from 6.4 to 10.0..."


Part of the reason for the change is that they removed X-UA-Compatible. I would have suggested TridentEdgeOnly/x.x.


Microsoft. Your hubris is legendary!

Feature detection DOES NOT SOLVE THE PROBLEM!

The problem is poor, inconsistent, unreliable and incomplete adoption of features. Modernizr is no help here.

Unless Internet Explorer magically can be treated at the same level as modern browsers, we will have to find a way to target it. Without a clear way to do that, we have to use hacks. I hate hacks and workarounds. I hate Internet Explorer.


Will it be "Microsoft Internet Explorer/11.0"?


Do they support NaCl and WebRTC and WebGL and Widevine DRM like Chrome, now that they stuff "Chrome" in their User-Agent? If not, what happens if a site (poorly) detects these features from the user agent?


Why on earth would you detect those things server-side, given you can do the fallback client-side?

Also, no, they don't have those things, aside from WebGL and some form of DRM (not sure if it's the same as Chrome's). The reason they put Chrome in is for sites assuming only Chrome supports web standards (ugh).


It could easily happen client-side, with something like /Chrome/.test(navigator.userAgent).

I mean, there's probably sites using code like this already, why else would IE implement such a user-agent? But then they can't really know which Chrome feature the code is looking for.

I guess they are moving from being unfairly excluded from modern standards, to possibly overshooting their target and getting served experimental features they don't support.


Gee, just look for 'Gecko' in UA strings. That'll catch 'em.


A poorly written site will perform poorly.

The hope is that newer sites (and sites using Widevine DRM are newer, relatively speaking) aren't written as poorly as old ones. Well-written sites use feature detection, not user-agent detection. This move by Microsoft addresses problems with older sites, and IE12 supports these older features that the site doesn't believe IE can support.


Hah, if only it were "older sites". People do user agent sniffing all the time, unfortunately. Thus the massive IE Mobile user agent string.


Poorly implemented features mean you can't fully use them.

Building a website that has to work on Internet Explorer is like teaching.

It's like having to teach the class to the dumbest kid in the room. You can't take the dumb kid outside for extra instruction (because that would be browser targeting.. ).

Everyone has to wait, until the dumb kid gets it.



