Google Now Forces Edge Preview Users to Use Chrome for the Modern YouTube (thurrott.com)
250 points by TiredOfLife on May 28, 2019 | hide | past | favorite | 200 comments


Most likely a user-agent check to ensure compatibility. I don't assume any ill intent.

That being said, what does annoy me is that we've known for years that feature detection is preferable to user-agent detection, and I would kind of expect one of the biggest web development companies in the world to be following best practices.

This isn't my local pizza shop, Google engineers should know better. Have they already forgotten the dumb "best viewed in IE" banners that magically vanished as soon as the user-agent changed? Heck, even more recently the original version of Edge had to literally start lying to websites about its user-agent just so they would stop serving alternative IE hacks. Yeah it's easier in the short term to do browser-detection and whitelisting for rendering quirks and bugs -- it's just unreliable and error-prone over the long term, and hurts the overall web browser ecosystem.

Google should be setting an example for ordinary developers here, not encouraging hacky shortcuts. If Google isn't going to take the time to think about progressive enhancement from the very start of their experiments, who else is going to?
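
(For anyone who hasn't seen the two approaches side by side, a toy sketch; the helper functions are made up:)

    // Feature detection asks "can you do X?" and adapts:
    if ('IntersectionObserver' in window) {
      enableLazyLoading();              // hypothetical helper
    } else {
      loadPolyfillThenEnable();         // hypothetical helper
    }

    // UA sniffing asks "who are you?" and guesses from there:
    if (/Chrome\//.test(navigator.userAgent)) {
      enableLazyLoading();              // breaks for forks, new browsers, spoofed UAs
    }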


Disclaimer: I work at Google but not on YouTube, also speaking just personally.

This isn’t practical at all in a latency sensitive environment. Feature detection works once JavaScript has loaded on the page, which means we either have to serve you a giant bundle of stuff you may not be able to use, or we have to degrade the experience of the latest browsers by detecting features and fetching more HTML/CSS/JavaScript once we know they will be able to handle it.

Feature detection absolutely is used for some things, but it's unreasonable from a latency perspective to serve the bundle of all possible sites and make all these decisions on the client. We aren't talking about polyfills here. A lot of these features mean we are shipping a totally different layout because this WebKit uses the 2009 flexbox syntax, or mobile Safari 8 has a layout bug, or IE has a quirk around event bubbling in video elements, etc.

Feature detection stops being better when a) your supported browser list stretches into the early 2000s around the world and b) you are in a latency sensitive environment where it is unacceptable to degrade performance of the latest browsers. If you aren’t bound by those constraints, or the detection has a cheap fallback so it isn’t meaningfully impacting latency, I agree with feature detection.


The modern version of Gmail has a loading bar that takes about 3 seconds to complete on every single page refresh.

This is subjective, and it's a bit unfair of me to bring up, but my experience is that whatever Google is optimizing for, it doesn't seem to be latency. At least not on Firefox for any of the products I use.


"Google" encompasses many different projects and teams. What does Gmail loading time have to do with what the GP said?


Because the GP said "I work at Google but not on YouTube", and was speaking of Google and the sorts of technical challenges Google has in general, not any specific product.

So danShumway pointed out that in one flagship product, they didn't seem worried about latency, so it could be argued that avoiding user-agent detection is more important than latency on youtube too.

But yeah, there are lots of competing things going on, I found both comments helpful.


> Because the GP said "I work at Google but not on YouTube"

That was a disclosure, not a credential. My reading of it was that GP was speaking of the tradeoffs in their capacity as an engineer, and said they are a Googler for fair disclosure and nothing more.


They were explicitly not talking about YouTube specifically, but generally about the kinds of challenges and tradeoffs Google faces, which is why the reply also brought up products other than YouTube in the context of the kinds of tradeoffs Google has been known to make. That's what it has to do with what the original GP said, to answer your question directly.

Anyway, I found both original GP and the reply useful, and do not find this exchange to be, so I'll stop!


That's a very different product. Gmail basically never needs to refresh the page - you go through that loading bar once in the morning and there's no real reason to see it again the rest of the day. Every action in Gmail other than certain preference changes happens on the same page.

YouTube is reloading the page on almost every action you take. Latency is a completely different priority there.


> "YouTube is reloading the page on almost every action you take"

The current Polymer version of Youtube.com does not reload the page. It's a very heavy JS single-page-application. It's faster than Gmail but still slow.


What is it that Gmail is doing now that it wasn't doing when there was no perceptible loading time?


Attracting hipster framework devs to its internal team. Wouldn't you say 3 seconds of wait x the global Gmail user population x every day is totally worth it?


> YouTube is reloading the page on almost every action you take.

Correct me if I'm wrong, but I was pretty sure that Youtube was a Single Page Application. A while back I wrote some user scripts for it that were heavily reliant on MutationObserver because it was the only way I could detect page navigation.
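
Roughly this pattern, if memory serves (a minimal sketch, not the actual script):

    let lastUrl = location.href;
    new MutationObserver(() => {
      if (location.href !== lastUrl) {
        lastUrl = location.href;
        onYouTubeNavigate(lastUrl);   // hypothetical handler for "page" changes
      }
    }).observe(document.body, { childList: true, subtree: true });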


YouTube built spfjs for loading pages via "Structured Page Fragments": https://youtube.github.io/spfjs/


Apart from the excellent reasons other people have given, it's important to realize that there is no entity that is optimizing all of these disparate pieces of software with some common utility function. Chrome and GMail are completely different products with very very different priorities. Just because they are owned by Google means nothing for this particular comparison.


Ironically, the 'slow browser' HTML version is faster, cleaner, easier on the eyes, and does all the same tricks as the in-vogue UI version.


> This isn’t practical at all in a latency sensitive environment.

Other than Google Streaming / Stadia and some AMP sites I have never seen a Google site / app load in less than 2 seconds or so (which is kinda slow, all things considered). So I'm a little confused about this latency sensitive environment statement.

Sure, what you say is correct in that using agent strings can get an optimized bundle to the browser faster, but aren't there far better optimizations that Google could make first? I've seen and worked on very fast, sub-second web apps before, so this argument feels more speculative than well reasoned.

Besides, you shouldn't be gating _so hard_ with agent strings anyway. The non-Chromium version of Edge is probably fine on YouTube as well. You likely want to gate away the obvious offenders, assume the best with the newer browsers but allow a fallback option that detects a few things on load so you can adjust and set cookies for the next time.


I wish you worked with me so I could pore over the metrics with you and try some experiments. I am not being flippant - I am genuinely interested in working with people who want to think through these hard problems. If you are at all interested in coffee and a potential referral, let me know.


I'm not the parent poster to your comment, but I'm genuinely curious: What is it about metrics that makes breaking a browser like Edge unavoidable?


Yeah, performance is a hard issue especially with well established applications.

Experimentation is fun. Good luck, wish I could help you more :)


youtube.com takes a bit over 3 seconds to load for me in Chrome with a warm cache. Minimizing latency is clearly not a top priority.


It's only unreasonable if you feature-detect on every load. Why not store feature-detected capabilities in a cookie that's passed on subsequent requests?
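
Something like this, as a rough sketch (the capability names are made up):

    if (!document.cookie.includes('caps=')) {
      const caps = [
        'IntersectionObserver' in window ? 'io' : '',
        CSS.supports('display', 'grid') ? 'grid' : '',
      ].filter(Boolean).join('-');
      // The server reads this on the next request and picks a bundle.
      document.cookie = 'caps=' + caps + '; max-age=2592000; path=/';
    }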


If you think about it, that's kinda what is happening, except instead of a punishing first load that appears broken and slow every time the user loses the cookie or we change our feature-detection code, we do that work ahead of time and the "cookie" is in a map where the key is the UA.


Interesting, so to be clear, the sequence looks like this:

1. User goes to website, UA communicates previously known features for this browser.

2. Browser does feature detection after load

3. Browser communicates any new browser features up to a backend service, crowd analysis determines if new features have been implemented for this UA.

If that's the case, that's a pretty awesome way of keying features, but I'd have to think about the implications in practice.


No, I haven't seen anything that fancy. The "user" in step 1 is internal QA testing the site on the browser, maybe after seeing it a lot in logs. Exposing users with novel UA strings to a bunch of feature-detection JavaScript is an interesting idea, but it's probably better to log them and then do this in a QA setting.


like this proposal


> Feature detection stops being better when a) your supported browser list stretches into the early 2000s around the world and b) you are in a latency sensitive environment where it is unacceptable to degrade performance of the latest browsers.

So whitelist "safe browsers" with server-side user-agent sniffing and send the rest a bigger bundle with feature detection. There is no reason not to serve users of modern browsers the latest features just because you fail to detect them on the server.
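
In pseudo-server terms, a sketch of what I mean (the allowlist patterns and bundle names are made up):

    const ALLOWLIST = [/Chrome\/7\d/, /Firefox\/6\d/];   // QA-ed user agents, toy patterns

    function pickBundle(userAgent) {
      const known = ALLOWLIST.some(re => re.test(userAgent));
      return known
        ? 'slim-modern.js'          // optimised, no runtime probing
        : 'feature-detecting.js';   // bigger bundle that probes and degrades gracefully
    }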


This works great until browser vendors that aren't whitelisted but support the features of a whitelisted browser (e.g. Chromium derivatives) realize that they're paying extra costs they don't have to, and change their user agents to a whitelisted browser...

Posted from Chrome-

> Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36


It is better to get the feature along with a bigger bundle than not get the feature at all even though your browser supports it.

Note the parent is claiming that feature detection is not feasible for a site like YouTube because of the extra cost. Here I show how you can still bundle optimally for whitelisted (i.e. QA-ed) user-agents while not blocking non-whitelisted user-agents from accessing these features, and still not break the page for user-agents that don't support them.


I think disabling all the user snooping would help reduce the latency too. Just saying :)


Nearly 15 years of youtube and the only new feature I've noticed that isn't directly copied from the unusually innovative explicit video sites are the ads polluting every video, 4 minutes apart. Brilliant engineering, really, the best talent working on this.

As an aside: is it possible to go back to a 2005 version of the site? I bet that site absolutely flies on modern hardware, and all I really want is the search bar and player anyway.


Well, I do think that delivering all the YT features to users on a global scale is an engineering challenge. It's just that I would much prefer they didn't actually succeed at doing that, especially when it comes to a few of those so-called "features".

>As an aside: is it possible to go back to a 2005 version of the site? I bet that site absolutely flies on modern hardware, and all I really want is the search bar and player anyway.

On one hand, I dislike the SV venture capital thing of throwing money at companies and products that continue to rack up giant losses simply because they're growing like crazy. On the other hand, YT would probably never have been a global phenomenon if it didn't run at a loss during the initial years and if Google hadn't acquired it. It is expected that Google is looking for a return on investment of their 6 or whatever billion dollars they paid. It's just too bad that Google is an advertising company and all they can do is shove ads and crapify the experience.

Aside: I find it hard to criticize Google without sounding like I'm personally antagonistic towards the engineers who work there.


Every single Google consumer app has gotten more bloated and added seconds in load time. What exactly are the metrics being used to measure UX and latency? What Google service loads faster than HN?


I mean, HN isn’t an apples to apples comparison at all. I bet Google search (where I work) loads much faster when we serve 10 blue links to an older phone or IE6, and that page is at least an order of magnitude more complex to assemble. When we serve you a live stock ticker, video results and carousel of relevant tweets though it’ll take a few more milliseconds to get that data.

My point isn’t that latency is the only thing worth optimizing for, it’s that you can spend your latency budget on something like feature detection, or, adding new features and better UX for newer browsers.


I should've been more specific. Other than the search page (which obviously is meant to be fast), the actual consumer apps are all becoming slower: Gmail, Maps, Youtube, Play, etc.

This is a well-known complaint across many users with all sorts of devices and connections. I understand there are budgets and tradeoffs but the general sentiment is that Google is making the wrong decisions here.


>This isn’t practical at all in a latency sensitive environment. Feature detection works once JavaScript has loaded on the page, which means we either have to serve you a giant bundle of stuff you may not be able to use, or we have to degrade the experience of the latest browsers by detecting features and fetching more HTML/CSS/JavaScript once we know they will be able to handle it.

It might sound naive, but why would fetching more JS later degrade the experience? I thought that bundle splitting was an accepted good practice. For example, you have your main core bundle, then some polyfill bundles (that get downloaded if the feature detector says it is needed), and then you have separate bundles for obscure and not-that-often-used stuff that gets loaded on demand. It is a genuine question, and I would like to learn what's wrong with this approach, because that's something I work with on a day-to-day basis.

P.S. Please don't go into the negatives of bundle splitting every little thing into its own bundle that gets loaded on demand, I am aware of downsides of that. I was talking about more sensible approach to bundle splitting (which I described in a somewhat oversimplified way).
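
For concreteness, the kind of splitting I mean, oversimplified (the paths and startApp are made up):

    import('./core.js').then(startApp);        // always needed

    if (!('fetch' in window)) {
      import('./polyfills/fetch.js');          // polyfill bundle, only when the feature is missing
    }

    document.getElementById('open-editor')     // obscure feature, loaded on demand
      .addEventListener('click', () => import('./advanced-editor.js'));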


>> It might sound naive, but why would fetching more JS later degrade the experience? I thought that bundle splitting was an accepted good practice. For example, you have your main core bundle, then some polyfill bundles (that get downloaded if the feature detector says it is needed), and then you have separate bundles for obscure and not-that-often-used stuff that gets loaded on demand. It is a genuine question, and I would like to learn what's wrong with this approach, because that's something I work with on a day-to-day basis.

There isn’t anything wrong with this approach at all - keep doing this! It just doesn’t solve the initial load problem. This kind of code splitting _also_ happens on all google properties I am aware of. The problem here is what gets into the initial bundle, and is it enough to make the page useful. If the initial bundle isn’t enough to make the page useful, then the user has to wait for two requests to complete serially before they can perform their task.


> If the initial bundle isn’t enough to make the page useful, then the user has to wait for two requests to complete serially before they can perform their task.

Since the alternative is just broken, I'm sure users with non-Google browsers would prefer a working page with one round-trip of extra latency.

Meanwhile, it causes Chrome users no harm, they are unaffected.

But anyway, there's no need for extra delay.

You can put a coarse-grain "shall we load part 2" test inline in the HTML, and/or have the server predict from User-Agent (as a performance heuristic - and this can be automated).

The above can provide the same, optimal latency to all different browsers, even when the site is a beautifully well-tuned, 0-RTT-optimised, TLS 1.3 + QUIC service with all the PUSH trimmings.

(For even better performance you can select from multiple builds using the same heuristics, optimising out compatibility branches for other browsers on all browsers, not just the favourite. But this is not necessary, just to prevent the delay being discussed. It's an extra boost.)
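
To be concrete, the inline test mentioned above can be as small as this, shipped inline in the initial HTML (a sketch; the bundle paths are made up):

    // Runs before any external JS arrives, so no extra round trip is serialized.
    var s = document.createElement('script');
    s.src = (window.customElements && CSS.supports('display', 'grid'))
      ? '/bundles/modern.js'
      : '/bundles/fallback.js';
    document.head.appendChild(s);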

There is no latency excuse, just engineering deciding not to bother.

Something Google should be criticised for, since they can afford to do better, have a monopoly, and claim to be a thought leader on open web standards.


To be clear, I have no insight into the specific topic of the article (and I probably can’t comment on it if I did). I’m simply defending the process of segmenting experiences by user agent, which the original comment suggested could have been the cause, and also suggested that it was never best practice to do so. I think I’ve made a pretty good case that the original response from the server ought to be informed by the user agent string in an environment like YouTube, mainly for latency reasons.

As for whether or not this particular issue is best solved by whitelisting and testing the new edge UA for this experience, or adding feature detection and testing that behavior with the new edge, isn’t really the same topic. And if this is what is happening, I’m not sure this method is “broken” here, it’s serving a working page to users with a preview browser.


I hope I've shown that I agree with you, that "the original response from the server ought to be informed by the user agent string in an environment like YouTube, mainly for latency reasons".

But, differently from YouTube, it's possible to serve a latency-optimised, user agent-optimised response in such a way that it's just a performance heuristic, without changing the actual functionality, which is still determined by traditional feature detection to the extent that is possible to do.

When it's implemented that way, it's quite safe to serve the "wrong" response sometimes.

Because it's safe to serve the wrong response, that allows automatic latency optimisation, instead of brittle, hard-coded rules that will be wrong sometimes.

Inputs are user-agent, client IP, client cookie state, and feedback about whether other resource bundles were fetched as well, or other kinds of feedback such as detected features. Output is which bundle to serve.

This may improve performance over a hard-coded approach, because it adapts continually to new user-agents out there. Yet it is still coded as traditional feature detection (for features where that's possible), and works the way feature detection has always worked.


Original comment author here -- we can talk about using user agents to detect edge-case bugs as something that's valid to do. We can also talk about whitelisting specific user agents that don't require polyfills so you can speed up initial page load. I think these are reasonable things to suggest.

So you're correct that there is some room for nuance and you're right to point that nuance out.

But Youtube isn't doing either of those things -- it's assuming that a user agent it doesn't recognize by default isn't supported, and rather than serve a feature-detecting version of the page, it's serving users a message to use a different browser. I still maintain that this is basically never the right thing to do (or at least it's so close to "never" that doing so should require some very good, app-specific justifications).

People are debating whether or not you can use JS to feature-detect every single specific bug, and whether or not you need to go the long path on feature-detection and do everything up front, or whether you can skip some steps for performance optimization, and they're missing the entire point of progressive enhancement. If you can detect Chrome or IE 6 and speed things up or avoid a bug, fine. But it's bad practice to rely on that detection.

It's bad practice to have a single list of browsers you support and to turn everything off everywhere else. Partially because it's bad for the web in general, and partially because from a more practical perspective it's not always safe to assume that just because you see a user agent every feature you want will be supported -- extensions exist, custom-compiled browsers exist, and sometimes user agents are just reported wrong for fringe browsers. It's also just plain bad for the user because falling back on feature detection and adding a few seconds to page load is nearly always preferable to being told to download and install a new program.

My original comment wasn't that user agent strings should be universally ignored, it was that it's unfortunate to see a company as respected as Google so fundamentally misunderstand something that's been widely taught as best-practice on the web for over a decade. I kind of feel like some of the people objecting are missing the forest for the trees -- talking about whether there's ever any valid reason at all to ever try to detect a specific browser instead of the underlying idea that the web uses a living standard and that it was designed to be client agnostic.


It's Youtube, not high frequency trading.


The logic is apparently that we can't do progressive enhancement because it would force us to make a second network request, but we can rely on unstandardized APIs that force us to ship polyfills that make our site slow on every other browser that's not Chrome.

Every decision, from progressive enhancement to supporting bleeding-edge features comes with tradeoffs. Youtube is still choosing tradeoffs, it's just choosing them based on what will work best in Chrome specifically.


I agree there are just a bunch of trade offs, and this shows you can’t please everyone. If you ship large bundles of polyfills people are sad they get some slow JavaScript, and if you give them fallback html and JavaScript that renders quick and supports their feature set, they are sad they didn’t get the same feature set as chrome latest. You have to minimize the fallback sadness as well as keep pushing better and better cutting edge experiences. This demanding environment is one thing I do like about the job even if it makes me feel constantly unqualified and doomed to fail :D

Speaking personally from my time before working at Google, the second network request is definitely the worse of the two options if you can’t do anything until it completes for most people- the round trip time kills you in the median US case I am familiar with.

At Apple for my small project we didn’t have these constraints so we just shipped a giant JavaScript bundle with feature detection and browser-side re-rendering that worked alright.


No, this shows you're not interested in pleasing anyone. There is no universe in which instructing the user to change browser software is preferable to adding milliseconds of latency at pageload. This attitude is so wrong it's almost sinister.


I am not defending YouTube redirecting a UA it hadn’t seen to download chrome when that UA explicitly went to /new for the new experience. I don’t know what happened there, and if I did I probably couldn’t say.

I am defending using the UA to tier responses based on the principle that it lowers the latency before the page is useful to the user, and I don’t see how that is a sinister practice.


You're right, Youtube is the largest content streaming website in history. HFT'ers would not even be able to host it.


Most of the big tech companies (amazon, google, facebook most notably) have discovered an extremely strong correlation between latency and $$$. Every saved millisecond of average load time tends to generate an incredible amount of money (like, 10s of millions of dollars a year per millisecond).


And subjectively, the latency on most google services (except for the search page) is pretty bad. Maybe Firefox Nightly isn't modern enough?


It's also true that many browsers appear to have features that are actually broken, so sometimes, in order to get the application to work 100%, you have to do user-agent sniffing... think IndexedDB and Safari.


That's all fine, but at the same time these conclusions seriously sound like an excuse that leads to cherry-picked metrics optimization. In other words: "we consider low latency across browsers we support an SLO, and to maximize that we just don't support that many browsers."


> This isn’t practical at all in a latency sensitive environment.

Google doesn't really care that much about latency, they only care about it so that they can hide from you that they are loading all that extra javascript that spies on you.

And btw, watching videos is not really latency sensitive, it is more bandwidth sensitive... (I don't really care if I have to wait 1 second to start watching a 2-hour video, but I will be mad if it is buffering every 30 seconds.)


>and b) you are in a latency sensitive environment where it is unacceptable to degrade performance of the latest browsers

As you don't work at YouTube specifically, you might not know they don't have this constraint. They are perfectly happy to let YouTube run like complete dogshit if the user is on Firefox.


The detection only needs to be done once on a given browser; only an extremely incompetent engineer would suggest performing that detection on every single page.

As such, I'm unsurprised to learn that a Google engineer thinks that the only possible way to solve this is to perform feature detection, repeatedly, on every single page.

The reality is that Google hires mediocre and lazy engineers, and that they also have a vested interest in pushing people into their proprietary browser. The result is that they do the thing that's best for Google as a business (pushing their invasive browser), and their incompetent staff line up to publicly defend this selfish and immoral decision.


Disclaimer: I work at Google but not on YouTube, speaking in a personal capacity.

Long story short, it doesn’t really work this way.

As an example, go to google.com on Internet Explorer 6. No, I’m serious. You will find that it loads just fine and even functions, over plain HTTP. It’s only a small subset of functionality, but it does actually work.

You can’t do that with feature detection.

YouTube may have crossed the threshold where feature detection should be usable for all of the supported platforms and fallbacks, but feature detection is not a panacea; it does take time to execute feature tests, and it can be difficult to detect certain bugs.

Most web developers are definitely not Google, and shouldn't copy what Google is doing without knowing that they need to. It’s generally ill-advised to copy things without understanding them anyways.

Not assuming malicious intent makes sense to me; the Edge user agent was already very tricky due to the fact that it has long pretended to be Chromium anyways.


    Mozilla/5.0 (Windows NT 10.0; Win64; x64) 
    AppleWebKit/537.36 (KHTML, like Gecko) 
    Chrome/64.0.3282.140 Safari/537.36 Edge/17.17134
Hmm... looks like it's entirely practical to look at the Chrome/version for feature detection. And that they are specifically adding Edge/* detection to disable Edge instead of detecting as chrome compatible.

Edit: sorry, wrong/old Edge above... new Edge below.

    Mozilla/5.0 (Windows NT 10.0; Win64; x64) 
    AppleWebKit/537.36 (KHTML, like Gecko) 
    Chrome/76.0.3800.0 Safari/537.36 Edg/76.0.173.0
And the new "Edg" instead of "Edge" means they likely had to ADD detection to disable it.
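
A hypothetical sketch of the point (not YouTube's actual code): a plain Chrome check already matches Chromium Edge, so excluding it takes an extra, deliberate test.

    const ua = navigator.userAgent;
    const looksLikeChrome = /Chrome\/\d+/.test(ua);   // true for Chromium Edge as well
    const isChromiumEdge  = /\bEdg\//.test(ua);       // only matches the new Edge

    const serveModernExperience = looksLikeChrome && !isChromiumEdge;  // requires added code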


Yeah, Edge/* is going to detect separately from Chrome. The issue is, prior non-Chromium based versions of Edge would also have a user agent that looks like Chrome as well, so those older versions will need to be treated differently. :(

It's the same reason why everything still has Mozilla/5.0.

To be completely clear, I am suggesting this is a bug. It even looks like a bug, since it ironically tells you to upgrade to "...Edge" while saying Edge is unsupported.


This bug is specifically detecting the new version of Edge (which uses "Edg/") and allows users with other strings "E/", "Ed/", and even Spartan based "Edge/".

Someone intentionally added code to single out Chromium based Edge.


"malicious intent" is an interesting concept.

I agree it seems unlikely this particular feature/engineering choice was introduced with the conscious intent to penalize non-Chrome browsers or Edge specifically by withholding a UI from them.

However, when this kind of thing happens, it can be taken as indication that supporting all browsers as well as possible is not as high a priority as other things. Supporting Chrome very well may be a higher priority than supporting every/any other browsers as well as possible.

After all, doing it in a "feature detection" way may very well be very difficult and thus expensive/time-consuming. But it can surely be done. It just means doing it isn't a high enough priority to justify what it would take. These are the sort of decisions anyone developing software needs to make all the time; that's real. And the priorities and values explicitly or implicitly chosen are real too.

Is it "malicious intent" to not be willing to put an arguably inordinate amount of resources into supporting non-Chrome browsers, or to effectively prioritize the Chrome UX more than other browsers? I dunno, but it's a thing.

It's also worth remembering though that the reported issue is in a preview version of the browser (and on a brand new barely-non-preview version of youtube), and I wouldn't be too surprised if it's fixed before it goes out of preview. This is one of the points of having preview releases of course, to find bugs -- both bugs in user-agents and bugs in websites.


This was a dev/nightly build. This article is just outrage bait for the hacker news community...


>As an example, go to google.com on Internet Explorer 6. No, I’m serious. You will find that it loads just fine and even functions, over plain HTTP. It’s only a small subset of functionality, but it does actually work.

Why should this be surprising?

What at all about google.com requires all the modern fancy JavaScript? There's the search bar, which only needs to be a basic HTML text input box, and that's 98% of the reason I'll ever be at google.com.

I can guess that there are all sorts of little features crammed in that I don't know or care about, and that without the most up-to-date everything, some of those features may not be possible or may not be as efficient or whatever. Except there's a key phrase in there, "I don't know or care about", which means they're not why I'm at google.com whatsoever.

There's an attitude that seems to be common in SV, especially at Google, that puts way, way too much focus on using the latest things and having everything be automated and as low-latency as possible... for what? Why? It's cool that you can cut so many ms off some request that it pushes the physical limits of the metal, but I don't really care at all, and in fact I'll be prompted to get angry, if that request is part of some feature I'm not interested in while the one I am interested in is clearly neglected.

I know everyone isn't me, but I find often that my peers have similar feelings. Just look at Android Auto for a myriad of examples of significant buggy behavior that Google certainly has the manpower to fix, but chooses not to.


> You can’t do that with feature detection.

This... is just wrong. If you start from that base feature set that works on all browsers, and then utilize feature detection to progressively enhance the experience, you can definitely do just what this statement says.

It's just easier to rely on UA sniffing, but don't spread misinformation that it isn't possible. It is a LOT more work, particularly to ensure that the experience is as good as possible on both ends of the spectrum.

It's possible that there is better performance using UA sniffing, as the server can decide what code subset to send. So I don't discount it as a potentially valid solution, but just not as "the only way". Thinking like that is how we end up in situations where "best viewed in ... browser" happens.


But there are genuinely things that can't be feature detected.

A bug that happens 1% of the time due to reasons still unknown, or features or bugs that need something to happen on the user or hardware side to detect.

For example, I had an issue once where when using WebRTC on some devices with 3 rear-facing cameras on android it would horizontally flip the "long range focus" camera. There is no way to feature detect that. Hell there wasn't even a way for me to feature detect that I should use the second of the 3 cameras on that device (the first was a fisheye-style wide angle lens, and the 3rd was a middle-ground lens, but the second worked best for our usecase).
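
For anyone curious why that can't be feature-detected, a sketch of what the API actually gives you:

    navigator.mediaDevices.enumerateDevices().then(devices => {
      const cameras = devices.filter(d => d.kind === 'videoinput');
      // cameras[0], cameras[1], cameras[2]: wide angle? standard? long range?
      // Nothing here says which lens is which, and the horizontal-flip bug
      // only shows up once you actually start capturing frames.
    });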

I've also used UA detection on a browser bug that would cause fetch requests to fail if the user went offline for a split second inbetween requests. There is no way for me to detect that before the bug happens, and once it happened the "failures" looked like it just never came back online. UA detection was the only tool I had to fix that until it was fixed at the browser level.


You're still talking about progressive enhancement though.

There's a difference between using a user-agent check to find a very specific environment where a bug will occur, and using a user-agent check to determine which of 3 browsers you accept. Used properly, user-agent tests can be a kind of adjacent form of feature detection. Used improperly, and your site is best viewed in IE, even though the user is on a browser that they know will work.


UA detection for known bugs is very different from UA detection to enable standardized features.


@Klathmon

This is also true of your fallback features. They may be implemented incorrectly in an unknown browser too. But it's not reasonable to assume that. You should assume the feature works, unless the UA is on a blacklist.


No, when I say you can’t, I really mean you can’t. If you try visiting https://google.com in IE6, you get the other part of the problem. It doesn’t work. The cipher suites are too old.


A really annoying case where feature detection doesn't work is WebGL on a machine with a blacklisted GPU. Every detection method will report that it works, but it won't. The "feature detection" ends up being rendering something and seeing if you got an image or not.

There are edges where detection falls over.

Mind you, agent string sniffing won't help you either.
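
The render-and-check fallback looks roughly like this (a sketch; context creation alone isn't enough on those machines):

    function webglReallyWorks() {
      const canvas = document.createElement('canvas');
      const gl = canvas.getContext('webgl');
      if (!gl) return false;
      gl.clearColor(1, 0, 0, 1);               // clear to solid red
      gl.clear(gl.COLOR_BUFFER_BIT);
      const px = new Uint8Array(4);
      gl.readPixels(0, 0, 1, 1, gl.RGBA, gl.UNSIGNED_BYTE, px);
      return px[0] === 255;                    // did the red actually land?
    }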


If you actually want to support IE6 you have to use some abomination of non-HTML as your base feature set, though.

But UA sniffing should only be used for absolutely broken browsers. For functional browsers baseline+enhancements is very doable, and unknown browsers should be assumed functional.


They're specifically looking for the string "Edg", not "E", not "Ed", not "Edge", they're specifically detecting Microsoft Edge.

Try it yourself:

Go to https://www.youtube.com/new

with

Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3800.0 Safari/537.36 E/76.0.167.1

Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3800.0 Safari/537.36 Ed/76.0.167.1

Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3800.0 Safari/537.36 Edg/76.0.167.1

Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3800.0 Safari/537.36 Edge/76.0.167.1

There should be no doubt this is malicious. They're specifically targeting Microsoft Edge. Varying the string away from the real Edge string (Edg/76.0.167.1) gives you the proper experience. They're badlisting Microsoft Edge (Edgium/Chromium based Edge), not goodlisting Chrome.


Not really. I tried it myself using Edge and it didn't work either. It looks more like a whitelisting of "supported" browsers (such as Safari or Firefox) than a blacklisting of Edge.

https://postimg.cc/GTwV1P79

https://postimg.cc/r0F3yFM2

However, it's still suspicious they aren't adding the proper support for Edge Chromium.

Also tried using the Brave user-agent. Also failed.

https://postimg.cc/V0qTsSpg


I suspect you didn't try out the user agent strings. Only one of those 4 is blocked.


> Have they already forgotten the dumb "best viewed in IE" banners that magically vanished as soon as the user-agent changed?

They have not. It's just that Google is the top dog now, and doesn't care.

While Microsoft is the underdog, and Linux is not a cancer, and they love open source.

And in 20 years this may very well reverse again.

The bottom line is not to expect shareholder-driven multinationals to do what you think is right. They are neither good nor bad, just without any need for morals, and constrained only by the market and the law (if enforced).


> They are neither good nor bad, just without any need for morals, and constrained only by the market and the law (if enforced).

What do you mean by "if enforced"? That companies should be allowed to explore and exploit weaknesses in the legal system, and that we should think this is acceptable behavior?

Translated to humans: a person who constantly tries to find the boundaries of the law is most likely an asshole.


They aren't making any judgement about the moral quality of the law or enforcing it.

Just that the law can't really constrain you if it isn't enforced on you.

If you have some way of getting the law enforcer to ignore you, then you don't really care what the law is.


That's not true; the market and the law are based on common culture and morals. If stuff like this is seen as immoral, companies that do it will have trouble recruiting, raising capital, maintaining a social presence, etc.


In my experience, while many people don't like to admit it, useragent detection is still one of the best ways to handle many bugs and edge cases in large applications.

Feature detection works well for new features or well documented failures, but it quickly begins to break down in many edge cases.

For example, mobile Safari shipped a broken implementation of IndexedDB for a while and detecting that was a big pain, but swapping the known bad versions to use WebSQL or localStorage worked great. Edge (pre-Chromium) had some really weird bug with the JIT and the JavaScript `.call` function for a bit, but there was no realistic way that I could detect just that bug, so I fell back to user-agent detection.
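
The checks themselves end up very narrow, something in this spirit (the version range and helpers here are illustrative, not the real ones):

    function storageBackend() {
      const ua = navigator.userAgent;
      // Known-bad IndexedDB on a specific mobile Safari range (illustrative check)
      if ('indexedDB' in window && /iPhone OS 8_/.test(ua)) {
        return localStorageBackend();   // hypothetical fallback wrapper
      }
      return 'indexedDB' in window ? indexedDBBackend() : localStorageBackend();
    }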

There are tons of bugs that I've found over just the last few years that are due to one bad version, or some specific OS/browser/hardware combo, or are just flakey and hard to trigger with code in the page, therefore they are hard to "feature detect".

I have a feeling the amount of those kinds of bugs that you see and their frequency only gets worse when you have monumentally huge amounts of traffic and users as someone like Youtube does, and while I think we should still hold them to a good standard, I can see why they would fallback to UA detection.

After all, it's better to have a well working experience for most users (even with a subset of the features) than one which breaks 1% of the time (and every time it breaks, like it or not, the users are going to blame the site not the browser).

What we really need (in my opinion anyway) is a way of doing UA detection that is much more scoped to specific versions, removing the ability for those checks to impact new versions or other browsers (either accidentally or on purpose).

Something like setting the browser name/version to a hash of the version number and a secret for each version, and possibly doing the same with other parts of the UA (like hardware/device name and OS name/version).

Then people like YouTube could block known broken or bad user agents from getting some features to avoid serving code that breaks for those users, and then the onus is on them to update their lists every time a new version comes out. There are obviously ways around it (UA checker as a service?), but it at least makes it harder for laziness to take over.
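
Rough sketch of what I'm imagining, server-side (names made up): the browser would send an opaque per-version token instead of a readable version string.

    const crypto = require('crypto');

    function uaToken(browserName, version, perVersionSecret) {
      return crypto.createHash('sha256')
        .update(browserName + '/' + version + ':' + perVersionSecret)
        .digest('hex')
        .slice(0, 16);
    }

    // Sites that want to special-case a known-broken release have to look up that
    // exact token; unknown tokens (i.e. new releases) get the default experience.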


To a certain extent I feel your pain and get where you're coming from. The problem is --

A) The user agent isn't always specific enough. I've run into rendering bugs that were specific not just to individual browser versions, but individual browser versions on specific platforms. I've seen bugs fixed and introduced that didn't trigger user agent updates.

B) Browsers lie about their user agents. Firefox fingerprint protection normalizes the browser version reported in the user-agent to reduce the amount of information leaked.

So yes, agent checks can be useful. I've used user agent checks to reduce CPU-heavy animations on older versions of IE, or target rendering bugs that literally can't be detected any other way. I get it. But when I apply those checks, I'm doing it in the spirit of progressive enhancement -- and recognizing that those checks are error prone and might fail in the future. I never gate an entire application behind a whitelist or blacklist.

Bear in mind, all of the stuff you're talking about falls under the category of "I've found a bug in a weird situation and I just want to try and figure out if I'm in that situation." Not, "I've found the X situations where my site works, and it will only work in those situations." Very different approaches to web development.


Yeah I don't support gating entire sites behind checks like this outside of some situations (like it's a tech demo using a new browser API, or a public beta where they want to limit it to a few browsers until it's stable enough to open up to the world).

But I also absolutely get the allure of whitelisting "known good browsers". I've literally never seen a user hit a bug, and then blame their browser. Hell, I've personally hit bugs and blamed the website only to find out later that it was a browser bug. It's a shitty situation all around, and I don't have a good universal answer. I try to avoid doing that at all costs, but I also have to admit that a piece of software I currently work on is gated to only Chrome and Safari due to the extreme number of bugs that were coming from alternate browsers and the fact that we simply don't have the time to chase them down right now.

It sucks, I feel bad about doing it, and I don't want anyone else to do it for many reasons (including the health of the web in general), but until I personally can find a good way out, I also can't in good conscience be mad at others for doing the same thing.


> Chrome and Safari

Not that I know anything, but that's a weird set of browsers to support. Apple can't update Safari without an OS update from what I know, and even then it is just on macOS and iOS. Wouldn't it be cheaper and easier to force users to get Chrome?


The problem I often see is sites using whitelists, so when you use a less popular browser you get the "use Chrome" page. At least put up a banner that my browser is not supported, but let me try the page.


Could you try the faulty operation and see if it failed?


Sometimes, but not always...

For example, the Edge bug was at [1]. It didn't reproduce when the devtools were open, it only happened if the optimizer tried to optimize it which was non-deterministic, and it would only trigger in some cases in pretty weird spots with a lot of supporting code.

Feature detecting that would be a nightmare, but UA detecting it (since I now know the exact versions impacted) is super easy and simple and the fix is extremely small. I don't have that in our codebase any more, but it was a perfect example of a bug which can't easily be feature-detected.

[1] https://github.com/microsoft/ChakraCore/issues/1415


The problem is you can't reliably detect non-deterministic bugs, so in those cases you essentially have to use the UA to decide if you can use them.


> This isn't my local pizza shop, Google engineers should know better. Have they already forgotten the dumb "best viewed in IE" banners that magically vanished as soon as the user-agent changed?

Google engineers are now subject to the same forces that IE engineers faced in the IE 6 days. That includes competition with other browsers and the desire to keep a leading position, as well as legitimate concerns over latency and reliability. There's also the difficulty of understanding the end-user experience from within a large company. The story isn't simple and one-sided. It's the mess of plans hitting reality.

> it's just unreliable and error-prone over the long term, and hurts the overall web browser ecosystem.

Power corrupts because it's hard to see one's own small abuses of power, and these form escalating habits over time. A goal like "Don't be Evil" doesn't fall to a single mustache twirling guy in a black hat and cape. It gets sandblasted by a million little things over time.


Is Youtube still using Polymer 1.0 with the deprecated shadow DOM v0 that's only supported in Chrome without polyfills [0]? I still use the old version because the new version was annoyingly slow in Firefox last time I checked, and I have no reason to switch to the newer one with lower information density and contrast.

[0] https://www.theverge.com/2018/7/25/17611444/how-to-speed-up-...


>Is Youtube still using Polymer 1.0

Yes, if you are logged in. If you switch to private mode, YT will serve you v3.2:

    >> Polymer.version
    "3.2.0"


I get Polymer version 3.2.0 when I'm logged in. They're probably staging a partial rollout which is why switching to private mode makes a difference.


You don't? And you think the fact that Microsoft also switched to Chromium now, after "working together on this", is also a coincidence? Or just an attempt to snuff out Mozilla and Safari from the market? After all, a duopoly is better than actual competition.


> I don't assume any ill intent.

You are being a bit naive.


> user-agent check ... don't assume any ill intent.

A user-agent check is ill intent for exactly the reasons you state:

> we've known for years that feature detection is preferable to user-agent detection

Wasn't Google the company that pushed this in the early days of Chrome?


> Wasn't Google the company that pushed this in the early days of Chrome?

The official party line has never changed. From what I know, Google still recommends that we mortals use feature detection.


User agent for Edge-Chromium is a bit of a mess. Usually it identifies as "Edg", sometimes as Chrome, and sometimes as Edge (the old one).

Specifically, that old Edge agent comes up for streaming sites because old Edge has different DRM systems and was able to serve 4K streams from Netflix and others, which Chrome did not do. New Edge still has those capabilities, so it doesn't want to be served the Chrome compatible version.

I wonder if YouTube somehow ended up triggering the legacy Edge user agent on account of being a video site?


This is probably it; the user-agent detection library likely got updated to include the new Edge Chromium build as "Edge browser", so YT doesn't know the difference between Chromium-based and Trident-based Edge.

For reference:

Automatic user-agent in Edge (Trident engine):

    Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36 Edge/18.18362
Automatic user-agent in the Chromium version:

    Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3739.0 Safari/537.36 Edg/75.0.109.0


Oh man, what a mess! The UA string mentions almost every browser except the one it actually is...

I understand the reasons, but I think it might be time for a big "reset", and for browsers to stop lying about who they are.


It does mention it at the very end. Either "Edge/18.18362" or "Edg/75.0.109.0"

But yes, user agents are terrible.


> but I think it might be time for a big "reset", and for browsers to stop lying about who they are

The problem is, you can't do this unless every website also "resets" at the same time, because the bad UA detection code will still be out there. The vendors aren't any happier with this situation than we are, but they do it because they have no choice.


Who still does user-agent sniffing? I can't remember the last time I saw UA-detection code. Feature-detection and polyfills have been a thing for more than a decade at this point.

EDIT: Just for kicks, I'll try it. I just downloaded a User-Agent extension, and I'll set my UA to "Chrome 74", and see if anything breaks.


I was working on a product that did user-agent sniffing to figure out which image formats were supported (when to serve WebP). A feature-detection based approach is substantially slower because it breaks the preload scanner: https://www.jefftk.com/p/why-parse-the-user-agent
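
For reference, the feature-detection alternative is usually something like the check below, which can't run until JS executes; that's exactly why it defeats the preload scanner (sketch):

    function supportsWebP() {
      const canvas = document.createElement('canvas');
      canvas.width = canvas.height = 1;
      return canvas.toDataURL('image/webp').indexOf('data:image/webp') === 0;
    }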


That was true in 2014! But since 2015 we've had the <picture> element to pick different versions of an image based on screen or capability.


After almost a day, the only website that broke is Netflix; they send me to a help page about the browsers they support.


I was actually suggesting it could be on Microsoft's side, sending the old "Edge" UA instead of the new "Edg" one. But it could easily be a screwup in Google's UA detection treating the new version the same as the old one.

Big question for my idea is whether Edge's "use old UA for streaming sites" is a hardcoded list of a few sites like Netflix, or if it's something that might have flagged YouTube as being a video site that should get Old Edge agent by mistake.

EDIT - MS's useragent overrides are hardcoded to specific domains, so it wouldn't be identifying as Old Edge for YouTube unless someone at Microsoft manually set it to do that. Probably Google's fault for miscategorizing the new UA.

>One section of the JSON configuration file is called EdgeDomainActions and is a series of rules that specify what browser Microsoft Edge should impersonate when visiting a particular site. You can see the EdgeDomainActions config section below.

https://www.bleepingcomputer.com/news/microsoft/the-new-micr...


Technical correction: Edge had its own engine, the Spartan rendering engine (EdgeHTML).

> The Spartan rendering engine (edgehtml.dll) is a new component and separate from Trident (mshtml.dll). The new engine began as a fork of Trident, but has since diverged rapidly over the past many months, similar to how several other browser engines have started as forks prior to diverging. The new rendering engine is also being built with a very different set of principles than Trident - for example: a focus on interoperability and the removal of document modes.

https://www.neowin.net/news/whats-powering-spartan-internet-...


Thank god they forked it. Parts of Trident predate unit testing being considered a best practice. The culturally accepted best thing to do was to wall off those parts and never touch them, because regressions would be impossible to find.

I once did a bug fix for an assertion in the Trident. It’s still one of the hardest bugs I’ve ever solved. It was hundreds of stack frames deep in recursion (DOM layout code) in a function that was several pages long and had bidirectional goto statements. If I didn’t have the amazing features of Windbg it would have been impossible to solve. In the end it was a variant of the C++ slicing problem (if you store a subclass in an array of superclass you’ll chop off the member variables of the subclass).


Double checked and the windbg feature I alluded to is now publicly available. Time travel debugging allowed me to pop the assert and then reverse the program state to eventually find when things started to go wrong. If you’re a windows developer and haven’t yet tried out this feature I strongly recommend it.


Surely this is the nail in the user-agent-sniffing coffin


Wishful dreaming. Bad habits never die.


Except per https://news.ycombinator.com/item?id=20033057 going to the page with the "Trident-based Edge" UA string gives the "modern YouTube" experience. Doing the same with the "Chromium-based Edge" UA string does not.


Aren't Mozilla and Chrome registered trademarks? Can you just go around pretending to be Chrome or Mozilla and not face any consequences??


> Mozilla/5.0 is the general token that says the browser is Mozilla compatible, and is common to almost every browser today. [0]

[0]https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Us...


My question was not about how they are used. My question was about trademarks. You have to defend them or you end up losing them.


They've lost them since the very beginning. Mozilla's agent initially included Netscape and Chrome's still says Mozilla.


Google employees in these threads: quit Hanlon razoring your own company. Google has done this repeatedly with Microsoft, from blocking Maps and YouTube on Windows Phone to this. The blocking is always specific and based on badlisting Microsoft version strings, and demonstrably has not affected Firefox (even ESR versions!), Safari, etc.

I've posted elsewhere in this topic (https://news.ycombinator.com/item?id=20033057) a way you can test this yourselves. They're badlisting Chromium based Edge, not, as so many seem to assume, using a goodlist of browsers they tested.

I'm sure it's difficult to imagine the "Don't Be Evil" company doing malicious, anti-competitive things, but that's what's happening here, as it's happened many times before. Punishing and deterring Microsoft Edge users is a thing that someone at Google made a conscious decision about. Test it yourselves.


Google has one of the best QA operations in the business.

As one of the Firefox developers said a few weeks ago, Google will keep "accidentally" introducing lots of pointless little changes that create glitches and irritations in the experience of using rival browsers.

It happened savagely to Windows Phone, and no one cared. I remember trying to browse Gmail on a Windows Phone; it was like going back to 1993.

It happened to EdgeHTML: YouTube videos simply didn't play on Edge.

And I'm sure this will happen to EdgeChromium. Why? Because EdgeChromium is a small threat.

Only a small threat because everyone uses Google search and not Bing. And astonishingly, in 2019, Google is still better at searching even Microsoft's own sites than Bing is. So the moment you try EdgeChromium you are confronted with that.


> Ironically, the same page states “We support the latest versions of Chrome, Firefox, Opera, Safari, and Edge.”

It shouldn't be that difficult to understand that "the latest version of Edge" means the latest production version, not the brand-new complete overhaul of the browser on a different platform that's currently in preview.


The completely overhauled browser should be detected as Chrome, or as the new "Edg"; they had to add code to disable it.


The edge preview sends a different user agent string depending on the website. Maybe edge has just started sending a new UA that YouTube doesn't recognize?


It shouldn’t be that difficult to understand that stopping a user from using your website based on the browser the user is using is bad.

Just off the top of my head: a new browser developer testing site compatibility, or a search bot indexing the website.


From https://twitter.com/gus33000/status/1133402355267977216: the .js for YouTube has html5_vp9_live_blacklist_edge=true. Chromium Edge is fully detected and has flags assigned to it: https://t.co/9TLibSDHqS


Another example to show why we need to get Firefox at least on the 20% market-share... Google is doing with Chrome exactly what Microsoft did with IE...


The abuse by the Google Chrome team never ceases to amaze me. I feel trapped on chrome simply because of my reliance on Google drive/docs, which are handicapped on browsers like Firefox.


On OSX, Firefox and Google Drive work fine, but Gmail is a whole other kettle of fish. I don't know what UI change they implemented last year or so, but it is nearly unusable on Firefox, from absurdly long load times, to a 2-3 second wait for contextual options to appear when you right click.

Switched to ProtonMail's free tier and will gradually migrate everything there aside from newsletter and software beta signups. Those emails are mostly trash anyway.


You can change email forwarding in Gmail to forward everything to ProtonMail. That's how I initially transferred. A year in and I'm still changing email addresses; I have a ton of accounts to go.


The free tier has 500MB storage, so this gives me a chance to think about what's worth migrating and what's not.


I have no problem at all using Google Drive and Docs on Firefox, what specific issues have you run into?


There are a couple of minor things. For example: inside a Doc, in Chrome you can right-click -> Edit menu -> copy/paste, but in Firefox it tells you that you must use CMD+C/CMD+V. Non-Chrome browsers also used to be unable to upload folders, but this has been fixed for a while.

Calling the experience "handicapped" is extreme, but there's clear preferential treatment toward Chrome.


If I recall correctly, this is actually due to a security restriction that Google has circumvented since it built both the browser and the application.

If you could right-click to paste, that would mean it would be possible for JavaScript to read your clipboard on any site you go to. Firefox enforces the appropriate security setting here; Chrome simply has some sort of workaround built in by Google to allow them to make the UX slightly better on their own site.


> Chrome simply has some sort of workaround built in by Google to allow them to make the UX slightly better on their own site.

The optics on that are potentially terrible.

The dominant browser from the dominant search engine company deliberately breaks, only on its own sites, a restriction it enforces everywhere else.

Ouch.


Copy/paste on Docs at least used to be a feature you got from installing an extension or "app" in Chrome... I think maybe that now is preinstalled? (Or I've just had it for so long I don't notice)

On the other hand copy from the Docs menubar and custom right-click menu works for me right now in Firefox, with no addons that I'm aware of, so maybe this whole discussion is outdated.


It didn't use to be possible to manipulate the clipboard at all using JavaScript, so sites would embed tiny Flash applets to make "Copy to Clipboard" buttons.

Now it is possible to copy text to the clipboard using the HTML5 Clipboard API [0] (I think it might still have to be triggered by a user input event, but I'm not positive), but reading the clipboard is still not possible, for the aforementioned security reasons.

[0]: https://developer.mozilla.org/en-US/docs/Web/API/Clipboard_A...
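
Roughly, the asymmetry looks like this (just a sketch assuming a page with a copy button; not Docs' actual code):

    // Writing to the clipboard is allowed from a user gesture:
    copyButton.addEventListener('click', async () => {
      try {
        await navigator.clipboard.writeText('copied from the page');
      } catch (err) {
        console.warn('clipboard write refused', err);
      }
    });

    // Reading is far more restricted: Firefox doesn't expose readText()
    // to regular pages at all, so a web app's right-click "Paste" item
    // has nothing it is allowed to call there.
    if (navigator.clipboard && navigator.clipboard.readText) {
      navigator.clipboard.readText().then((text) => console.log(text)); // Chrome prompts for permission
    }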


What are you talking about? Google Docs works perfectly fine in Firefox. What issues exist?


My main annoyance is that all Google apps on iOS will refuse to open links in Firefox, but offer options to set Safari or Chrome as default browser…


And they never retain the choice for those options for long. I’ve lost count of how many times I clicked “use Safari”.


Drive works fine for me in Safari (other than using an unreasonable amount of CPU). It sure beats installing drive sync or whatever it is called.


Yet it's easy to just keep a Chromium installation for Google Docs and Gmail and do everything else in Firefox.


Basically Google doesn't allow development build Edge users to browse the preview YouTube?

It doesn't seem like someone at Google hates Edge; it just looks like they haven't tested their brand-new YouTube on every browser yet, so they limited it to heavy Chrome users.

Or am I wrong?


Quite possibly they have their whitelist of 'tested with' browsers for the new experience, and Edge Preview wasn't one of those, so got shown the 'use a supported browser' modal.


Edge Preview runs on Chromium


I thought it was only "based on" and distinct from Chromium?


It uses the Blink web engine, pages themselves will render exactly the same.


Almost certainly unintentional. They're likely confused by a new User-Agent string or some feature detection gone wrong.


That's almost definitely the case. But there is such a thing as maliciously unintentional, and this seriously keeps happening with every browser that isn't Chrome.

For example: Google is a technically advanced company, so they must have a CI check somewhere in their YouTube release process that confirms a suite of browser tests passes before a release is greenlit. Do you think Chrome is one of the browsers included in that suite? Do you think Firefox, Edge, and Safari are as well, given all of the external evidence we have that they continually break the experience for non-Chrome users, even if those breakages were just oversights?

Maybe Edge released an update that changed a UA string, YouTube is too strict about it, and it had nothing to do with a new YouTube release. Now ask yourself: is there any chance something like this could happen with Chrome? Of course not; they coordinate and ensure the experience is perfect on Chrome. Is that less evil, given Google's overwhelming control over the internet?


The new user agent has a later Chrome/version, and changed from Edge/version to Edg/version... in this case the Chrome and Edg tokens align. In any case, they'd have had to ADD extra detection for it not to be detected as Chrome.
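
A rough sketch of what that implies, in case it helps (illustrative only, certainly not YouTube's actual code):

    // The new Chromium-based Edge UA keeps the usual "Chrome/<version>
    // Safari/537.36" tokens and appends an extra "Edg/<version>" token.
    const ua = navigator.userAgent;

    // A naive "is this Chrome?" check already matches the new Edge:
    const looksLikeChrome = /Chrome\/\d+/.test(ua);

    // So sending it the old site requires an extra, explicit exclusion:
    const isChromiumEdge = /\bEdg\//.test(ua); // "Edg", not the old "Edge"
    const serveModernSite = looksLikeChrome && !isChromiumEdge;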


Having a whitelist of tested browsers you officially support is understandable.

Having a blacklist of tested browsers you know to be so incompatible that the site is unusable for them would be even better.

But the optimal course would be: if a browser is not on your whitelist, show a warning, let the user decide to give it a try anyway, and offer an option to go back to the degraded but more compatible legacy version of the site.

I understand building your intranet tool to only work in a small range of handpicked browsers or devices. But we're talking about a public video platform by a multi-billion dollar company with billions (well, 1.9 bn last I heard) of monthly active users.


The currently agreed best practice is to detect whether a feature is supported and gracefully degrade if it isn't. Do this on a feature-by-feature basis so that users gradually get a better experience as they upgrade their browsers.

However, as other comments have pointed out, feature detection is not always optimal (or even possible), so a whitelist is sometimes needed. This is where YouTube fails: it wrongly doesn't add the new Chromium-based Edge browser to its whitelist. And even then, the site doesn't progressively enhance as it detects supported features.
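
For the simple cases, per-feature probing is cheap; a minimal sketch (the loader names are hypothetical, not YouTube's):

    // Probe only the capabilities a given UI piece actually needs.
    const hasMSE = 'MediaSource' in window;                // adaptive streaming
    const hasCustomElements = 'customElements' in window;  // web-component support

    // Each piece opts into the enhanced path only when its own
    // prerequisites are present, and falls back otherwise.
    if (hasMSE && hasCustomElements) {
      loadModernPlayer();  // hypothetical enhanced bundle
    } else {
      loadBasicPlayer();   // hypothetical degraded fallback
    }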



https://files.catbox.moe/8a4psl.jpeg

Wonder if they see the irony of blocking adblockers.


They're not blocking ad blockers; they're blocking users (or browsers) who use ad blockers. I don't see it as irony. They're just trying to protect their revenue stream, which I think is fair game; they're free to decide whether they want to serve the data to a given client or not.


> randomly disabled the modern YouTube experience

Sounds like an experiment designed to measure the effect of the new version vs. old version for the new browser.

> it’s very odd that Google would prevent users... from using the modern YouTube experience

Not odd; experimentation is very common.

> This is most likely an error on Google’s part

More likely the author doesn't understand what they're talking about.


It's really time for antitrust regulators to start looking into Google, YouTube, Facebook, and Twitter already.


Why on earth is Twitter on this list? Including YouTube and Facebook is already questionable, but Twitter? Anti-trust? What?


This is the fifth or so site that Google has broken for Chromium Edge in the past month. It's clearly no coincidence; it's the same constant stream of "oops" that plagues Mozilla: never-ending "teehee, we don't support beta browsers" (while working fine with Chrome Canary) or "teehee, accidental UA detection mistake" (when Microsoft specifically made the new Edge user agent different from the old) incidents.

Microsoft should make the new Edge pretend to be Chrome, full time. Just never mention Edge or Edg or Microsoft in the UA at all.

I'm sure they can tell Netflix to detect their fancy 4K DRM differently. Of course that can be used to detect new Edge still, but then at least it's just that much harder to say "oops" when you have to go out of your way to detect a specific feature.


This is really ugly as a practice (get MS to use Chromium and then blacklist them from the new YouTube), but I specifically disable the Polymer (new) YouTube theme anyway. It looks bloated, and some features aren't even available in the new YouTube. The only downside is no dark theme.


I am using Version 76.0.167.1 (Official build) dev (64-bit). I can watch YouTube videos no problem. Chat does not work. In my opinion: YouTube, please disable chat for all browsers. It is a useless feature that causes problems. Thank you.


Chat on YouTube is incredibly important. Maybe you don't see value in it, but it's integral to the community feel of the site.


Update from YouTube:

"We're aware that users of a preview version of Chromium-based Edge are being redirected to the old version of YouTube. We’re working to address this issue. We're committed to supporting YouTube on Edge and apologize for any inconvenience this may be causing"

https://www.bleepingcomputer.com/news/google/google-says-the...


Shame I don't have the screenshot anymore, but Facebook used to actually blacklist links2. Changing the user agent to a random sequence of expletives allowed me to view the content...


Many here claim that this is just an innocent mistake by the Youtube team. If it is, it will be corrected. I'm going to set myself a reminder to revisit this story 6 weeks from now. If you send an email address to the email address in my HN profile, I will send you exactly one email 6 weeks from now describing briefly what I found out about the evolution of this story over the next 6 weeks, then delete your email address from my records.


I'm going to assume this is accidental, not intentional.


This is what Google is counting on. A former Mozilla exec has a strong take on Google's constant use of "oops": https://www.zdnet.com/article/former-mozilla-exec-google-has...

To me, this story (and the previous ones about all of the other Google apps breaking for the new Edge, despite it being Chromium-based, while almost no other company on the web has such issues) is going to show Microsoft what happens when they get in bed with a scorpion.

Switching to Google technologies is almost always a mistake.


How would continuing to use their own browser engine have helped them in any way? You imply that other options would have better outcomes for Microsoft without having thought through what they would be.


I suspect that's true, but it also raises an interesting question: how much effort should Google be expected to expend supporting other browsers? Not testing or optimizing in anything other than Chrome has led to numerous issues in the past that were not deliberate sabotage, but it feels like there's something dodgy about negligence that aligns with business goals, just as there was when Microsoft didn't support anything but IE or consistently broke things like MSDN on non-IE browsers.


> how much effort should Google be expected to expend supporting other browsers?

Given that Chrome is dominant, the answer is "a lot of effort". Anything less, and it looks like abuse of dominant position.


Given a comment on the article that user-agent switching seems to fix it, I'm going to assume the same. Just another overly-aggressive user-agent check, instead of using feature checks.


Can you explain? To me it looks like YouTube intentionally, not accidentally, updated their browser block list to include Microsoft Edge Preview.


They likely have a white list of browsers, not a black list, so new browsers may simply not be supported until they update the list.

Edit: And the new Edge version likely identifies itself differently.


The change in user agent would indicate that, if they hadn't specifically added something to exclude it, it should detect as Chrome.


It sends 3 different user agents in different circumstances for some reason: Edg, Edge, and Chrome. This case is definitely the Edge team's fault.


The circumstances are domain-specific, and it's been clear that "Edgium" has consistently sent only `Edg` to youtube.com since the beginning of the public release. Since it's the site's behavior that changed, it likely wasn't the Edge team's fault.


Of course this is all an accident and Google will later promise to change their workflow to ensure it never happens again (until next week).


At what point does a company consistently make enough "mistakes" that you just take them at their word that they are incompetent and stop trusting them?


Obviously not before they have completely taken over the market with no effective recourse remaining!


The polymer UI does seem to work perfectly if you set the user-agent to Chrome Windows in devtools -> network conditions.


I'm surprised Google still uses useragent parsing for browser detection.


Everyone still uses browser detection. There's no other way to determine a specific rendering engine; feature detection solves a whole different set of problems and doesn't solve for rendering quirks. In this case the browser is unknown, so they serve the most compatible version of the page.
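
That's the usual argument: you can probe whether an API exists, but not whether an engine lays something out correctly, so quirk workarounds get keyed off the UA. A sketch of the pattern (the class name and version cutoff are made up for illustration):

    // Feature tests answer "does this API exist?", not "does this engine
    // render flex layouts correctly?", so the workaround keys off the UA.
    const m = navigator.userAgent.match(/AppleWebKit\/(\d+)/);
    const oldWebKit = m !== null && parseInt(m[1], 10) < 537; // illustrative cutoff
    document.documentElement.classList.toggle('legacy-flexbox', oldWebKit);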


If by "the most compatible version of the page" you mean they deny access, you have an unusually generous definition of compatibility :)


It doesn't deny access. I just tested it; it gives you a version of YT with fewer features and a dated UI.


While they advocate against doing just that.


I wonder if they are using browser detection instead of feature detection. I don't know when YouTube started feeling so sloppy with its coding (Gmail has also started feeling this way as it went very JS-heavy). I am slightly biased against most JS in the presentation layer.


It didn't help that much of Google went all-in on Polymer when it was originally built on standards that nobody else was interested in implementing, ensuring that non-Chrome browsers got a laggy polyfilled experience. It remains an awful framework, and YouTube's tooling has been incredibly slow since migrating.


This appears to be a simple user agent issue. I doubt this was intentional and/or conspiratorial.


I spoofed my user agent as Chrome, and it works fine. I'm using the latest Edge Canary.


This kind of shenanigan is why everyone should have a user-agent spoofer on their browser. I even think Firefox should ship with it natively.

Far too many sites ban non-Chromium (sometimes even non-Chrome) browsers.


The irony that Microsoft sold out the web and jumped into bed with Chromium because they didn't want to fix their video playback, only to have YouTube kick their new browser back to a worse experience, is beyond amazing.


I am surprised we still rely on the browser's user agent. If HTTP/2 is supported on the client side, I'd consider that a modern browser, and the rest can be done in JavaScript.
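
If you did want to key off that, the negotiated protocol is actually visible from script; a sketch assuming the Navigation Timing Level 2 API is available:

    // nextHopProtocol reports the ALPN-negotiated protocol for the page load.
    const nav = performance.getEntriesByType('navigation')[0];
    const speaksH2 = !!nav && nav.nextHopProtocol === 'h2';
    // Note: "speaks HTTP/2" is a heuristic for "modern browser", not a
    // guarantee that any specific layout or media feature exists.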


Isn't this obviously anti-competitive?


Google has a track record of "uuups, it wasn't intentional" issues.


Looks like a new browser war is beginning.. I wonder if Brave can work with it


I hate this, though it is a kind of poetic justice for the way Windows 10 fights tooth and nail to keep you from switching your default browser away from Edge.


As a Chrome user, you might not realize this, but Microsoft's behavior here is no worse than how incessant Google is about trying to get you to switch to Chrome, and that started like a decade before Microsoft got more aggressive about Edge marketing. Microsoft is arguably only trying to reach parity with the competitor that has bullied them behind the dumpster in the schoolyard for too long.

Every single Google page has popups about switching to Chrome. Sometimes Gmail has had both one on the bottom and one on the top at the same time.[1] Besides the general breakage for anyone not using Chrome, as seen in the article, Google's constant advertising for Chrome in every single one of their products is obscene, and no matter how many times you click "No thanks", it comes back.

I don't want to say "No thanks, Google". I want to say "F--- off, Google".

[1]Actual screenshot of repetitive popup behavior in Gmail: https://pbs.twimg.com/media/DoEPgo2V4AA4Ql5.jpg:large


I'm a Firefox user, and I also hate the way Google abuses its power with Chrome. It hits a little closer to home, though, when it's the OS itself telling you how to use your computer and not just a website.


I just see that as each company operating from their respective positions of strength. Or in antitrust terms, using the market they're dominant in to improve their standing in another market.

I also would argue that the difference between operating system, web browser, application, and website has increasingly blurred, as we have browsers that are the whole OS, apps that actually are embedded websites, websites that act like apps, etc. I think bright lines between various areas of the user's vision are largely gone, particularly on any OS or app which arbitrarily downloads updates from the web anyways. (This might be a subtle ad for "get Linux if you expect your OS to be yours and do what you tell it", I suppose.)


The difference is Google provides me free services in exchange for advertising. I've already paid Microsoft. How much do I have to pay them to stop constantly trying to pick my pocket?


What exactly do you mean by "fights tooth and nail"? I had to reinstall my laptop a few months ago, and the experience was "navigate to Firefox download site, download, install, set default browser"... Same as it has ever been.


The Bing search results beg you not to install it; then on the Default Apps menu, Edge gets a "recommended for Windows 10" label, unlike the other options; THEN, if you try to select something else, you get a popup that says "Before you switch, try Microsoft Edge - it's new, it's fast, and it's built for Windows 10." and a "Check it out" button in a bold black box, with "Switch anyway" in light grey-on-grey text underneath.

It's scummy and it's insulting.


And some things ignore your default browser and force-launch Edge anyway (see https://www.ctrl.blog/entry/edgedeflector-default-browser.ht... for more details and a workaround).


Off the top of my head:

* when switching default browsers, Windows suggests you give Edge a try instead

* at some point I got a Windows notification suggesting that I try Edge instead of the browser I was using

IIRC they used to nag users more in the past but I think the notification I got also included an option to not be reminded again.


Literally every Google website tries to get me to download Chrome whenever I use Firefox or IE. Every. Single. Time.

2 attempts from Microsoft... not so bad.


I agree that two attempts isn't so bad. I think most users are upset because the notifications happened a lot more frequently in the past and using notifications from the OS for ads comes across as pretty slimy (because you only expect the OS to tell you when something important happens).


iOS, at least, sends just as many "Tips" notifications that also border on being ads for Apple and trusted third-party applications, and I've never seen anyone complain nearly as harshly about "ads in iOS".


They have been doing this to Firefox and Edge for ages. See https://www.theverge.com/2018/12/19/18148736/google-youtube-...

And check out the embedded links to similar accusations by the Firefox CEO among others. It's hard to believe this isn't intentional.

What's hilarious is that the new Edge IS essentially Chrome, making their BS more obvious than before. Chrome is the new IE



