Edgium pretends to be Chrome towards Gmail, Google Play, YouTube, and lots of non-Google services; on the other hand, it pretends to be Classic Edge towards many streaming services (HBO Now, DAZN, etc.) because it supports PlayReady DRM, which Chrome doesn't.
[Edit] Here is the full list: https://pastebin.com/YURq1BR1
I see lots of people who have to use Edge in order to get 4K content from Netflix; presumably because of the DRM issues.
Chrome uses Widevine, but one of Chrome's philosophies is that you should be able to wipe a Chrome install, reinstall Chrome, and have no trace that before/after are the same person. That means no leveraging machine-specific hardware details that would persist across installs. "Software-only DRM", essentially.
Edge on Windows (and Safari on OSX) are able to leverage more hardware-specific functionality --- which from a DRM perspective are considered "more secure", but the tradeoff is a reduction of end-user anonymity (i.e. if private keys baked into a hardware TPM are involved).
Last I checked, Chrome/Firefox were capped at 720p content, Safari/Edge at 1080p, though it looks like Edge is now able to stream 4k.
When I use Netflix, I have a much better experience.
If you're on Linux, you won't be able to stream at 1080p, let alone 4k. Netflix even went out of their way to disable workarounds that users developed.
I don't know what the resolution of my TV is, but I highly doubt it's over 1080p, if that.
The actual DRM limitations also vary by content (and region) - with some titles I get 720p on Linux while some other titles are limited to SD, while I get 1080p on Windows Edge on those same titles.
> Mozilla Firefox up to 720p
DRM on streaming and Blu-rays made it so that any usage outside basic consumption on prescribed devices is better served by illegal means.
After you pick your torrent, it takes the seedbox a few seconds to download the content. Then you can stream your download using Emby, VLC over HTTP, or whatever you prefer.
With torrents you can get a film in minutes. With Popcorn Time you are exposed the entire time you watch the film.
In Germany they monitor peer connections and send a payment demanding an out of court settlement. After two years they escalate to a court appearance in a remote town. If you don't show, you lose and they turn it over to debt collection.
But I have to say Birdman was a great film.
Why did you use piratebay unless that was your goal?
There are plenty of torrent streaming and download clients that work just as well and are just as convenient as Netflix, without needing to rely on a central authority.
Why would that be their philosophy? It sounds like some kind of privacy-motivated idea which seems contrary to Google’s typical philosophy. Or is it more about portability?
The evercookie project doesn't appear to leverage EME, for what that's worth. https://samy.pl/evercookie/
Another popular choice for high quality is Safari on macOS because it implements Apple's FairPlay.
I'm surprised, because FairPlay is publicly crackable.
Currently, only Microsoft itself even tries to implement it, in their own Chromium-based browser.
Yesterday I saw an HN comment saying you can add the disable_polymer=1 parameter (with ? or &) to the end of YouTube URLs to make the site much faster - IIRC Polymer is extremely slow on Firefox only. This extension was also linked: https://addons.mozilla.org/en-US/firefox/addon/disable-polym...
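For what it's worth, the trick just amounts to appending a query parameter; a minimal sketch (the helper name is mine, and whether YouTube still honors the parameter may change):

```typescript
// Minimal sketch of the trick described above: add disable_polymer=1 as a
// query parameter. Whether YouTube still honors it may vary over time.
function withDisablePolymer(youtubeUrl: string): string {
  const url = new URL(youtubeUrl);
  url.searchParams.set("disable_polymer", "1");
  return url.toString();
}

// Example:
withDisablePolymer("https://www.youtube.com/watch?v=VIDEO_ID");
// -> "https://www.youtube.com/watch?v=VIDEO_ID&disable_polymer=1"
```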
Unfortunately there doesn't seem to be any workaround for reCAPTCHA on FF. I generally end up opening the website in the GNOME or KDE (Falkon) browsers, which use WebKit/Blink - there it works on the first try every time.
On Firefox only. Obvious solution to which being...
Pretending Firefox is Chrome (spoofing the user agent) makes it work perfectly.
They locked the community thread, and only fixed it several days after I found the issue and posted it there.
Shame on you, google.
(That's not snark - I really don't get it. They don't appear to mind people talking negatively about a lot of other stuff they get up to. Maybe lingering antitrust fears from the 90's MS suit?)
They have been hypocrites from the start, but people got so consumed with the free Gmail and RSS Reader, along with the "Don't be evil" motto, that they decided to trust them blindly.
The current browser and web tech scenario is pretty much Google's way or the highway. So I am glad Apple kept Safari as the only option on its platforms, not allowing them to dictate everything.
They do, I'm not sure that should be called a 'fix' though.
Because it works perfectly even on Firefox, as long as you spoof your user agent to Chrome.
Installing an extension to spoof your user agent? Since we wouldn't want to reward Google for being anti-competitive.
are you actually asserting that Google is purposefully adding code/"tweaking" their web apps to run slowly on browsers other than Chrome?
do you have any evidence at all for this other than anecdotes about people experiencing Google web app clunkiness on Firefox?
That said, if it's possible to measure Firefox/Chrome performance (with altered user agents), it would make for a good blog post.
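A rough way to start would be scripting the same Chromium build with different User-Agent strings, which isolates the UA variable rather than comparing engines (a sketch using puppeteer; the UA strings and target URL are just examples, and Gmail will only show its login page without a session):

```typescript
import puppeteer from "puppeteer";

// Example UA strings to compare; only the string changes, the engine stays
// Chromium, so this asks "does the site treat the UA differently?" rather
// than "which engine is faster?".
const USER_AGENTS: Record<string, string> = {
  chrome:  "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.87 Safari/537.36",
  firefox: "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:72.0) Gecko/20100101 Firefox/72.0",
};

async function main() {
  for (const [label, ua] of Object.entries(USER_AGENTS)) {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    await page.setUserAgent(ua);
    const start = Date.now();
    await page.goto("https://mail.google.com/", { waitUntil: "networkidle0" });
    console.log(`${label}: ${Date.now() - start} ms`);
    await browser.close();
  }
}

main().catch(console.error);
```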
However, I interpreted it differently, to basically mean that because of objective fact, of all the explanations you can think of, only one of them is defensible. In other words, it's similar to saying "admit it, this is the only reasonable conclusion".
The meaning is inverted by swapping the "to" and "not".
Slack, Skype, and Zoom video calls don't work in Firefox, even though WebRTC is an open standard. But Google Hangouts works perfectly.
I'm loath to give Google credit for something that ought to be standard practice, but of the major (key word) free video conferencing options, they seem to be the only one that's Firefox-compatible.
Gmail is painful compared with OWA (work) and Zoho (home); I stopped using my Gmail account for new stuff about a year ago.
I'm trying maps.me because, unlike Google, it does offline walking directions, but I haven't used it enough.
I really want to like DuckDuckGo, but it feels like Google still provides better results.
I find I get better results with DDG than Google. YMMV I guess.
It’s hard to escape the conclusion that Google’s front-end development process is completely incompetent and has no respect for customers’ battery or bandwidth.
But I remain astonished there's an apparently very successful startup whose entire effort is a "pro" webmail client for Gmail.
It would have never occurred to me to create a better webmail.
As I reach my greybeard years, I'm increasingly aware that I've been doing everything wrong.
>I'm increasingly aware that I've been doing everything wrong.
Any theme to this?
I frequently consume web articles with a combination of newsboat + Lynx, and it's astounding how many websites throw up HTTP 403 messages when I try to open a link. They're obviously sniffing my user agent, because if I blank out the string (more accurately, just the 'libwww-FM' part), the site will show me the correct page.
I'm pretty sure that the webmasters responsible for this are using user agent string blocking as a naive attempt to keep bots from scraping their site, but that assumes that the bots they want to block actually send an accurate user agent string in the first place.
That is exactly what they are doing, and it works really well.
We blocked user agents with "lib" in them at Reddit for a long time.
Any legit person building a legit bot would know to fake the agent string.
The script kiddies would just go away. It drastically reduced bot traffic when we did that. Obviously some of the malicious bot writers know to fake their agent string too, and we had other mitigations for that.
But sometimes the simplest solutions solve the majority of issues.
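A minimal sketch of that kind of substring filter, as Express-style middleware (illustrative only, not Reddit's actual code; the "lib" check mirrors the description above):

```typescript
import express from "express";

const app = express();

// Naive filter in the spirit described above: reject any request whose
// User-Agent contains "lib" (libwww, libcurl, urllib, ...). This is only an
// illustration of the technique, not anyone's production code.
app.use((req, res, next) => {
  const ua = (req.get("User-Agent") ?? "").toLowerCase();
  if (ua.includes("lib")) {
    res.status(403).send("Forbidden");
    return;
  }
  next();
});

app.get("/", (_req, res) => res.send("hello"));
app.listen(3000);
```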
What, that's totally backwards. Anyone using a bot to do things that might get blocked by publishers fakes the string, legit purposes should really show who / what they are.
This seems like a pretty good reason in itself why they might be interested in phasing out User-Agents.
I'm saying, the hypothetical flow from Google is:
1. Our Chrome detection relies on the User-Agent header.
2. But people can just lie in the User-Agent header.
3. Let's get rid of it and use something that's harder to lie about.
Closing any feature discrepancy isn't a goal here, as far as I can see. The whole point is to lie to the user that a feature discrepancy exists when it doesn't.
You can make the argument that Google is free to do their browser detection however they want (and therefore doesn't need to solve this problem by eliminating User-Agents), but this is still an obvious example of the User-Agent header causing problems for Google.
Many people assume Google, as an upper-level business decision, purposely makes products work better on Chrome in order to vendor-lock users to the browser. Maybe that's true; or maybe it's developers being lazy and using User-Agent detection. Removing their ability to do so might actually improve cross-browser compatibility of Google products.
So Google developers don't need to improve feature detection - that part is working fine already.
The attitude of “it works on Chrome, I don’t care about anything else” is fairly widespread anyway. Just to stem the tide a little bit I’ve been developing on Firefox and Safari first, and then checking Chrome last.
I got bitten before when I made a browser game, and then noticed that it was all sorts of broken on Edge, even though Edge supposedly had all the features I needed. It turns out that Edge did have all the features I needed, but I had accidentally used a bunch of Chrome features I didn’t need. The easy way out is to turn things off when I detect Edge. The hard way is to find all the broken parts and fix them. So nowadays, I don’t do any web development in Chrome.
But I'll admit I will also poke around outside of the tests, and I'll usually only be doing that in Chrome, unless I've had a bug report about Firefox in particular. And I'll only really open up Safari when I'm testing VoiceOver. ChromeVox just isn't good enough.
You can see that they should have fought harder and escalated, but issues like this are probably not the ones most upper-middle management want to potentially damage their career for.
Some examples I've seen using the latest Firefox on *BSD:
Facebook won't let you publish or edit a Note (not a normal post, the builtin Notes app). I think earlier they wouldn't play videos but they might have fixed that.
Chase Bank won't let you log in. Gives you a mobile-looking UI which tells you to upgrade to the latest Chrome or Firefox.
In these cases if you lie and say you're using Linux or Windows it works flawlessly.
Did I mention it's the same code as a working configuration?
I think it's more likely somebody did not know how to properly parse user-agent and they blocked more than they intended to.
It sounds a lot like you are making excuses for them and bad/lazy/poorly thought out code.
The UA has a lot of limitations, and for power users it's fairly easy to work around giving it real data. I would imagine Google didn't want to keep playing around with that.
Chrome includes a unique installation id in requests to Google owned domains. They don't need any cookies or user agents to guess who you are and best of all they don't have to share that information with their competition.
A lot of stuff gets blocked for this reason. The company doesn't want you calling them because HD video doesn't work on Firefox even though you pay for HD quality, they do not test or guarantee Firefox compatibility in the slightest and yet they have to talk to an angry customer now. It makes business sense to redirect people to supported use cases when you know your product probably won't work as intended otherwise.
You don't have to agree with the decision (and you can always cancel your membership if you disagree) but they had their reasons.
Even knowing what they were doing, I fielded at least two support requests asking what was going on. I can only hope I wasn’t the only one.
Now that everything plays nicely I just happen to have no interest in Netflix for other reasons...
Or we could build for Firefox. There's always that.
Google isn't a singularity.
So, basically, Microsoft using user-agent to detect Chrome....
Which is probably why Google wants to phase out the user agent.
For sure whatever Google invents to replace it will not be so easily circumvented.
Gee, I wonder how this is going to end: https://webaim.org/blog/user-agent-string-history/
I don't really understand how this will result in any real difference in privacy or homogeneity of the web. Realistically every browser that implements this is gonna offer up all the info the server asks for because asking the user each time is terrible UX.
Additionally this will allow google to further segment out any browser that doesn't implement this because they'll ask for it, get `null` back and respond with sorry we don't support your browser, only now you can't just change your UAS and keep going, now you actually need to change your browser.
And if other browsers do decide to implement it, they'll just lie and claim to be chrome to make sure sites give the best exp... so we're back to where we started.
It does a little: sites don't passively receive this information all the time, instead they have to actively ask for it. And browsers can say no, much like they can with blocking third party cookies.
In any case I'm not sure privacy is the ultimate goal here: it's intended to replace the awful user agent sniffing people currently have to do with a sensible system where you query for what you actually want, rather than infer it from what's available.
(Disclosure: I work at Google, speaking only for myself)
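For illustration, here is roughly what the "ask for what you actually want" model looks like from the client side with the UA Client Hints JS API, navigator.userAgentData (Chromium-only at the time of writing; this is a sketch, and the field names follow the Client Hints draft):

```typescript
// Sketch of the "query for what you need" model via navigator.userAgentData.
// Chromium-only for now, hence the feature check and the `any` cast, since
// TypeScript's DOM typings may not declare the API yet.
async function describeClient(): Promise<void> {
  const uaData = (navigator as any).userAgentData;
  if (!uaData) {
    console.log("UA Client Hints not supported; fall back to feature detection.");
    return;
  }
  // Low-entropy values are available synchronously...
  console.log(uaData.brands, uaData.mobile);
  // ...high-entropy values must be requested explicitly, so the browser (or
  // user) gets a chance to refuse.
  const details = await uaData.getHighEntropyValues(["platform", "platformVersion", "model"]);
  console.log(details.platform, details.platformVersion, details.model);
}
```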
(Still speaking only for myself)
Why are we creating redundant headers?
Sure a lot of developers abuse the feature but I fear this might create another set of problems.
Let's run through that scenario:
Sites that don't need this info still aren't gonna ask for it or use it. Sites that want it will get it this way, and even if you respond with "no", that's useful to them as well, both for fingerprinting and as a way to fragment features to Chrome only. So, what's changed?
To an extent, sure. But to follow the model of third party cookies, let's say client hints are used extensively instead of user agent and all cross-domain iframes are blocked from client hint sniffing. All the third party iframe is going to be able to detect is whether user has a client hint capable browser or not. That's a big difference from the whole user agent they get today.
The idea is that this won't be a Chrome-specific API. It's been submitted to standards bodies, but Chrome is the first to implement. For example, Firefox have said they "look forward to learning from other vendors who implement the "GREASE-like UA Strings" proposal and its effects on site compatibility" so they're not dismissing the idea, they're just saying "you first".
I have a feeling Google won't do it that way, because they intentionally gimp most of their apps on non-Google browsers for no reason other than to be dicks.
Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/525.13 (KHTML, like Gecko) Chrome/0.2.149.27 Safari/525.13
How did they get so confusing? See: History of the browser user-agent string
Also, last year, Vivaldi switched to using a user-agent string identical to Chrome’s because websites refused to work for Vivaldi, but worked fine with a spoofed user-agent string.
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko)
Sec-CH-UA: "Chrome"; v="74"
Isn't moving this information to a separate Sec-CH-UA header going to make things _more_ messy? Especially if it's in _addition_ to the frozen User-Agent header?
Aren't we still going to have the issue with needing to fake even the new Sec-CH-UA header?
If we're going to freeze the User-Agent header, that's fine, but don't just move the unfrozen info to a separate header. Now you have 2 problems.
Aren't we just making the problem worse?
The individual headers, on the other hand, tell you EXACTLY what the system and browser is.
Ideally, web browsers should attempt to treat the content the same no matter what device you are on. There shouldn't be an iOS-web, and a Chrome-web, and a Firefox-web, and an Edge-web; there should just be the web. In which case, a user-agent string that contains the browser and even the OS only encourages differences between browsers. Adding differences to your browser engine shouldn't be considered safe.
Which brings me to privacy. It's not as if there aren't other ways to try and fingerprint a browser, but the user agent is a big mistake for privacy. It'd be one thing if the user-agent just said "Safari" or "Firefox", but there's a lot more information in it beyond that.
If the web should be the same web everywhere, then the privacy trade-off doesn't make much sense.
Right now when they go out and make their own API changes without consensus (which already happens), it's possible to distinguish the "for Chrome" case and still support the standard. But if there were no User-Agent, and Google wanted to strongarm the whole group into something, and 90% of browsers are Chromium-based, devs will likely just support the Chromium version and everyone else will have no choice but to fall in line.
The web suffers a ton from the “red queen” rule in so many different ways anyway—you have to do a lot of work just to stay in the same place.
I still see a lot of contradictory benchmarks and, apart from some Google apps, I personally have not seen a lot of sites actually leveraging HTTP/2 (including push).
But maybe you did deploy and leverage HTTP/2 on your own website? At your company? Did you use push? Do you use it with a CDN?
Yes, unequivocally. It’s amazing, even without push. The websites that use it are faster, and the development process for making apps or sites that load quickly is much more sane. You don’t have to resort to the kind of weird trickery that pervades HTTP/1 apps.
> Or is it just an improvement for cloud providers that keep pushing the Kool-Aid?
I don’t see how that makes any sense at all. Could you explain that?
> But maybe you did deploy and leverage HTTP/2 on your own website? At your company? Did you use push? Do you use it with a CDN?
From my parent comment,
> Speaking as a consumer of the web, as an individual who runs their own website, and as a developer working at a company with a major web presence.
My personal web site uses HTTP/2. It serves a combination of static pages and web apps. No push. HTTP/2 was almost zero effort to set up, and instantly improved performance. With HTTP/2, I’ve changed the way I develop web apps, for the better.
My employer’s website uses every technique under the sun, including push and CDNs.
I've seen a few CDNs with a demo page that loads a grid of images over HTTP/1 at page load, and then loads the same assets over HTTP/2 on a button click. It indeed shows you a nice speed-up.
Except, when you block the first HTTP/1 load, start with the HTTP/2 load instead, and flush the cache between loads, the speedup vanishes. The test is disingenuous; it is not testing HTTP/2 but DNS caching.
So those kinds of demo sites make me rather cautious. And the tests, for the small-scale workloads I work with, have not been very conclusive.
Do you have serious articles on the matter to recommend? Preferably not CDN providers trying to sell me their stuff.
The demos I’ve seen use different domain names for the HTTP/1 and HTTP/2 tests. This makes sense, because how else would you make one set of resources load with HTTP/1 and the other with HTTP/2? This deflates your DNS caching theory.
I didn’t rely on tests by CDNs, though. I measured my own website! Accept no substitute! The differences are most dramatic over poor network connections and increase with the number of assets. I had the “privilege” of using a high-RTT, high-congestion (high packet loss) satellite connection earlier this year, and the difference was even bigger.
What I like about it is that I feel like I have more freedom from CDNs and complicated tooling. Instead of using a complicated JS/CSS bundling pipeline, I can just use a bunch of <script>/<link>/"@import/import". Instead of relying on a CDN for large assets like JS libraries or fonts, I can just host them on the same server, because it’s less hassle with HTTP/2. If anything, I feel like HTTP/2 makes it easier to make a self-sufficient site.
Finally, HTTP/2 is so dead-simple to set up on your own server, most of the time. It’s a simple config setting.
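To give a sense of scale, here's a minimal sketch of an HTTP/2 origin using Node's built-in http2 module (certificate paths are placeholders; on nginx or Apache it's typically a one-line config change instead):

```typescript
import http2 from "http2";
import fs from "fs";

// Minimal HTTP/2 server sketch. The certificate paths are placeholders;
// browsers only speak HTTP/2 over TLS, so a cert is required.
const server = http2.createSecureServer({
  key: fs.readFileSync("privkey.pem"),
  cert: fs.readFileSync("fullchain.pem"),
});

server.on("stream", (stream, headers) => {
  stream.respond({ ":status": 200, "content-type": "text/plain; charset=utf-8" });
  stream.end(`You requested ${headers[":path"]} over HTTP/2\n`);
});

server.listen(8443);
```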
Are you actually seeing good results from push? I have seen many projects try to use it, but am not aware of any that have ended up keeping it.
(Disclosure: I work at Google)
Push isn’t worth it, from what I understand. I think that’s the conclusion at work.
I _think_ it's working pretty well as far as I can tell.
Nobody today expects identical rendering: people are used to responsive websites, native widgets etc. The problem people are actually experiencing (far less now than in the past) were more serious, such as z-axis ordering differences resulting in backgrounds obscuring content.
If I'm connecting to a site with Lynx, I sure as heck don't want them to try to serve me some skeleton HTML that will be filled in with JS. Because my browser doesn't support JS, or only supports a subset of it.
User Agent being a completely free form field is the real mistake IMO. Having something more structured, like Perl's "use" directive, might have been better.
I can understand why this is a good thing for privacy. Like many things to do with security on the web though, it's just a shame that bad actors have to ruin so many things for legitimate uses. (The recent story on Safari local storage being another example of that...)
There are a variety of scenarios where this comes up (e.g. we ship a site that is rendered, by another vendor, within an iframe, so we have to set SameSite=None on our application's session cookie so that it's valid within the iframe, thus allowing AJAX calls originating from within the iframe to work with our current auth scheme... but only in Chrome 70+ and Firefox, NOT IE, Safari, etc.).
I would not personally rely on this as a substitute or replacement for User Agent by September (Google Chrome 85).
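For reference, the cookie scenario above looks roughly like this in an Express handler (a sketch; the cookie name, value, and route are placeholders, and browsers that mishandle SameSite=None still need their own workaround):

```typescript
import express from "express";

const app = express();

// Sketch of the cookie attributes described above: a session cookie that
// should still be sent from within a third-party iframe. "sessionId" and
// the token value are placeholders.
app.get("/login", (_req, res) => {
  res.cookie("sessionId", "opaque-token", {
    httpOnly: true,
    secure: true,     // SameSite=None is only accepted over HTTPS
    sameSite: "none", // allow the cookie on cross-site (iframe) requests
  });
  res.send("ok");
});

app.listen(3000);
```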
Of course, the reality of the web meant they had to do a bunch of compatibility hacks to get pages to display well.
(Gecko appeared in the original Safari on iPhone UA, IIRC)
(Also kind of silly in that even real browser-fingerprinting setups can be defeated by a sufficiently-motivated attacker using e.g. https://www.npmjs.com/package/puppeteer-extra-plugin-stealth, but I guess sometimes a corporate mandate to block scraping comes down, and you just can't convince them that it's untenable.)
Best I've ever been able to do is implement server-side throttling to force the scrapers to slow down. But I manage some public web applications with data that is very valuable to certain other players in the industry, so they will invest the time and effort to bypass any measures I throw at them.
Of course, sometimes obfuscating how your website works can make it needlessly more complicated, so it's a trade off.
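A crude sketch of that kind of per-IP throttle as Express middleware (the window and request-count numbers are made up for illustration):

```typescript
import express from "express";

const app = express();

// Crude per-IP throttle in the spirit described above: allow a small burst
// per time window, then reject with 429. Numbers are illustrative only, and
// stale entries are never evicted in this sketch.
const WINDOW_MS = 60_000;
const MAX_REQUESTS = 30;
const hits = new Map<string, { count: number; windowStart: number }>();

app.use((req, res, next) => {
  const ip = req.ip ?? "unknown";
  const now = Date.now();
  const entry = hits.get(ip);
  if (!entry || now - entry.windowStart > WINDOW_MS) {
    hits.set(ip, { count: 1, windowStart: now });
    return next();
  }
  entry.count += 1;
  if (entry.count > MAX_REQUESTS) {
    res.status(429).set("Retry-After", "60").send("Slow down");
    return;
  }
  next();
});

app.listen(3000);
```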
What a bunch of turds.
Thank you for the Nintendo Switch pro-tip.
For example, I believe they REALLY want us to use the YouTube app:
- Viewing youtube.com on a new iPad Pro, Goolag lies and says "your browser doesn't support 1080p."
- Ok, change to desktop version in app. Goolag once again lies and says "your browser doesn't support full screen." They also lie and say they've redirected you to the "desktop version", and nag you with a persistent banner that you should return to the safety of the mobile website.
If I were to use the app, they would have FULL CONTROL.
The same is true for server side applications of user-agent. There are plenty of non-privacy-invading reasons to need an accurate picture of what user agent is visiting.
And a lot of those applications that need it are legacy. Updating them to support these 6 new headers will be a pain.
I'm all for preventing tracking, but I can't imagine a time when all browsers behave so similarly that we won't have to write workarounds for browser bugs and differences. As a developer I can't imagine caring about Edgium vs Chrome, but it's important to know what the underlying engines are.
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko)
Sec-CH-UA: "Chrome"; v="74"
It's already bad infrastructure design to have the server do different renderings depending on `User-Agent` value.
Try browsing the web without any UA header for a week or two, and you'll understand. You get blank pages, strange server errors, and other weird behaviour --- almost always on very old sites, but then again, those also tend to be the sites with the content you want. Using a UA header, even if it's a dummy one, will at least not have that problem.
(I did the above experiment a long time ago - around 2008-2009. I'm not sure whether sites which expect a UA have increased or decreased since then.)
I agree with getting rid of all that new noise, however.
...and effectively block access to a bunch of existing content on the Internet, still very valuable, whose owners may not have the effort to spare to make any changes.
> for best performance this website would like to know what type of device you are using?
While requesting every single "hint", with an "OK" button and a tiny greyed-out "read more or decline this request" line.
I remember when I was writing a library around the audio API, and the ways it behaved on Chrome were different across Macs, Windows and Android. Detecting the OS with the user-agent string was literally the only way to build code that would work.
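As a sketch, that kind of coarse OS check usually ends up as substring matching on navigator.userAgent, something like this (the workaround values at the end are hypothetical):

```typescript
// Coarse OS detection of the kind described above, via substring checks on
// navigator.userAgent. Brittle by nature, but sometimes the only signal
// available when an API misbehaves per-platform.
type DetectedOS = "mac" | "windows" | "android" | "other";

function detectOS(ua: string = navigator.userAgent): DetectedOS {
  if (/Android/i.test(ua)) return "android"; // check before anything generic
  if (/Mac OS X|Macintosh/.test(ua)) return "mac";
  if (/Windows NT/.test(ua)) return "windows";
  return "other";
}

// Example: pick a platform-specific workaround for an audio quirk.
// The buffer sizes here are hypothetical, purely for illustration.
const os = detectOS();
const bufferSize = os === "android" ? 4096 : 1024;
```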
This pattern keeps repeating itself, freeze "Mozilla/5.0", start changing "Chrome/71.1.2222.33", freeze that, start changing "Sec-CH-UA", etc. Browsers will start needing to fake "Sec-CH-UA" to get websites to work properly, etc.
What the heck is the CPU architecture good for?
3 months ago: https://news.ycombinator.com/item?id=21781019
1 year ago: https://news.ycombinator.com/item?id=18564540
> Blocking known bots and crawlers
Currently, the User-Agent string is often used as a brute-force way to block known bots and crawlers. There's a concern that moving "normal" traffic to expose less entropy by default will also make it easier for bots to hide in the crowd. While there's some truth to that, that's not enough reason for making the crowd be more personally identifiable.
This means that consumers of the Google Ad stream have one less tool to identify bots, and will pay Google for more synthetic traffic, impressions and clicks; this could be a huge revenue boost for Google. A considerable amount of their traffic is synthetic. I doubt this was overlooked.
I wonder what Amazon will do. They serve completely different sites from the same domain after UA-sniffing for mobile.
Is the web just going to turn into blank landing pages that require JS to detect the screen size and/or touch support and then redirect accordingly?
Or is every initial/landing page going to be bloated with both the mobile and desktop variants?
That sounds god-awful.
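The detect-and-redirect variant would presumably look something like this (the m. mobile hostname is hypothetical):

```typescript
// The kind of client-side detection-and-redirect the comment above worries
// about: sniff screen size and touch support, then bounce to a mobile host.
// The "m." subdomain is hypothetical.
const isSmallScreen = window.matchMedia("(max-width: 768px)").matches;
const hasTouch = "ontouchstart" in window || navigator.maxTouchPoints > 0;

if (isSmallScreen && hasTouch && !location.hostname.startsWith("m.")) {
  location.replace(`https://m.${location.hostname}${location.pathname}${location.search}`);
}
```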
There's not a "right" and a "wrong" here; it's about trade-offs.
You're either stripping things down to the lowest common denominator (and leaving nothing but empty space on desktop) or you're wasting a ton of mobile bandwidth by serving both versions on initial load (the most critical first impression).
You frequently cannot simply squeeze all desktop functionality from a 1920px+ screen onto a 320px screen - unless you have very little functionality to begin with. Amazon (or any e-commerce/marketplace site) is a great example where client-side responsiveness alone is far from sufficient.
https://www.walmart.com/ does it okay, but you can see how much their desktop site strips down to use the same codebase for desktop and mobile.
And how do these grown-up developers feature-detect when JS is disabled? Or are they too "grown-up" to deal with anything but the ideal scenario?
> I'd be surprised if that's how Amazon is doing it still.
Why don't you go there and open up your "grown-up developer" devtools.