Hacker News

Trying to be charitable here: could this be a debug/test artifact that inadvertently got into production?


Unlikely. Google has been breaking non-Chromium (or sometimes even just non-Google Chrome) browsers for years on YouTube and their other websites. It was especially egregious when MSFT was trying their own EdgeHTML/Trident-based Edge. Issues would go away by faking user-agent.


> It was especially egregious when MSFT was trying their own EdgeHTML/Trident-based Edge. Issues would go away by faking user-agent.

Why is there more than one user-agent? Does somebody still expect to receive different content based on the user-agent, and furthermore expect that the difference will be beneficial to them?

What was Microsoft trying to achieve by sending a non-Chrome user-agent?


User agents are useful. However, they tend to be abused far more often than they are used effectively.

1. They are useful for working around bugs. You can match the user agent to apply workarounds on known-buggy browser versions. Ideally this would be a handful of specific matches (like Firefox versions 12-14). You can't do feature detection for many bugs because they may only trigger in very specific situations. Ideally this blacklist would contain only confirmed entries, with each new browser version manually tested to see whether it has the same problem. (Unfortunately these lists often end up open-ended, because retesting every new release for a bug that isn't on the priority list is tedious.)

2. Diagnosing problems. Often you see that some specific group of user agents is hammering an API or failing to load a page. It is much easier to track this down when the user agent is a precise identifier of the client for which your site doesn't work correctly.

3. Understanding users. For example if you see that a browser you have never heard of is a significant amount of traffic you may want to add it to your testing routine.

But yes, the abuse of `if (/Chrome/.test(navigator.userAgent)) { mainCode() } else { untestedFallback() }` is a major issue.
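To make the contrast concrete, here's a minimal sketch. The Firefox 12-14 range comes from the example in point 1 above; the function names and the workaround itself are hypothetical:

```javascript
// Extract the Firefox major version from a user-agent string,
// or return null if it isn't Firefox.
function firefoxMajorVersion(ua) {
  const m = /Firefox\/(\d+)/.exec(ua);
  return m ? Number(m[1]) : null;
}

// Good: a narrow, closed-ended match for a confirmed bug.
// (Versions 12-14 here stand in for a hypothetical buggy range.)
function needsScrollWorkaround(ua) {
  const v = firefoxMajorVersion(ua);
  return v !== null && v >= 12 && v <= 14;
}

// Bad: the open-ended sniff from the comment above --
// every non-Chrome browser lands in an untested branch forever.
function badDispatch(ua) {
  return /Chrome/.test(ua) ? "mainCode" : "untestedFallback";
}
```

The point is the shape of the check: the first one expires on its own once the buggy versions die out, while the second silently punishes every browser the author didn't think about.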


Only option 1 is something that users, who are the people who decide what user-agent to send, might care about. And as you yourself point out, it doesn't happen.


I'm pretty sure that users care that websites can fix bugs affecting their browser. In fact option 1 is very difficult to actually implement when you can't figure out which browser is having problems in the first place.


Why do you think users wouldn't care about sites diagnosing problems that are making pages fail to load (#2) or sites testing the site on the browser that the user uses (#3)?


It is normal practice for each browser to have its own user-agent, no? But the fact that Google intentionally detected it and served polyfills or straight-up invalid JS at the time was insane. A similar spin today is the "Your browser is unsupported" banner you see here and there. When a major platform such as YouTube does it, it is really impactful.

It would never do feature detection, would serve lower-quality h264 video, etc. Back then, there was a really nice third-party application, myTube, which made this less of an issue, but it was eventually killed through API changes.


It may have been intended to be a normal practice, but as far back as IE vs. Netscape, everyone has been mucking with user agents for anti-competitive (and counter-anti-competitive) reasons.


> Trying to be charitable here [...]

There is no reason for charity with such a large power difference. For Firefox, "bugs" like this can really end up being a lost one-shot game.

It's like people walking by and casually reaching for your phone. It's always meant as a joke, unless you don't pull it away fast enough. Then suddenly it wasn't a joke - and your phone is gone.

This is not rooted in any reservation against Google in particular. If you are a mega-corporation with the power to casually crush competitors, you should really want to be held to a high standard. You do not want to be seen as the accidentally-fucking-others-up-occasionally kind of company.


Without studying the minified code, I wouldn't assume malice just yet. This could be just an inexperienced developer trying to lazily fix some browser-specific bug, or something that accidentally made it to production, like you say.


You think they let inexperienced developers touch the YT code base without proper code review? Even if that were the case, which is an extremely charitable assumption, that itself would be malice in my opinion.


> You think they let inexperienced developers touch the YT code base

Uh, yes? We were all inexperienced at some point. Just the linked file is like 300k lines of unminified code; I doubt it's all written by PhDs with 20 years of experience.


Some would argue that holding a PhD does not necessarily guarantee half-decent engineering skills.


It's the "without proper code review" part that I consider malice, not being inexperienced.


> You think they let inexperienced developers touch the YT code base without proper code review?

Yes


YouTube is way too stable for that to be the case.


lol

This reply is for everyone who has ever worked on the codebase...


Should be: LOL LGTM


there is such a thing as overextending the benefit of the doubt, to the point that malicious actors will abuse it.


It could even just be a timeout as part of retry logic or similar. A lot of people seem to be saying that there is no good reason to have a `sleep` in a production application, but there are many legitimate reasons to delay execution of some code for a while.
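For instance, a retry loop with exponential backoff is a perfectly ordinary reason for a deliberate delay to show up in production code. A minimal sketch (the function names and defaults are made up):

```javascript
// Promise-based delay: the "sleep" that people object to.
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Retry a flaky async operation, waiting longer between each attempt.
async function withRetries(fn, attempts = 3, baseMs = 100) {
  let lastErr;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      // Sleep only between attempts, and for a reason we can name:
      // 100ms, 200ms, 400ms, ... (exponential backoff).
      if (i < attempts - 1) await delay(baseMs * 2 ** i);
    }
  }
  throw lastErr;
}
```

Minified and viewed out of context, the `delay` call in something like this looks exactly like an arbitrary sleep.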


As the saying goes: "we like naked girls, not naked sleeps". Even the interns should know that a naked sleep is just bad - it doesn't fix anything.
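A sketch of the difference (names and timings are made up): a naked sleep guesses a duration and hopes, while polling the actual condition with an explicit timeout at least makes the intent visible and fails loudly:

```javascript
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Naked sleep: why 2000ms? Nobody knows, and it either wastes
// time or races, depending on the machine.
async function nakedWait() {
  await sleep(2000);
}

// Better: wait for the actual condition, with an explicit timeout.
// Returns true once the condition holds, false if the deadline passes.
async function waitFor(condition, { timeoutMs = 2000, intervalMs = 50 } = {}) {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (await condition()) return true;
    await sleep(intervalMs);
  }
  return false;
}
```

The sleep is still there, but now it documents *what* is being waited on instead of being a bare magic number.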


If, at YouTube's size, they do not test on Firefox, that is as much malice as doing it deliberately.



