
From reddit discussion (https://www.reddit.com/r/firefox/comments/17ywbjj/comment/k9...):

> To clarify it more, it's simply this code in their polymer script link:

> setTimeout(function() { c(); a.resolve(1) }, 5E3);

> which doesn't do anything except making you wait 5s (5E3 = 5000ms = 5s). You can search for it easily in https://www.youtube.com/s/desktop/96766c85/jsbin/desktop_pol...




That is not correct. The surrounding code gives some more context:

    h=document.createElement("video");l=new Blob([new Uint8Array([/* snip */])],{type:"video/webm"});
    h.src=lc(Mia(l));h.ontimeupdate=function(){c();a.resolve(0)};
    e.appendChild(h);h.classList.add("html5-main-video");setTimeout(function(){e.classList.add("ad-interrupting")},200);
    setTimeout(function(){c();a.resolve(1)},5E3);
    return m.return(a.promise)})}
As far as I understand, this code is part of the anti-adblocker code that (slowly) constructs an HTML fragment such as `<div class="ad-interrupting"><video src="blob:https://www.youtube.com/..." class="html5-main-video"></video></div>`. It will detect the adblocker once the `ontimeupdate` event hasn't fired for 5 full seconds (the embedded webm file itself is 3 seconds long), which is the actual goal of this particular code. I do agree that the anti-adblocker attempt itself is still annoying.
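
For readability, here is roughly the same logic deminified (a sketch, not YouTube's actual source: the names `detectAdblock` and `webmBytes` are mine, the `c()` cleanup call is omitted, I'm assuming `lc(Mia(l))` amounts to `URL.createObjectURL`, and whatever starts playback sits outside the quoted snippet):

    // Hypothetical reconstruction of the detection race. If the blob video
    // plays, ontimeupdate fires and the promise resolves with 0 almost
    // immediately; if an adblocker interferes with the "ad-interrupting"
    // video, only the 5-second timeout path is left.
    function detectAdblock(container, webmBytes) {
      return new Promise(resolve => {
        const video = document.createElement('video');
        const blob = new Blob([new Uint8Array(webmBytes)], { type: 'video/webm' });
        video.src = URL.createObjectURL(blob);  // presumably what lc(Mia(l)) does
        video.ontimeupdate = () => resolve(0);  // playback progressed: no blocker
        container.appendChild(video);
        video.classList.add('html5-main-video');
        setTimeout(() => container.classList.add('ad-interrupting'), 200);
        setTimeout(() => resolve(1), 5E3);      // no progress after 5s: blocker suspected
      });
    }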


For completeness, the omitted Uint8Array is the following 340-byte binary (here in base64):

    GkXfo59ChoEBQveBAULygQRC84EIQoKEd2VibUKHgQRChYECGFOAZwH/////////FUmpZpkq17GD
    D0JATYCGQ2hyb21lV0GGQ2hyb21lFlSua6mup9eBAXPFh89gnOoYna+DgQFV7oEBhoVWX1ZQOOCK
    sIEBuoEBU8CBAR9DtnUB/////////+eBAKDMoaKBAAAAEAIAnQEqAQABAAvHCIWFiJmEiD+CAAwN
    YAD+5WoAdaGlpqPugQGlnhACAJ0BKgEAAQALxwiFhYiZhIg/ggAMDWAA/uh4AKC7oZiBA+kAsQEA
    LxH8ABgAMD/0DAAAAP7lagB1oZumme6BAaWUsQEALxH8ABgAMD/0DAAAAP7oeAD7gQCgvKGYgQfQ
    ALEBAC8R/AAYADA/9AwAAAD+5WoAdaGbppnugQGllLEBAC8R/AAYADA/9AwAAAD+6HgA+4ID6Q==
VLC somehow refuses to play it, but its nominal length can be verified with a short JS snippet like:

    const v = document.createElement('video');
    v.src = `data:video/webm;base64,<as above>`;  // paste the base64 above
    await new Promise(resolve => v.onloadedmetadata = resolve);
    console.log(v.duration);  // the nominal duration in seconds (about 3)


I tried to reproduce the 5s wait in Firefox in multiple scenarios (various combinations of being logged in / not logged in, with / without adblocker) and couldn't trigger it in any of them; playback started immediately in each case (without an adblocker, I used a second video so that one would start without an ad). I tested on Linux.

What exact combination of circumstances is required to trigger the multi-second wait time?


I just tested this in Firefox on Ubuntu: an initial load plus three subsequent new-tab tests.

Load: 4.34s, 5.14, 2.96, 3.35

DOMContentLoaded: 3.65s, 4.56, 2.92, 3.33

Finish: 13.14s, 10.77, 8.49, 12.02

So it's getting a bit faster over time, but still heinous, and crucially, it isn't hanging on requests. Individual asset GET/POST requests are taking tens of ms, worst was a few parallel 254ms GETs on a cold start. Usually 50-70ms. But there is a flurry of requests, then a period of very few requests until 5s after init, then another flurry.

Firefox 119.0, Ubuntu 22.04, uBlock Origin, Privacy Badger.

Same OS, Chrome 115.0.5790.170, no blockers: YouTube is much snappier. It still definitely takes a few seconds to paint thumbnails, but it's basically done by 5s. DOMContentLoaded is never more than 1.75s, finish <8s.

Firefox private window with blockers off has similar time statistics. But actually doubleclick.net is still getting bounced.


I tested in Firefox (uBlock), LibreWolf (uBlock), Safari (AdGuard), and Chromium (no ad blocker), and the initial home page load takes a couple seconds, but I never witnessed a 5s delay. I would say it was actually fastest in Firefox for me, but that may have just been a result of some caching. I am a premium subscriber and have never seen a warning for using an ad blocker, so I'm not sure if premium subscribers get a pass.


I can't reproduce this either. YT on FF plays immediately for me.


I am experiencing the delay on both Firefox and Waterfox.


It is still better to wait 5s without an ad than with one.


It has to be a background check, otherwise you can't explain cases (like me) where the code is running but users never noticed any delay.


I wonder if it is just a coincidence that 5s is the time before a skippable ad becomes skippable?


Either wait 5 seconds without ad, or get served an ad about switching to Chrome


Okay, I'm sold on the delay, but where's the code that detects non-chrome?

Do they serve different JS based on the user agent header? If they delay Chrome too, there's no foul.


Just going off this tweet, it seems to be user-agent based: https://fixupx.com/endermanch/status/1726605997698068630


If YouTube are going to go down this path, then perhaps Firefox devs should set the user agent to Chrome for YouTube?


They delayed Chrome too. At least for me.


Why is it only trying to detect ads when the user agent is Firefox?

https://old.reddit.com/r/firefox/comments/17zdpkl/this_behav...


Probably because there are other methods for Chrome that don't apply to Firefox.

Like when I noticed that some sites did URL-rewriting trickery on Firefox and other browsers, but not on Chrome. The trick is to show you the proper URL the link points to, but as you click, it is substituted with one that is a redirection, for tracking purposes (e.g. "https://l.facebook.com/l.php?u=http://actualsite..."). On Chrome they don't need this trick, as the browser supports the "ping" attribute on links, so they can do their tracking without rewriting the URL.
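
For illustration, the two approaches might look roughly like this (a sketch; the URLs, the `a.tracked` selector, and the endpoints are made up):

    <!-- Chrome path: the ping attribute fires a background request on click,
         so the visible href can stay clean. -->
    <a href="https://actualsite.example/" ping="https://tracker.example/click">link</a>

    <script>
    // Firefox-style fallback (hypothetical): show the clean URL, then swap in
    // a redirector at the last moment so the status bar never shows the hop.
    document.querySelectorAll('a.tracked').forEach(a => {
      const clean = a.href;
      a.addEventListener('mousedown', () => {
        a.href = 'https://l.example.com/l.php?u=' + encodeURIComponent(clean);
      });
    });
    </script>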


This kind of BS is why I don't ever click on links directly. I copy/paste them instead, so I can examine and trim them. Often, the actual link is through some sort of redirection service and I need to copy/paste the text the browser shows for the link rather than the actual link.

There's so much trickery and nonsense around this stuff that no link is safe to just click on.


Check out the Privacy Badger extension. I believe it removes the tracking stuff from (some) links.


There are also dedicated link cleaning extensions (I use Neat Url).


You actually don't need any dedicated extensions for that, as this functionality is built into uBO; you just need to find a filter list (search for "ublock origin clearurl list" or whatever).


Does it work on links copied from the page for chats?


I use privacy badger, but it doesn't cover everything so I end up having to manually check all links anyway.


I've also noticed this behavior popping up a lot lately, but I had no idea why. The URL with tracking included was still blocked by uBlock Origin, but having to manually copy-paste the relevant portion was an annoyance.

Thanks for the context!


Check out ClearURLs extension.


Wow, that is pretty disgusting behavior.


The web developer interprets missing features as damage and polyfills around them.


I have no idea, because I didn't experience anything like that in either Chrome or Firefox (both with uBO, though). But I'm confident that this particular code is not related to the actual slowdown, if it did happen to some Firefox users, because I received the same code even in Chrome.


Does Firefox allow a wider range of plugins, including adblockers?


Yes, Chrome is severely hobbled in this regard by comparison.


Yes, there are plenty. You can have a look here: https://addons.mozilla.org/en-US/firefox/


This is just an anecdote, but sometimes (especially when I'm on slower internet) Safari + AdGuard will glitch [0] on YouTube. It never happened with Firefox + uBlock Origin.

[0] Unable to press play; an image with an ad is shown instead.


I experience the same glitch, and I like it: you can just reload the page (cmd-r) and then the video starts, so once you're used to it you can skip ads in less than a second, and you don't get annoyed by the ad sound/video, just an image.


I would suspect because Google can do the detection in Chrome itself, but not in Firefox.


if it’s anti-adblock, does it run even with premium?


[flagged]


This gives us some background, but it's still slowing down Firefox with no relation to the video content.


It's radically different.


How come switching the User Agent to Chrome fixed it for that Reddit OP? Does it omit this code if the UA is changed?


When they first introduced anti-adblock crap, you could evade the banner by switching UAs. I'd say it's fair to assume that switching UAs triggers some other code path and this function never gets called.


I'm not even mad about Google making me artificially wait 5s for using Firefox.

I'm mad that such a big company, with supposedly decent engineers, is making me wait 5s with literally a sleep. How is it even possible to do such a thing in such a rudimentary way? I'd want to go "damn, that was smart", but this feels like... seriously, this is the level?


IMHO, this kind of thing is not done by engineers.

    * Marketing/Sales asks engineers to add a feature flag to sleep N milliseconds for their research: "how does slowing down impact your revenue?"
    * Engineer adds the flag, with different control parameters
    * Some genius in Product figures this out and updates the experiment to slow things down for competitors
When the company gets a backlash from the public: "oops, we forgot to clean up all the parameters of the feature flag and it accidentally impacted Firefox."



Google stopped testing stuff in Firefox; that is all they did, AFAIK. We all should know how many bugs and "oopsies" you get when you don't test before releasing new features: test code snippets being pushed to prod, etc.

Engineers tend to create paper trails on what they work on; code reviews and bug logs are everywhere, so I doubt there is any record where they say "make things shit for Firefox to hurt our competitors", as that would net them an easy loss in court. But not testing in browsers with small userbases will hold up in court.


Firefox has a small userbase partly because of the early "oopses" described in the article I linked. Those happened a while ago, when Firefox had more users than Chrome.


Chrome was bigger than Firefox by 2012, the accusations that Google intentionally made things worse for Firefox came many years after that.


But they referred to behaviour that was present pretty much from the start. It's just that Mozilla folks were extremely tolerant and assumed good faith for a very long time.

Google have been disgustingly anticompetitive for a very, very long time at this point.


Yeah, one of the biggest examples being the HTML5 video push and Chrome's claims around H.264: Google promised they were going all in on WebM and would remove H.264 support soon, but never did. That meant that Firefox users got more errors outright, but also that for years even sites like YouTube would leave Firefox at 100% CPU, laptop fans on high, doing software WebM decoding while Chrome users got hardware-accelerated H.264. That became moot after Mozilla and Cisco struck that deal and hardware acceleration for other video formats shipped, but there was a multi-year period where Firefox suffered badly in comparison to other browsers.


Another person is claiming that Google writes custom code for Firefox (or other browsers) to enable tracking, because of the feature difference between Firefox and Chrome [1]. Only one of you can be correct.

[1] https://news.ycombinator.com/item?id=38347364


The company is big enough for both of them to be correct.

I have firsthand knowledge that Cloud, for instance, did not test regularly directly on Firefox. The team couldn't justify the cost of setting up and maintaining a FF test suite to satisfy 1 in 50 users, so they didn't (and nobody up-chain pushed back on that). Testing was done regularly on Chrome, Safari, and Edge, as per the user counts and "top three browser" guidance (at the time, we didn't even test regularly on mobile, since there was a separate mobile client).

But the analytics team? I'm sure they test directly on Firefox. They're just testing an entirely different piece of the elephant and their end-to-ends won't include how, for example, changes they make interoperate with Firefox in the context of Cloud. Or YouTube. Or etc. Not unless they have a specific reason to be concerned enough to.

Google's like any other megacorp in the sense that costs of cross-team coordination are combinatoric.


Nah, they're totally incentivized to make sure tracking works while still having plenty of oopsies that could cause people to switch.


This should be a top level comment on news like this. Everyone needs to be reminded that this is neither a new behavior nor something unintentional.


Very good point. It's important to recognise that developers in many companies are often not fully aware of the intended use of features they're asked to create.

Another example that springs to mind is Uber, who used a tool called "Greyball" to avoid contact between drivers and authorities: https://www.reuters.com/article/uk-uber-greyball-idUKKBN16B0...

My initial reaction was astonishment that the engineers would happily implement this. And maybe that is what happened. But the alternative possibility is that product and senior management assigned different parts of the feature to different teams e.g. one team develops a pattern recognition system to detect users' professions, another team develops a spoofing system for use in demos, etc...


Why would you be surprised that they'd implement this? It's their job to implement things.


They were using it to evade law enforcement while flouting regulations. It's highly unethical and almost certainly illegal.


Oh I thought you were referring back to the YouTube issue


Tbh even that is ethically very questionable, if the engineers knew that the outcome would be a delay specific to Firefox.


> * Marketing/Sales asks engineers to add a feature flag to sleep N milliseconds for their research: "how slowing down impacts your revenue"

“Research”


They have done such research before; Google published this at a time when developers were all "100 ms more or less of web load time doesn't matter". Since then webpages have become much more focused on performance.

https://blog.research.google/2009/06/speed-matters.html


The dog slow load times of ad infested AMP pages would suggest otherwise.


The prevailing developer discussion going from "load speed doesn't matter, stop complaining about useless stuff" to "load times matter, but here we choose to make it slow for other reasons" is a massive improvement though. Today speed is valued; it wasn't back then.

There are many such tests being written about in blogs today. So now a developer can get time to optimize load times based on those blog posts, while before, managers would say it was worthless.


Untrue. I optimized pages pre-2000, and it has always mattered.

It's always, always mattered. If anything, people care less today, with the ridiculous 100 requests per page.


Of course it always mattered. But at the time lots of people argued it didn't matter, which is why the headline is "Speed matters". You thinking it did matter at the time doesn't mean the general community thought so.


But the general community did care about speed. Everyone worked towards small load times, optimized (for example) image size for optimal load time, everyone cared.

Whoever didn't care was weird.


AMP pages load way, way faster IME


Not as fast as with 90% of JS blocked. That's how the web was supposed to work, not downloading 50 MiB on every hyperlink.


Researching how best to fuck with your competitors.


Next: researching regulatory capture?


This doesn’t add up.

In order for someone to slow down the browser, they need someone to have coded the following:

- UA Detection

- Branching for when the flag is on or off

- A timeout that only runs when these two things are true

That takes an engineer to do the work. Marketing and product managers are certainly not writing this code.

If they're abusing a different flag, then the real question I have is what that flag's purpose is and why it's screening for Firefox.

Either way, there is an intent to check the UA and throttle based on it, and that takes an engineer.


Not so hard to believe, though. I work on a product that has parametrized feature flags. This means that, from a web interface, someone can say things like "activate feature X on machines running operating system Y, at version Z, running product version W with license type Q". This is not a hard thing to build, and once you have it, you can mix and match filters without being a software engineer or knowing how it works behind the scenes.
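
A minimal sketch of what such a parametrized flag might look like (everything here, from `slow_start_experiment` to `flagParams`, is hypothetical):

    // A rule like this can be edited from a web UI by non-engineers once the
    // generic evaluator below has shipped.
    const rule = {
      feature: 'slow_start_experiment',
      enabled: true,
      match: { browser: 'Firefox' },   // arbitrary targeting filters
      params: { delayMs: 5000 },
    };

    // Returns the flag's parameters if the client matches, else null.
    function flagParams(rule, client) {
      const hit = rule.enabled &&
        Object.entries(rule.match).every(([key, value]) => client[key] === value);
      return hit ? rule.params : null;
    }

    // flagParams(rule, { browser: 'Firefox' })  ->  { delayMs: 5000 }
    // flagParams(rule, { browser: 'Chrome' })   ->  null

Once the evaluator exists, adding or retargeting a rule is a data change, not a code change, which is exactly why no engineer needs to be in the loop.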


Because it works.

Good engineering isn't about being obtuse and convoluted, it's about making stuff that works.


When the purpose is to abuse your monopoly to further your business interests in another area, being obtuse and convoluted to get plausible deniability is good engineering. This is just sloppy.


I think this is a good example of corporations being made up of people, rather than being contiguous coordinated entities as many of us sometimes think of them.

An engineer doing "good engineering" on a feature typically depends not only on them being a "good engineer" but also on them having some actual interest in implementing that feature.


I would imagine that in a well coordinated company engaging in this kind of thing, the order wouldn't be "slow down firefox", but something along the lines of "use XYZ feature that firefox doesn't support and then use this polyfill for FF, which happens to be slow". Something that doesn't look too incriminating during any potential discovery process, while still getting you what you want.


That's assuming a degree of engineering competency at the product decision making level that is usually absent in companies that are structured as Google is, with pretty strong demarcations of competencies across teams.


Nah, that's got a risk profile. They could implement whatever your strategy is in the next release. You aren't going to necessarily get the longevity of the naive approach.

Plus a Firefox dev would discover that more easily as opposed to this version which they can just dismiss as some JavaScript bug on YouTube's part


That's the beautiful thing: you make the polyfill contingent on the browser being Firefox rather than probing for the feature, and then you "forget" to remove it once they implement the feature.


But why do you have to be that clever? If you're caught the consequences are the same regardless and both implementations would exhibit equivalent behavior.

The only superior approach here would be one that is consistent enough to be perceived but noisy enough to be robust to analysis.

Also it should be hidden on the server side.

Who knows, maybe there are a bunch of equivalent slow downs on the server side in the Google property space.

Given this discovery it would probably be reasonable to do some performance testing and change the user agent header string of the request.

Google docs, image search and Gmail operations would be the place to hide them.


I dunno. How long has it been there without anybody noticing?

5 years? 7? Longer?

No matter how they approached it, you could demonstrate the pattern through the law of large numbers regardless. Might as well make the implementation straightforward.


Using an idle timer, like window.requestIdleCallback [1], is good engineering. If anything passes that's not good engineering, it's laziness.

I'm not even a JS programmer, but I know about timers; an idle wait is a common pattern in UI programming. It's the attitude of mediocre engineers not bothering to look up or learn new things.

If every OS/browser/stock market dev did what they want "because it works" we don't have a working system. We'll have systemic lags making the system sluggish and eventually unusable as more engineers follow the same mantra.

[1]: https://developer.mozilla.org/en-US/docs/Web/API/Window/requ...
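
For reference, a sketch of the idle-timer approach the parent suggests (`doDeferredWork` is a placeholder):

    // Run deferred work when the event loop is idle, or after at most 5s on
    // a busy page. Unlike setTimeout(fn, 5000), it does not always wait the
    // full 5 seconds.
    const doDeferredWork = budgetMs => { /* placeholder for the real work */ };
    requestIdleCallback(deadline => {
      doDeferredWork(deadline.timeRemaining());
    }, { timeout: 5000 });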


Nah, then it doesn't work.

"It works" is The high engineering bar and it's the hard one to hit.

Oftentimes it's replaced these days with imagined complexity, ideological conformity or some arbitrarily defined set of virtues and then you get a really complicated thing that maybe works some of the time and breaks in really hard to understand ways.

Transcompiled frameworks inside of microservices talking to DBMS adapters over virtual networks to do a "select *" from a table and then pipe things in the reverse direction to talk to a variety of services and providers with their own APIs and separate dependencies sitting in different microservices as it just shepherds a JSON string through a dozen wrapper functions on 5 docker containers to just send it back to the browser is The way things are done these days. This is the crap that passes for "proper" engineering. Like the programming version of the pre-revolutionary French Court.

A simple solution, fit for purpose, that works as intended and is easy to understand, remove, debug and modify, with no bus-factor problem: that's the actual high-end solution, not the spaghetti stacked as lasagna that is software haute couture these days.

Sometimes, in practice, the dumb solution can also be the smart one. True mastery is in what you choose Not to do.


I agree with the spirit of your comment; I too hate over-engineering. Choose your battles is an important step in mastery, yes, but being lazy can't be chalked up to mastery.

In this particular case I disagree with using `sleep`; with the idle timer it's not as roundabout as you put it (_transcompiled frameworks inside of microservices talking to DBMS adapters over virtual networks_). It's a straightforward callback: some lower-level timekeeper signals you and you do your thing. It's nowhere close to the convoluted hoop-jumping you describe.

Mastery comes with balance: putting in the optimal effort, not more, not less. Of course, it depends on what one is trying to master: the job or programming. The former means doing the minimum and getting maximum benefit from your job/boss; the latter means enjoying learning/programming and arriving at the most optimal solution (for no reason, just because you're passionate).


Speaking as someone who only very occasionally does browser related programming, what is the supposed sin committed here by implementing it this way?


In programming in general, sleeps are generally considered....(I'm lacking the word)...distasteful?

If your code needs to wait for something, that's better done with some sort of event system or interrupt or similar. The reason is that a 5s wait is a 5s wait, but if, say, the thing you're waiting for returns in 10ms, an event-based solution lets you carry on immediately instead of waiting out the remaining 4.99 seconds. Conversely, if it takes longer than 5s, who knows what happens?
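
In JS terms, the usual fix is to race the awaited event against the timeout, so the fast path returns immediately (a sketch; any `<video>` element serves to illustrate):

    // Resolves as soon as the event fires; the 5s timer is only a fallback.
    // If the event arrives in 10ms, this finishes in 10ms, not 5 seconds.
    const video = document.querySelector('video');
    const result = await Promise.race([
      new Promise(resolve => { video.ontimeupdate = () => resolve('played'); }),
      new Promise(resolve => setTimeout(() => resolve('timeout'), 5000)),
    ]);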


Sure, but assuming we take it as face value that this is a straightforward attempt to force a UX-destroying delay, I don't see what makes this so terrible. It's meant to force a 5 second wait, and it does it. Problem solved.


The 5-second wait is the issue, not the means by which it was obtained -- a fixed wait time either wastes the user's time (by making it take longer than necessary) or is prone to bugs (if the awaited task takes >5 seconds, the end of the timer will likely break things). The better question is _why_ a 5-second wait was necessary, and there's almost certainly a better way to handle that need without the fixed wait time.


OPs point, I think, is that wasting the user's time is part of the point of the code. This specific code seems partially meant as a punishment of the user for using an adblocker.


*for using firefox instead of google's own browser.


That's somewhat in debate, the last I saw. The initial report was that it affected a user on Firefox, and that it didn't once they switched user agents. Since then, there have been reports of users not seeing it in Firefox but seeing it in other (even Chromium-based) browsers. So it seems likely they are A/B testing it, but it's less clear whether they are intentionally targeting non-Chrome browsers.

Their goal, quite clearly, is to prevent (or at least heavily discourage) adblockers. This is one attempt to detect them, and maybe in Chrome they have a different detection mechanism so it doesn't show the same behavior.

It would be a particularly foolish move on their part to push Chrome by punishing everything else right now, while they are in the middle of multiple anti-trust lawsuits. It makes me think that is unlikely to be the intent of this change.


> In programming in general, sleeps are generally considered....(I'm lacking the word)...distasteful?

Hmmm.....

In programming in general, Javascript is generally considered....(I'm savouring the word)...distasteful?

Yea, nah. I put a sleep in a Javascript/Dart bridge the other day... We can do better. I can do better.


They are a lazy man's solution to race conditions that does not actually solve the problem of race conditions; it only makes them less likely to cause a problem, at an often extreme cost to responsiveness, as seen here.


I don't know if this is what was meant, but my assumption is that it is quite brazen and crude.

But then I think of some alternative method where they send an ajax request to "sleep.google.com/?t=5" and get a response like "" after five seconds.


Yep, curious to know the same thing myself.


For one, they didn't use React.


You're mad that they're using a function for its intended purpose?


It is not literally a sleep though; isn't setTimeout more like creating a delayed event? (I am not a webdev)


You can't directly do a sleep in JavaScript because it runs in the same thread as the UI; it would block the user from interacting with the page. This is effectively a sleep because after 5 seconds it runs the code in the passed-in function (rather than firing an event). The code in the function then resolves a promise, which runs other functions that can be specified later by whatever called the function using setTimeout.
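
For what it's worth, the usual non-blocking "sleep" idiom wraps setTimeout in a promise, which is essentially the shape of the YouTube code (a generic sketch):

    // Only the async function awaiting this is suspended; the UI thread
    // keeps handling events in the meantime.
    const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));
    await sleep(5E3);  // continue roughly 5 seconds later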


That's Javascript for you. You don't want to block the one thread from doing other things in the meanwhile.


Maybe the engineer that was tasked with implementing was annoyed with the task and did it on purpose this way.


I'm more mad about the complete failure of regulators to break up an obvious monopoly than I am with the engineers (though they're not saints either)


At least they didn't rewrite the sleep code to do crypto mining.


Reminds me of A Ticket to Tranai by Robert Sheckley, where they deliberately slowed down robots so that people would get angry and destroy them.


Google employs 30,000 engineers; it's impossible for them all to be decent.


Follow the money.

Employees will follow orders; orders are made by the people who control the money.


This is interesting, as I had noticed this happening to me (in Chrome) when the anti-ad-blocking started. I assumed it was YT's way of "annoying" me while no ads were shown... It was eventually replaced with the "You can't use adblockers" modal, and now I just tolerate the ads.

So I wonder if that 5s delay has always been there.


When I ran into the adblocker-blocker (Firefox + uBlock Origin), I noticed that I could watch videos if logged out. So I just stayed logged out, and haven't seen an anti-adblock message since. Or an ad.

Added bonus, I'm less tempted to venture into the comments section...


Same. I use Firefox + uBlock Origin + YouTube Unhook for a cleaner interface. I also always watch videos in private browsing windows (my default way of browsing the internet) and manage subscriptions with the channels' RSS feeds, which is much better for tracking what I have watched, since the default YouTube homepage does not display the latest videos from your subscriptions.

Edit: I forgot to add SponsorBlock to the list of extensions.


I've been randomly getting a situation where the video doesn't work on Firefox, but the sound does. It says something like "Sorry, something's gone wrong", but for a brief second I can see the video. I think it's connected to the ad-blocker changes, but it doesn't actually show a message about having an ad-blocker on.


One of the benefits of ublock origin for me is blocking the youtube comments section, along with all of the video overlay elements.


I'm using Firefox + uBlock Origin logged in and it works totally fine. Maybe YouTube removed the anti-adblocker for select accounts? I remember I once entertained myself with writing a report in which I sounded like I was sitting in a retirement home and had no clue what was going on with "ad block". Did someone perhaps actually read it?


I think you have simply been lucky. The full story is that uBlock Origin and YouTube have been trying to out-patch each other, with uBlock rolling out a bypass to the filters every one to two days since late October (https://github.com/stephenhawk8054/misc/commits/main/yt-fix....).

Depending on whether you've set uBlock to auto-update and when you watched YouTube relative to when the block filters got updated, you might just not have been hit with the latest detectors while they were active. Personally, I know I got the "accept ads or leave" modal with Firefox + uBlock, locking me out completely on one of my devices.


It seems to be something which is randomly deployed. Not everybody gets the warning.


I got it in the past for weeks, though.


Same here. No problem with the anti-adblock. It was shown to me twice, and I googled "YouTube alternatives", then tried Vimeo and it was nice. Maybe they did register this? :D


It's weird, but I saw the anti-adblocker modal for a week or two, then it stopped appearing and I never saw it since. shrug


Might be because of the EU ruling, if you're in the EU.


I'm in the US, and had the same experience.

I got the you can't use an adblocker message, but was able to close and/or reload the page to watch videos without ads. After a week or so it stopped popping up.

US, Firefox, uBlock Origin.


Another trick I've noticed that's good at skipping ads when the adblocker fails is to refresh the page. When it loads again, it does not play the ad.


It's still trivial to block ads, but the delay has recently started for me, after never happening before. So presumably a very intentional volley in the ongoing war to own your attention.


I still use adblockers perfectly fine on YouTube. There was never a real interruption in adblocking either. You just need uBlock Origin + Bypass Paywalls.


I think they only disabled adblockers for logged-in users, probably because non-logged-in users haven't agreed to the terms of service.


I'm always logged in and using adblockers. So no, that's not it. I also use YouTube probably every day and am a very active user.


Blockers work with my throwaway Google accounts that I use for this and that. So maybe it's restricted further still, to very entrenched users.


ABP also still works just fine. I prefer the arms race being taken care of by someone else.


Just install an adblocker?


Or Freetube / Newpipe


No need to go that extreme; the fix is to just update uBlock Origin's filters.

Go into the uBlock Origin addon > click "Filter lists" > "Purge all caches", then "Update now".

All done.


Meh... I could, but I have to tolerate them on TV anyway. I may look into installing Pi-hole one day.


If you have an Android TV, you can use SmartTube [1], which has Adblock + SponsorBlock.

[1] https://github.com/yuliskov/SmartTube


Pi-hole doesn't help, but there are various Android TV apps that do block ads. I still prefer the Roku ecosystem, but I switched after they started putting ads in the middle of music videos.


Pi-hole doesn't work for YouTube because ads and content are served from the same domains.


How is this not blatant anticompetitive behavior?


Capitalism as it exists is, at its core, anticompetitive.


This is happening to me in Chrome as well so I don't think it's tied to the browser you use.

Curiously, it happens only in one profile; in another Chrome profile (which is also logged in to the same Google account) it does not happen. Both profiles run the code in your comment, but the one without the delay does not wait for it to complete.

The only difference I spotted was that the profile that loads slowly does not include the #player-placeholder element in the initial HTML response. Maybe whether it sends it or not is tied to previous ad-blocker usage?

What does piss me off is that even if you clear cookies and local storage and turn off all extensions in both profiles it still somehow "knows" which profile is which, and I don't know how it's doing it.


Is the use of the "E" notation common in JS? I can see that it (could be) fewer bytes, obviously more efficient for bigger values... Looking at the script, I can see it is minified, or whatever we call that these days. I guess my question really is: did someone write "5E3", or did the minifier choose it?

(Sorry this is heading into the weeds, but I'm not really a web developer so maybe someone can tell me!)


Because 5E3 is shorter than 5000, just like you can often see !0 to get "true" in minified code, because it saves two characters.
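
A few of the size-saving equivalences minifiers rely on, checkable in any JS console:

    5E3 === 5000   // true: exponent notation drops a character
    !0 === true    // true: "!0" is two characters shorter than "true"
    !1 === false   // true: the same trick for "false"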


In JS I thought 1==true, and 1 is shorter than !0?

Never seen exponential notation used for numbers in JS though (not a surprise, I'm not really a programmer); it seems sensible to me from the point of view of shifting the domain from ms to seconds.


> In JS I thought 1==true, and 1 is shorter than !0?

`1==true` but `1!==true` (`===` and `!==` check for type equality as well, and while `!0` is a boolean, `1` is not).


!0 === true, but 1 !== true. I don't recall ever needing the strict comparison, but it seems to tickle the fancy of most js programmers.


Double-equals behaves differently than triple-equals. Minifiers probably can't swap them safely.
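
For example, loose equality coerces types, so rewriting one operator as the other can change a program's behavior:

    0 == ''     // true:  loose equality coerces both sides to the number 0
    0 === ''    // false: strict equality, different types, no coercion
    1 == true   // true, yet 1 !== true for the same reason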


I wonder if this actually decreases the bytes over the wire; 5000 compresses a lot better... sorry for the OT.


Interesting question. Has anyone tested this?


Almost certainly the minifier.


Totally possible that the minifier did this, yes.


How/when does that script get loaded? It's not showing up in my network tab. Videos also load instantly, as usual.


Trying to be charitable here: could this be a debug/test artefact that inadvertently got into production?


Unlikely. Google has been breaking non-Chromium (or sometimes even just non-Google-Chrome) browsers on YouTube and their other websites for years. It was especially egregious when MSFT was trying their own EdgeHTML/Trident-based Edge. Issues would go away by faking user-agent.


> It was especially egregious when MSFT was trying their own EdgeHTML/Trident-based Edge. Issues would go away by faking user-agent.

Why is there more than one user-agent? Does somebody still expect to receive different content based on the user-agent, and furthermore expect that the difference will be beneficial to them?

What was Microsoft trying to achieve by sending a non-Chrome user-agent?


User agents are useful. However, they tend to be abused much more often than effectively used.

1. They are useful for working around bugs. You can match the user agent to work around the bugs on known-buggy browser versions. Ideally this would be a handful of specific matches (like Firefox versions 12-14). You can't do feature detection for many bugs because they may only trigger in very specific situations. Ideally this blacklist would only be confirmed entries and manually tested if the new versions have the same problem. (Unfortunately these often end up open-ended because testing each new release for a bug that isn't on the priority list is tedious.)

2. Diagnosing problems. Oftentimes you see that some specific group of user-agents is hammering some API or fails to load a page. It is much easier to track down if the user agent is a precise identifier of the client for which your site doesn't work correctly.

3. Understanding users. For example if you see that a browser you have never heard of is a significant amount of traffic you may want to add it to your testing routine.

But yes, the abuse of `if (/Chrome/.test(navigator.userAgent)) { mainCode() } else { untestedFallback() }` is a major issue.
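
The usual prescription is feature detection, which keys on the capability rather than the browser name (a generic sketch; `main` is a placeholder):

    // New or unknown browsers get the main path automatically if they
    // support the feature; only genuinely lacking ones get the fallback.
    const main = () => { /* the real work would go here */ };
    if ('requestIdleCallback' in window) {
      requestIdleCallback(main);
    } else {
      setTimeout(main, 0);  // a deliberately chosen, tested fallback
    }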


Only option 1 is something that users, who are the people who decide what user-agent to send, might care about. And as you yourself point out, it doesn't happen.


I'm pretty sure that users care that websites can fix bugs affecting their browser. In fact option 1 is very difficult to actually implement when you can't figure out which browser is having problems in the first place.


Why do you think users wouldn't care about sites diagnosing problems that are making pages fail to load (#2) or sites testing the site on the browser that the user uses (#3)?


It is normal practice for each browser to have its own user-agent, no? But the fact that Google intentionally detected it and served polyfills or straight-up invalid JS at the time was insane. A similar spin today is the "Your browser is unsupported" message you see here and there. When a major platform such as YouTube does it, it is really impactful.

It would never do feature detection, would serve lower-quality h264 video, etc. Back then, there was a really nice third-party application, myTube, which made this less of an issue, but it was eventually killed through API changes.


It may have been intended to be a normal practice, but as far back as IE vs Netscape, everyone has been mucking with user agents for non-competitive (and counter-non-competitive) reasons.


> Trying to be charitable here [...]

There is no reason for charity with such a large power difference. For Firefox, "bugs" like this can really end up being a lost one-shot game.

It's like people walking by and casually reaching for your phone. It's always meant as a joke, unless you don't pull it away fast enough. Then suddenly it wasn't a joke - and your phone is gone.

This is not rooted in any reservation against Google in particular. If you are a mega-corporation with the power to casually crush competitors, you should really want to be held to a high standard. You do not want to be seen as the accidentally-fucking-others-up-occasionally kind of company.


Without studying the minified code, I wouldn't assume malice just yet; this could be just an inexperienced developer trying to lazily fix some browser-specific bug, or something that accidentally made it to production, like you say.


You think they let inexperienced developers touch the YT code base without proper code review? Even if that were the case, which is an extremely charitable assumption, that itself would be malice in my opinion.


> You think they let inexperienced developers touch the YT code base

Uh, yes? We were all inexperienced at some point. Just the linked file is like 300k lines of unminified code; I doubt it's all written by PhDs with 20 years of experience.


Some would argue that holding a PhD does not necessarily guarantee half-decent engineering skills.


It's the "without proper code review" part that I consider malice, not being inexperienced.


> You think they let inexperienced developers touch the YT code base without proper code review?

Yes


YouTube is way too stable for that to be the case.


lol

This reply is for everyone who has ever worked on the codebase...


Should be: LOL LGTM


There is such a thing as overextending the benefit of the doubt, to the point that malicious actors will abuse it.


It could even just be a timeout as part of retry logic or similar. A lot of people seem to be saying that there is no reasonable excuse for a `sleep` in a production application, but there are many legitimate reasons to delay execution of some code for a while.
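
For instance, a retry helper legitimately wants a growing delay between attempts (a sketch; `fetchWithRetry` is a made-up name):

    // Exponential backoff: wait 1s, then 2s, then 4s between failed attempts.
    // The setTimeout-based delay here is the point, not an accident.
    async function fetchWithRetry(url, attempts = 3) {
      for (let i = 0; i < attempts; i++) {
        try {
          return await fetch(url);
        } catch (err) {
          if (i === attempts - 1) throw err;
          await new Promise(resolve => setTimeout(resolve, 2 ** i * 1000));
        }
      }
    }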


As the saying goes: "we like naked girls, not naked sleeps". Even the interns should know that a naked sleep (one not tied to any condition) is just bad; it doesn't fix anything.


If, at YouTube's size, they do not test on Firefox, that is as much malice as doing it deliberately.



