Firefox 57 delays requests to tracking domains (janbambas.cz)
472 points by bzbarsky on Dec 19, 2017 | 261 comments



I work for an analytics company, and this will affect my snippet – I don't think it is a big deal, but I'm going to benchmark it on a few sites. From what is described here, it may actually help us. When folks throw tons of tags into their site, we're all competing against each other as well as against the site's own loading – it makes sense to prioritize the site's rendering.

I think of this as no more invasive than pre-caching stuff in a page when you see links that are likely to be clicked, or any other browser optimization.

I may be pounding my fist later if the results don't look great, but ultimately, I think Mozilla's heart is in the right place.

I can think of tons of ways to work around this if I thought it was a real issue, but I don't. People who want to block analytics, trackers, ad networks, etc. know how, and this doesn't feel like it is targeted at those people, or even squarely at the tracking pixels.

HN-pre-emptive-defend-myself: My company doesn't share or sell data with third parties (it's in our TOS – and could obviously change in some Orwellian future, but we'd have to update the TOS); it's the customer's data, and they can use it in the same way they'd use any other data they collect. We don't do cross-domain tracking, or aggregate behavioral information across customers, and the way we've built it makes it pretty hard to tie any two visitors together across sites if you were to hypothetically acquire our dataset. We're also working on building tools to comply with GDPR and wipe out data per-request. Anyways, I think a lot of very bad actors have given the space a bad reputation, and I don't blame anyone for blocking everything, or even for hating my company, or analytics in general.


> My company doesn't share or sell data with third parties (it's in our TOS – and could obviously change in some Orwellian future, but we'd have to update the TOS); it's the customer's data, and they can use it in the same way they'd use any other data they collect. We don't do cross-domain tracking, or aggregate behavioral information across customers, and the way we've built it makes it pretty hard to tie any two visitors together across sites if you were to hypothetically acquire our dataset.

Are you the CEO and do you currently own controlling interest? Because if not, then everything that you are saying is worth about as much as a slice of pizza someone dropped in the NYC subway.


Nope, happy to fall on my sword if this happens though.

Also – really bummed you got downvoted – this point is very valid and other folks in my position should take note.


@codezero, my point was not about falling on swords. Rather, it was that HN has this extreme naivete about statements coming from those who do not have any real say.

How many people here think that data is removed from BI data lakes when a user deletes the account (or at +N months)?

[Edit: Downvotes on this one also?]

Ask what happens at your company to a customer's data used by business intelligence when that customer deletes their account. Do it in email so it is in writing. It would be extremely illuminating.

Ask whether, when a customer (or your app acting on behalf of your customer) executes a DELETE against your API, your API actually deletes the record, or instead:

1) sets a deleted_at timestamp to now(), which your ORM treats as "deleted"

2) sends a message to a queue ("Customer XYZ deleted record P") which has multiple subscribers, one of which executes a deletion while another dumps the new state into a BI lake/archive dataset/what-if-a-customer-comes-back-would-not-we-want-to-be-able-to-give-them-all-their-data-back database?
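
To make the two patterns concrete, here is a minimal sketch (TypeScript with the node-postgres client; the records table, deleted_at column, and ids are hypothetical, not any particular company's schema):

    import { Client } from "pg";

    // Pattern (1): "soft delete" – the API reports success and the ORM
    // hides the row, but the data is still physically in the table for
    // BI jobs to scoop up.
    async function softDelete(db: Client, recordId: string): Promise<void> {
      await db.query(
        "UPDATE records SET deleted_at = now() WHERE id = $1",
        [recordId]
      );
    }

    // An actual delete – the row is gone from the live table (backups
    // and downstream copies fed by pattern (2) are another story).
    async function hardDelete(db: Client, recordId: string): Promise<void> {
      await db.query("DELETE FROM records WHERE id = $1", [recordId]);
    }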


I get where you are coming from but you are also making some assumptions that are a bit broad. It’s a laudable goal to educate naive folks on HN, I support you in that, but you should try a different approach.

I’m not naive. I’ve been working in tech since 1996 when I managed a dial up ISP. I worked at Red Hat the day they IPOed and have been with Heap for almost four years. I was employee number 8, and personally wrote much of the code to remove data from our databases.

We have strict controls and logging around who can access our infra, as is necessary for SOC2 compliance. We have rolling backups (read-only except by very specific individuals), so even if I got real mad one day and decided to ruin my career, we could restore the data. However, we don’t keep backups indefinitely, so when data is deleted or removed with intent, it will eventually be gone from backups too.

I’m not on the board, but I do have influence as the head of our solutions and support team. I am a privacy advocate and take a firm stand on protecting my customers’ data and carrying out their own wishes to protect their customers, and I do my best not only to instill these things in my team and company, but also to make policies and controls that make them real.

When I delete data, I don’t rely on a message being sent to a queue. I do not “soft” delete. When our customers ask for their data to be deleted, we delete it without delay.

I will say that sending a message to a queue would be pretty efficient, and if we went that route I’d have the consumer of that message verify that the data was deleted.
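
For what it's worth, a verify-after-delete consumer along those lines could look like this sketch (same hypothetical schema as the example above; the queue plumbing is elided):

    import { Client } from "pg";

    // Consumer for a "record deleted" message: rather than trusting the
    // producer, verify the delete actually landed and fail loudly if the
    // row is still present.
    async function onDeleteMessage(
      db: Client,
      msg: { recordId: string }
    ): Promise<void> {
      const res = await db.query("SELECT 1 FROM records WHERE id = $1", [
        msg.recordId,
      ]);
      if (res.rowCount !== 0) {
        throw new Error(`record ${msg.recordId} still present after delete`);
      }
    }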

Ultimately our backing store is Postgres, and it’s not rocket science to understand how deleting that data works. We could get into a discussion about how it’s not really deleted until the space has been reallocated, at the DB, OS, and hardware levels, but I don’t think you’re trying to make that argument.

Is everything perfect? No, that would be naive to say, but we don’t assume it is and work hard to remain accountable.

I’m happy to discuss this or anything else here or over email, Skype, Hangouts, in person, etc., but I do want to say that you should challenge your own assumptions a bit more. Not every person or company is inherently bad, naive, or ignorant. A lot of people on HN are extremely tenured, bright, and thoughtful; they just usually know better than to comment here. If anything, commenting is my most naive action.


Jesus. Really? Downvotes? Pretend this person worked for facebook and made some statement regarding what the company does.

(a) he would be fired on the spot by the company itself (as he is not authorized to speak on behalf of the company)

(b) he would be laughed out of the room because he does not control company decisions

The same applies to every single company. Every single one.


Agreed, they make a good point. However, I think the terms of service are binding, and if what we're doing changes, we _must_ update them. This, of course, does not change the fact that we've collected a lot of data, and it would suddenly fall under the new terms of service.

With that said, I actively delete customer data on request, and a terms of service change like this _might_ (very reasonably) prompt someone to request that their data be deleted. Since terms of service are not typically immediately effective without consent, I feel strongly that I would make sure I could remove data for people who do not agree with the new terms of service. What I'm saying is that I would fight very hard for my customers, like, ridiculously hard.

As said above – I am not the CEO, I don't have controlling interest, and there is absolutely some future where the things I said above are impossible to execute. I have not felt that way yet; I am very conscious of that possibility in the future, and I will stay vigilant both for my customers and for my own peace of mind to make sure we're always doing the right thing.


Yes, since new ToS have always been retroactively applied to all previously collected data, they offer zero privacy protection over the long term.

Google Has Dropped Ban on Personally Identifiable Web Tracking | https://news.ycombinator.com/item?id=12760003 (October 2016)

An effective opt-out may help most of the vigilant, but it is in the best interest of those changing the ToS to keep it as confusing/hush-hush as possible.


I'll take a wild guess here: all devs and most managers have access to the full data set. Any disgruntled one could exfiltrate it anywhere with no trace.

That's the kind of thing that makes me favor the view of data as a liability.


Absolutely not. We have strict control over our data and access to infrastructure.

Every person with access can only get it via a bastion VPN with their own key. Access is logged to an external host which they do not have access to. We are SOC2 compliant (just waiting on final certification) and we have regular pen tests both against our code as well as our employees with mock phishing.

As a total aside: it would cost a serious amount of money to exfiltrate the data in bulk, and it would cause an obvious strain on our infrastructure. Assuming someone got past all of the above protections and was really clever, sure, never say never, but I think we can avoid assuming the worst while still preparing for it nonetheless.


They should have internal security teams that monitor this.


Damn it. You owe me a new keyboard.


I have no idea what you're talking about. People from big tech companies share their opinions about what their company does all the time.

Your comment was super rude.


Really? People at Google say that they know what Google does with data?!


You are right. But speaking the truth doesn't sit well with HN users as of late. So see downvoting and getting flagged as a sign you are right; it's just others trying to hide your post and keep you from posting. Unfortunately the atmosphere has gotten really toxic, and with so many things hidden from users, the downvoting/flagging does no good.


[flagged]


I get what you're saying, but I also want to assert: we're not in the thieves' guild. I mentioned that analytics, as a broad category, gets bunched up with a bunch of very nefarious folks – it's like conflating Tor with The Silk Road and their ilk (to which I have no strong objection, but which I am using as an example, since many might see selling drugs as "bad" but Tor as "good").

With that said, the comments above this thread, which point out that I have little control over our "Tor" becoming "The Silk Road" are valid and are often on my mind, and I can only say that I will not go gentle into that good night, if it does come to pass.


Unfortunately for the field, analytics is equated with advertising and user profiling lately.

We'll see how this one pans out, but it may be somewhat career limiting.


In fairness, much of the modern data analytics field was born out of the need to quantify advertising effectiveness, so I'm not surprised the two have become conflated.

Google Analytics is (mostly) free in part because it supports and validates users of AdWords, and this has contributed heavily to the ubiquity of GA.


notyourday is insulting the guy based on a supposed lack of "authorization" to speak. That has absolutely zero to do with ethics of the company or the contents of their post.


When pointing out that the king is naked is considered an insult to and by the people clapping for those amazing invisible clothes, one needs to seriously examine whether he or she wants to be among those clapping.


Naked vs. amazing clothes. Note that those are the same topic, and conflict with each other.

But what you did was the equivalent of shouting someone down for not being the king. You weren't calling codezero wrong, you were saying they weren't in charge.

When they're merely reporting the policy, who cares if they're the one in charge?

Hell, the logic of "mock the guy for not being in charge" would apply even if they were also calling the king naked! That should show how flawed it is when compared to your actual goal of... being anti-tracking or something?


A couple of years ago I went to a conference where one of the speakers outlined his company's web page optimization methods. The big one was to delay executing JS until at least the visible page content had loaded, along with all the JS that actually implemented customer-useful bits.

His example page loaded 94 scripts, which he claimed was representative of many pages.


> His example page loaded 94 scripts, which he claimed was representative of many pages.

I know this is the norm, but can we just agree that this is insane?

On some sites I go to, I just see the block badge on uBlock Origin ticking up like a Bitcoin price tracker.


> ticking up like a Bitcoin price tracker

Fitting comparison - we can also expect performance degradation similar to Bitcoin's.


One of the consequences of installing Ghostery is that it shows a counter of how many scripts are loaded on the page. It turns out I had severely underestimated how many scripts the average site loads – and I work in the industry!

As an example, I just loaded CNN's site – it has 28 (!) trackers and uses 38 (!) different domains. And that's a low number, because some of these scripts would load other scripts if I enabled them to run...

That's a super-mainstream site; imagine how many the more shady ones have.


I'm looking forward to the EU regulations kicking in and you having to get consent from everyone you save data on.


Do you agree for us to save your data?

[Yes] [Read more]


The other option, closing the tab, is simply not gonna happen, so why annoy the user?


legal obligation.


I'm not sure why you feel the need to defend yourself. HN has a low opinion of the concept of selling user data, but in practice, there are plenty of ad-supported startups represented here.

Google is selling your data to marketers. Facebook is one giant data-collection/data-sales empire. In theory, the HN collective hates data selling, but in practice, they love the companies that are doing it and their products.

I find it ironic that I come here and see people worshipping the big adtech companies and their tech but then when someone posts a comment about not selling user data, a thousand tin-hat naysayers jump out of the woodwork to flog them to death.

It's OK. It's why I can read the NY Times and a thousand other online newspapers - FOR FREE. It's why I can go to Google and type in "why does my butt burn when I poop" and get an answer - FOR FREE. It's why I can use dozens of adtech and user-data-supported services - FOR FREE. I pay for those things with data about me, my habits, and my browsing history.


I appreciate your willingness to share your perspective here. The problem is that marketing/PR is a profession dedicated to finding whatever just-barely-technically-true weasel wording is required to make what they're getting paid for sound at least "ok" and preferably "great!"

If you have time (definitely not expected) I would like to hear more specifics regarding this aspect of your statement:

> We don't do cross-domain tracking, or aggregate behavioral information across customers

vs. https://heapanalytics.com/terms

> We may also use data in an aggregated form for our own purposes


Yeah, very good point, we should clarify what we mean by "for our own purposes."

What that means right now is that we use this data to improve query performance, and for internal monitoring. We do not use that data for commercial purposes, outside of the fact that we are ourselves a business.


I work for an analytics company too – or, more accurately, a data router that integrates with other analytics companies, including yours. I'd be interested in seeing your benchmark results, as it's unclear to me whether/how this potentially impacts us.


> I can think of tons of ways to work around this if I thought it was a real issue, but I don't. People who want to block analytics, trackers, ad networks, etc. know how, and this doesn't feel like it is targeted at those people, or even squarely at the tracking pixels.

A slow and shitty experience using the web shouldn't be the accepted baseline.

People who know how to block spyware can't block everything. There is a lot that causes the page to hang if not blocked in a certain way, provided that certain way is available via some kind of tooling.


It's a grey area. But I am interested, do your ToS forbid your customers from on-sharing the data with cross-site aggregators?


"on-sharing the data with cross-site aggregators"?

The ToS and Privacy Policy of an analytics company disclose what that company may do with their clients' users' data, not what the client can do with their own users' data.

There's another Privacy Policy between the client and the user where the client discloses what they may and may not do with user data and the user decides whether to become / remain a user.

There are also laws which further dictate data handling on behalf of large groups of users, e.g. GDPR for EU citizens and, on this side of the pond, interesting Supreme Court cases being decided about what kinds of data users could have property rights to: https://en.wikipedia.org/wiki/Carpenter_v._United_States


Nope – not explicitly; that would be something within _their_ terms of service, since it's their data. Their own customers would have to hold them accountable.

I know that's a cop-out – but I can't imagine a system where we could enforce some kind of downstream compliance.

Things like GDPR are a good way to make companies accountable and I look forward to that becoming more broadly accepted.


I am not the OP, but from looking at their profile – https://heapanalytics.com/terms – you can check there.


> wipe out data per-request

Why would you do that? You are providing a service to the people that is a pure benefit for them and doesn't harm them at all. Why would anyone want to delete such helpful data?


If they don't want a €10 million fine, they have to comply with GDPR, even if no one is going to use it.


AFAIK, GDPR doesn't say there needs to be an automated way to delete the data. If people can send you an email with the request and you manually delete it, that's also fine. So if no one will use it, you basically don't have to do anything to be compliant (with this specific GDPR rule).


Sure, but if people use your service and some 0.01% of them might send a request just as a matter of principle once GDPR kicks in, then it's quite plausible that the cost efficient way to handle this is to have an automated solution rather than a manual process that doesn't scale.


From what I understand, as long as you can say there's a process in place (and could actually follow it) then you're compliant with the GDPR.


Interesting approach. I hope more browser vendors will adopt it. It comes with an additional advantage: if a website still functions when tracking domains are delayed a second or so, it will likely also function with said domains completely disabled (e.g. using extensions). Sounds like a win from a user perspective.


Do you mean that browsers implementing this will force website owners to make their sites work with tracking domains delayed? This does seem like a nice side benefit.


Yes, that's my hope. For the income of website owners it is better if they all collectively block their site until the trackers are fully loaded. But if only a few do so, users might avoid their site, so individuals are still encouraged to support delayed loading.

I imagine Mozilla is in a similar situation: if too many websites block, they will have to disable the delay or start whitelisting trackers if they do not want to lose users. That is, unless other browsers follow their lead here.

Prisoner's dilemmas all around. It's going to be fun to see what we'll end up with, although I'm optimistic as I haven't heard complaints about this yet, but then again I wouldn't notice any difference myself as I'm already using NoScript.


I think it's not quite a normal prisoner's dilemma, because it's not boolean. The browser can be less aggressive initially, pushing against only the worst cases, then become more aggressive over time to make sites behave better.


With tracking domains disabled!? Website operators will be annoyed by the delays, so they have an incentive to make the display of their website independent of tracking, which should then also make the website work just fine if tracking is blocked completely.


He meant that no browser with a significant market share will implement this, so the sites will remain broken, which will only be evident in Firefox. The side effect will be that people visiting those sites will switch to a different browser.


Given the OP's reply, I don't think that's what he meant so I would be careful about making statements on behalf of others.

Apparently Firefox is sitting at 6.1% market share right now[0]. Far behind Chrome and Safari but still quite significant.

[0] https://amosbbatto.wordpress.com/2017/11/21/mozilla-market-s...


Trouble is the number of popular websites that simply don't care how long the page takes to load; 30 seconds would be acceptable to many, as long as they get their advertising dollars.

Users won't know who to blame. They might blame the site but they're just as likely to blame Firefox, Microsoft, their ISP or a virus. There needs to be a display saying "hey, this site is slow because it's tracking you in 15 different ways, would you like to disable this permanently?".


You're not thinking about this from a game theory perspective.

Slow down pages -> more users bounce -> fewer ad impressions -> less ad revenue.


> You're not thinking about this from a game theory perspective.

> Slow down pages -> more users bounce -> fewer ad impressions -> less ad revenue.

Game theory is the theory of how agents deal with each other. You left out the most crucial part of agents affecting each other!


Yet, many big, profitable websites take 10-30 seconds to load on even the fastest connections. We can reconcile the theory later, but the fact is, much of today’s internet doesn’t optimize for load time.


Also possible scenario is

Slow down pages -> more users frustrated with Firefox -> smaller user share -> Mozilla closes and donates all code to the Apache Foundation.


That would be great! Where do I sign up for this? I cannot wait for this to happen.


Or Eclipse, which gets the unloved projects.


how is this game theory?


And lower rankings organically; for PPC, you get a low quality score on your ad, and it can really hurt your AdWords account.


> Trouble is the number of popular websites that simply don't care how long the page takes to load...

They absolutely do care, though. The longer a page takes to load, the less time users will spend on the site, and the more likely users will give up and close the window.


I think "load" here is subjective as well eg a spectrum from time to first paint and a page being visible and/or interactive, up until completely loading all async and deferred content, some of which might only happen after the user crosses the fold.

I think there's a lot of interest in optimizing that initial time but less so for the full load.


I block ads, analytics and 3rd party fonts. Almost every site works just fine.


I do the same (uBlock Origin) and almost every site works, but there are still a few I need to whitelist because they do weird stuff, like triggering their own scripts to run from analytics callbacks. Airline and banking websites are a couple of common categories of sites I usually need to whitelist to work properly – that's not really an issue though, as they don't usually have 3rd party ads.


Yeah I have a clean backup browser for the few sites that don't work and I still want to use. I'm always reminded how CRAP the internet has become.


I block everything too and sites break so rarely that it takes me a while to link the breakage with blocking of trackers.


I do the same thing too, blocking all 3rd-party junk using uBlock Origin. As one specific example, I've had to let motortrendondemand.com fire off Omniture, because apparently their video player (through Kaltura) depends on it.

It's just lazy design, or possibly malicious, I'm not sure.


As I'm sure you're aware, many content creators whose work you are reading fund their efforts via ads. I think this is a happy medium that attempts to improve the performance of these pages without denying the creators compensation.


I used to feel bad about it, but it has become evident that any content creator still primarily funding themselves through third-party ads in 2017 is the digital equivalent of a streetwalker.

It's a low-class business model that is rife with disease. There's no barrier to entry – literally anybody can do it, there is no filtering of who they'll do business with, and it's a race to the bottom to earn pennies at the expense of their integrity. Despite having actual content of value, they share a business model with a blog farm full of Markov-generated malware-laden shitposts.

Given the now-well-known threats to health and safety that third-party ads present, one does not have the right to get upset when clients insist on using protection when dealing with them.

Set up a Patreon for donations, set up affiliate links, or offer subscriber perks if adblocking is cutting into revenue. Any of those options is far more lucrative, ethical, and honest than whoring oneself out for the benefit of some shady ad firm that leaves the user with a nasty case of Bonzi.


Fine. Except figuring out a business model is the content creator's responsibility, not the user's. Business models that involve hostile 3rd-party ads/tracking are bad, and if your users are trying to evade those, you need to figure out something better. Blaming the user reaction won't help.


> As I'm sure you're aware, many content creators whose work you are reading fund their efforts via ads.

Not our problem if they bet on a brittle business model and don't want to adapt. Even more: the web ecosystem was fun and had a lot more freedom before the invasion of ad-based companies. I don't see any problem with some culling of the politically correct parasites.


> As I'm sure you're aware, many content creators whose work you are reading fund their efforts via ads.

I cannot make myself care about that, at all.

I am not the person responsible for finding a viable revenue model, that is the responsibility of the content creator. I refuse all ads and tracking, because I value my own attention span, my time, my bandwidth and my privacy.

I am not obligated to watch ads online, just as I am not obligated to stay on a TV channel and stay in the room while ads are running, just as I am not obligated to pay any attention at all to ad billboards.

As an aside, I support several high-quality content creators through Patreon, and several open source projects through donations.


Their failed business model is not our problem.

I think you are aware that ads are just the tip of the iceberg; the big part is the profiling, tracking, and data collection going hand in hand with ads.

If ads were unintrusive and not gobbling privacy, then I would not block them as I would not mind them.

Excuse me but I have to go now, local girls want to give me a free ipad that I won by clicking the fart button after subscribing to the newsletter that was blocking the content of the page I finally chose not to read because fk it.


If the ads didn't feel abusive then I might feel bad. But they do.


I have an even better approach:

uBlock origin + noscript


uMatrix is also great. It does a lot of the same things NoScript does, but with greater granularity. I still use NoScript for its XSS protection features but have the main JS-blocking feature disabled because it's redundant.


It is. It does. uMatrix is probably not for the average user out there, but personally, I won't go anywhere on the web without it these days.

Keep the blacklists up to date, and have scripting completely off by default. You get granular control on a wholly different level than that offered by NoScript (which I really no longer see a need for).


>have scripting completely off by default

Does this mean you have the "scripts" column red and turn them on for individual sites?


That is what it means, yes. And click them on as needed – first local scripts, and then various external ones if that's not enough. It can be a bit of a hassle for a while, but once you get permanent rulesets established for sites you frequent and more or less trust, it works like a dream. I hardly ever see an ad, and various sniffer services (or "analytics") must somehow learn to live without a lot of my traffic data.


Not parent but probably yes. That's how I roll as well.


Interesting. I never got around to playing with uMatrix. Is it worth it, considering I have a boatload of configs for sites I use with NoScript?


You can convert NoScript configs to uMatrix with [1]uMatrix-Converter.

[1]: https://pro-domo.ddns.net/umatrix-converter


An even better approach is to block all ad-serving domains at the DNS layer. If you associate with my home access point and use my DNS cache, you don't get ads period, and you don't have to install an ad blocker or lift a finger.


Can you explain how you do that?


> Can you explain how you do that?

Probably Pi-Hole running on a Raspberry Pi. https://pi-hole.net


Yea, not exactly Pi-Hole, but the same concept: dnsmasq running on the router, with a big list of ad-serving domains and hosts resolving to 0.0.0.0.


Wow, nice and simple. Is that list available in some repository? I would like to implement that, since I run a dnsmasq-bearing router at my home border as well.


I do the same - my approach was to take some of the popular /etc/hosts files (like http://someonewhocares.org/hosts/zero) and sed them into dnsmasq format:

    # before:  0.0.0.0  evil.biz
    # after:   address=/evil.biz/0.0.0.0
    sed 's#^0\.0\.0\.0[[:space:]]*\([^:]*\)$#address=/\1/0.0.0.0#'
which goes in a file in /etc/dnsmasq.d/, with this line in /etc/dnsmasq.conf:

    conf-dir=/etc/dnsmasq.d
The results can be trimmed a lot, e.g. if you have rules for a.evil.biz and b.evil.biz, you can usually reduce those to a rule for just evil.biz. I wrote some scripts to help with this, which are now at https://petedeas.co.uk/dnsmasq/. I might write something up about the process later.
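
If you want to automate the trimming, here is a rough sketch of the idea (a hypothetical helper, not one of the scripts linked above):

    // Drop any domain whose ancestor is already in the list, since a
    // dnsmasq address=/evil.biz/ rule already covers all subdomains.
    // (Collapsing a.evil.biz + b.evil.biz to evil.biz when evil.biz is
    // absent needs more care, e.g. around public suffixes like co.uk.)
    function trimSubdomains(domains: string[]): string[] {
      const blocked = new Set(domains);
      return domains.filter((domain) => {
        const labels = domain.split(".");
        for (let i = 1; i < labels.length - 1; i++) {
          if (blocked.has(labels.slice(i).join("."))) return false;
        }
        return true;
      });
    }

    // trimSubdomains(["ads.evil.biz", "evil.biz", "ok.example"])
    //   -> ["evil.biz", "ok.example"]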


Here's a nice repo with a "starting point" for hostnames and domains to block: https://github.com/notracking/hosts-blocklists


I loathe ads and tracking. I run uBlock Origin / HTTPS Everywhere / Privacy Badger (the latter two are from the EFF).

I run a dedicated pfSense machine (an old OptiPlex 755 with an old SSD) to which I added a NIC. All network traffic must physically flow through it (one NIC goes to the LAN, one goes directly to the cable modem). It's running pfBlockerNG and DNSBL with a bunch of sources. It's amazing. I can watch YouTube videos on my smart TV in the living room streaming with zero ads.


Most recently: November Workshop: Running the Pi-hole Network-wide Ad-blocker, and more | https://news.ycombinator.com/item?id=15608052

Related discussions (click the 'comments' links): https://hn.algolia.com/?query=netguard&sort=byDate&type=comm...


Is there much difference between uBlock Origin and Adblock Plus?


Yes, Adblock Plus lets website operators pay them money to be whitelisted, if the ads comply with the "Acceptable Ads" policy.


Adblock Plus is commercially operated and is paid by advertisers to get whitelisted, akin to a mafia extortion business. It's an ad blocker that comes with tainted default settings. It used to significantly impact performance (not sure if this is still the case).

uBlock Origin is one man's personal project to improve his own online browsing experience, which he shared with the public, and it has become the go-to reference. It has no whitelist or acceptable-ads policy, and it is a general-purpose blocker (blocks trackers, remote fonts, etc.) that comes with sane default settings. It does not have the performance issues Adblock Plus has.

In short: ditch Adblock Plus and switch to uBlock Origin.

You'll find more details here: https://github.com/gorhill/uBlock


"Unexpected results: Adblock plus does not help with system resource utilisation"

https://twitter.com/adildean/status/936183316134416384


[flagged]


There's an opt-in "tracking protection" checkbox in Firefox you can click that doesn't load them at all.


Hmm, doesn’t it just send a Do Not Track header? Tracker JS is still loaded, and honoring it is discretionary.


No, it's basically a content blocker like Adblock or noscript.


I'm convinced it's not. NoScript blocks scripts and XSS and does all kinds of things, while Mozilla intentionally removed the option to disable scripts in Firefox.

I do not know about Adblock, as I stopped using it a long time ago and now use uBlock Origin, a general-purpose blocker, but I certainly remember how Mozilla was vocal about never adding a blocker to Firefox as this would be contrary to <insert BS PR> (actually their business model).


Well, yeah, "Tracking Protection", as it's called, is far from what NoScript does (which does a boatload more than just blocking JavaScript). But it is quite a lot like AdBlock Plus or uBlock Origin, except not focused on blocking ads and rather focused on blocking tracking scripts (but with how many ads contain tracking scripts, it is pretty much an ad blocker, too). If you know the extension Disconnect, Mozilla uses the blocking list from that.

Tracking Protection is default-enabled in Private Browsing, can be manually enabled in normal browsing.

The "BS PR" reason for them not necessarily wanting to block trackers/ads, is that webpage owners want to make money. If they don't have a way of making money off of Firefox, then they likely won't bother testing, fixing or even optimizing their webpage for Firefox. It would make it a lot more likely for webpages to be broken in Firefox.

Default-enabling Tracking Protection in Private Browsing already caused a huge outcry and by now there seems to have developed an entire new business around privacy-respecting porn ads, so that's always nice.


That is an available option as well. In preferences, under "Privacy & Security", set "Tracking Protection" to "Always".


This shouldn't be up to the browser. He even notes edge cases where this performs unexpectedly – unacceptable.

If this is such an issue and can actually be a performance increase, then someone should release a script with the same functionality. Or make it an opt-in option in settings.


They do not violate the relevant specification. They just implement it in a way that has not been done before, with the user's convenience in mind.

That aside, your position is unrealistic: browsers regularly break non-spec-conforming websites. They actually monitor such cases (telemetry) and try to work with popular websites to fix the issue before they ship the breaking update, but it's a tradeoff that is regularly made nonetheless.


While the actions were beneficial in this case, that argument would be more convincing if the spec itself weren't a constantly moving "Living standard" maintained by the exact same organisations that also develop the browsers. And even that spec is sometimes consciously broken in an "intervention".


It shouldn't be up to the browser to implement web standards? What should browsers be doing then?


What is the web standard about delaying scripts deliberately encoded into the page?

Why downvote and no explanation? I want to know if I missed something.


Per the standards, scheduling and prioritization of downloads is up to browsers.

The standard also defines that scripts that are not async have to be executed before later content can be parsed (because they can document.write()). They can still be loaded with any priority the browser wants; they just need to block observable DOM construction.


Thank you.


While I like the idea, a potential problem comes to mind. A list of what domains are tracking domains will need to be maintained. The need to maintain that list will possibly move the cat-and-mouse game of ads vs ad-blockers to the tracking list as well. I could totally see Alphabet paying Mozilla to remove analytics.google.com from the tracking domain list (because "it's just analytics; it doesn't track you or infringe on your privacy at all") similar to advertisers paying to be on Adblock Plus' "Acceptable Ads" list.


> using data of the Tracking Protection database

This database is already being maintained and used for private browsing. I find it unlikely that an analytics CDN will play cat and mouse with Mozilla just so its script will load a few milliseconds sooner.


Where can we find this list?


The list is supplied by Disconnect.me


Firefox's Tracking Protection just uses the list provided by Disconnect. I imagine this will do the same.


Google analytics tracks you and infringes on your privacy.


That's the reason I put that statement in quotes. A </sarcastic> tag seemed like it set an unduly negative tone to the rest of the post.


OT.

I think we need to standardize the sarcasm tag on the Internet.

Not one of those RFCs that no one implements, but something concrete, perhaps baked into the HTML standard. (If the W3C can include DRM, they can include sarcasm.)

Or maybe something like ISO, but people ignore them too.

/S


There’s actually a special character for this, unfortunately not in Unicode, called SarcMark.

http://www.sarcmark.com



Thanks.

There was recently a NYT article on trying to get a ballet shoe emoji (high heel shoes & sexism related) into Unicode.

Sarcasm should be in Unicode!

---

>Sarcasm, Inc. was formed in 2006 to pursue this idea, and with a great deal of effort and undying support from family and friends, the punctuation mark for sarcasm came to life.

This is why America is the greatest country in the world!

Sarcasm Inc even has (sarcastic?) shareholders and apps!


I'd prefer everyone on the internet just figure out that sarcasm doesn't work well with text. Make a podcast or video if you must use it. Also, "[something], err, I mean [something else]" doesn't work when we all know you have a backspace key.


I'm not going to record a video just to reply to HN comments ;)

And we all know what is meant by "X, er, that is, Y"; it's even used intentionally in verbal language.


> sarcasm doesn't work well with text.

I think it's more that effective sarcasm in text requires the audience to know the viewpoint of the author. So, sarcasm among friends over email can work. Sarcasm in the concluding paragraph of an article can work.

Sarcasm between strangers in comment sections, less so.


> Scripts are delayed only when added dynamically or as async.
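
To illustrate what that covers (a hypothetical tracker URL; this is not code from the article), the two injection styles subject to the delay look like:

    // 1) Dynamically added: created from script and appended to the DOM.
    //    Such scripts are async by default, with no spec-guaranteed
    //    execution order or timing.
    const tracker = document.createElement("script");
    tracker.src = "https://tracker.example/analytics.js";
    document.head.appendChild(tracker);

    // 2) Explicitly async in the markup:
    //    <script async src="https://tracker.example/analytics.js"></script>

    // A plain parser-inserted <script src="..."> (without async) is not
    // delayed, since it must execute before later content is parsed.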


> Google’s A/B testing initially hides the whole web page with opacity: 0

Sounds like a violation of Google's own policies.


Now try Google's sweetheart project https://www.ampproject.org/ or any AMP-powered site. They contain this gem of boilerplate, which hides page content for 8 seconds or until whatever script overrides it finally loads:

    body {
      animation: -amp-start 8s steps(1,end) 0s 1 normal both; 
    }
    @keyframes -amp-start {
      from { visibility: hidden; }
      to { visibility: visible; }
    }


Isn't it anti-net-neutrality? Is it ok to build it into a major browser? I actually block all the tracking, malware, ads, social networks, fraud, gambling, porn (except a couple of porn sites I like :-)), etc. domains I could find information about, but this is my personal conscious choice: I have manually installed extensions for this and built the lists. Shouldn't other people do the same themselves if they choose to, or leave everything as is if they don't actually feel they need to change anything?


For me, using Firefox is the personal conscious choice you refer to. Their brand is that the browser is on your side, I'm happy to have the defaults set accordingly.

(Edit: I should add that I think you're asking a very valid philosophical question that my opinion as a user of Firefox doesn't fully address)


> For me, using Firefox is the personal conscious choice you refer to.

Firefox marketing is effective but to be clear they have begun changing direction, pursuing a more practical "attracting the masses" approach (a la Signal) rather than paranoid/hyper-vigilant/engineer-style idealism (something some misinterpret Firefox marketing as still claiming), and as a result Firefox has been forced to course-correct several times due to public outcry.

https://news.ycombinator.com/item?id=15940144

A recent eye-opener for me personally was discovering telemetry's switch from opt-in to opt-out as of September.

https://www.mozilla.org/en-US/privacy/firefox/#firefox-by-de... (3 months ago) vs. https://wiki.mozilla.org/Telemetry/FAQ#Is_Telemetry_enabled_... (3 years ago)


I would not be surprised if the move to opt-out spying is a consequence of the fiasco of dropping ALSA and adding a hard dependency on PulseAudio.

Telemetry has been held up as one of the main justifications for the change, as it supposedly showed that only a fraction of users used ALSA. It turned out to be a combination of misinterpretation of the collected data (an installed libpulse does not mean ALSA isn't being used) and complete ignorance of the real world (most Linux distros disable telemetry for privacy reasons, and people using ALSA are often the same people who take extra measures to protect their privacy, hence disabling Mozilla's telemetry).

As a consequence, Mozilla pushed hard for package maintainers to enable telemetry in Linux distros, and sadly most did, see: https://bugzilla.mozilla.org/show_bug.cgi?id=1233687 https://bugzilla.mozilla.org/show_bug.cgi?id=1285195 https://bugzilla.mozilla.org/show_bug.cgi?id=1285201 http://pkgs.fedoraproject.org/cgit/rpms/firefox.git/commit/?...

Compare to this from 7 years ago: https://bugzilla.mozilla.org/show_bug.cgi?id=667577

The fun part was when Mozilla people started patronizing users for protecting their privacy, telling them that it was their fault ALSA got dropped and that they should relinquish their privacy or be ignored:

  Second, continuing to opt out of telemetry will just make problems
  like this worse. As was stated in this thread, one of the
  justifications for removing ALSA support was that the telemetry
  numbers showed a very little ALSA usage. If more ALSA users had
  telemetry enabled, perhaps the outcome would have been different.


  In any case, running without telemetry means not having a say in
  data-driven decisions about what configurations Mozilla should
  support. It's OK to disable telemetry (that's why it's
  user-controllable), but both users and distros that make decisions on
  users' behalf should to take into account that if don't let Firefox
  send info about your system config to Mozilla, your system config is
  invisible to Mozilla's decision making about what to support. 
source: Rationalising Linux audio backend support - https://groups.google.com/forum/#!msg/mozilla.dev.platform/j...

There was no course correction here. ALSA is still out, despite the issue being entirely down to Mozilla's implementation and not ALSA's abilities, despite this being an edge case for Linux Netflix users with a specific 5.1 setup, and despite someone coming forward to offer to fix and maintain their broken code.

Did I mention that they failed to mention this change in the release notes, that Firefox displayed a message inviting people to click a link to learn why their browser suddenly lost the ability to play sound but the link was broken, and that this was an ESR release?

So when I read some mozilla marketing PR about them pretending to champion user privacy, I'm nonplussed at best.


When I said course-correct I didn't mean complete reversal; it's usually been just enough to satiate temporary outrage in early-adopter/tech circles before it goes mainstream.

Thanks for taking the time to document the details of this particular situation which I've never heard about before. I wonder how many other mildly upsetting (?? - per "nonplussed at best") decisions are currently accumulating under the radar. Each time another pops up I'm amazed at how much Mozilla has successfully swept under the rug.

The two I knew of are including Pocket by default for some time (after taking heat for this, then acquiring it a year later... though I didn't see it when I just checked) and sticking with opt-_out_ for Google Analytics on the add-ons page. After digging recently thanks to the Mr. Robot stuff, I discovered the opt-out Cliqz add-on for Germany and now your anecdote.


From a practical point of view, I am afraid the trackers will find a way to respond if such a move is made at a global level. Nobody was fighting Adblock Plus when it was only used by geeks; now that it has gained so much attention, there are a lot of sites that won't work if you use it. Introducing the "do-not-track" header was a great idea, but it has been completely ruined by major browsers turning it on by default, so the trackers have legitimately chosen to ignore it.


> Introducing the "do-not-track" header was a great idea, but it has been completely ruined by major browsers turning it on by default, so the trackers have legitimately chosen to ignore it.

This seems to imply that doNotTrack would have been successful for its intended purpose had it not been a default setting at one point, which feels like wishful thinking.

Sure, that aspect made it impractical for uncharacteristically privacy-minded companies in the space to support the header, but the vast majority of companies in the business of tracking and advertising would have ignored it anyway, because there's zero consequence for ignoring it.


Sites that do not work when using Adblock Plus are fighting back against its money-extortion scheme (Adblock Plus has this commercial program where, if you pay them money, they will whitelist your ads). Geeks have long since moved to uBlock Origin or uMatrix.

Anyway, 95% of the time you just have to disable scripts from the websites to unbreak them.

Do Not Track was never a good idea, as respecting it was voluntary, meaning it was mostly an additional metric to use to track and profile people who do not want to be tracked. It's a no-brainer, really: if you did not want to be tracked, you should not have enabled it. Remember Ghostery? Same thing all over again.


Branding and marketing and PR are just that, and the objective is to get a bigger market share.

Actually, it is to be the dominant browser on the market. The Mozilla name means "Mosaic killer" – as in, dethrone NCSA Mosaic and take its place as the no. 1 browser by market share – and was the internal name at Netscape.

They kept the name for the continuation of the effort, though the target had moved to IE, which had actually taken over from Mosaic. Now the target is Google Chrome; the historical meaning is lost on most people, but the intent is still to have the bigger market share, because monies.


I think you're conflating not shaping traffic at the ISP or backbone level with something else. The something else in this case is making requests to URLs that a consumer of a page may or may not be vetting. If anything it's not treating all contents of a page as equal, and I think that's okay. We already do this with prefetching hints, ordering elements on a page (e.g. styling before scripting), and sometimes more invasive consumer-driven approaches (ad blocking, monkey-patching sites – e.g. Reddit Enhancement Suite, etc.). Sure, this is pretty heavy-handed, but it's quite likely that the other vendors will follow suit (though perhaps with different priorities; I doubt Google would do this to their own services).


I mean, not any more than certain kinds of code being faster than others in browsers. There's tons of stuff out there on how to optimize your site for browsers, and browsers make conscious decisions on what kind of code will be faster all the time.

Also I think tracking protection is off by default except in private browsing mode, and if it's something you need to explicitly enable that's no different from the "neutrality" issues caused by installing an adblocker (there are no issues).

This also ties into the whole concept of "a browser is a user agent" (http://words.steveklabnik.com/user-agent-moz-a), i.e. it advocates for you. Here it's trying to get the actual content loaded faster. ISPs are not user agents, they are more like a utility.


> Shouldn't other people do the same themselves if they choose to, or leave everything as is if they don't actually feel they need to change anything?

Your browser is no longer imposed on you, so switching to Firefox IS doing that thing yourself.


This is a good question. I’d say it is not net-neutrality related as it doesn’t influence the “pipes.”

On the other hand, if it silently influences internet traffic, it does in a way censor content and could influence their bottom line.

I would expect this to be an explicit switch.

Edit: clarity


Client software can be chosen by the user. Net neutrality is about protecting the user's choice to use the network how they wish. Choosing which piece of software to use is a part of that. In addition - unlike ISPs - browser vendors are not a natural monopoly, and there is established case law preventing it, so it's not about net neutrality, just anti-trust.


I support this view because many will have justifications to explain: "well, since it's not FF, then well, it's ok, you know, not exactly like the hardware pipes...".

But on the principle, yep, FF shouldn't have a neutral treatment for everything. And that, of course, is impossible to define: what is "neutral"?


That was my thought: should browsers be content gatekeepers?

(Honest question, this is a very tricky area)

If Comcast decides to "block" 3rd party tracking and analytics content, would it be ok?


network neutrality refers to network providers being neutral to the data they transmit. applications running over the network can do as they please.

by this argument ad-blockers violate net-neutrality.


> network neutrality refers to network providers being neutral to the data they transmit. applications running over the network can do as they please.

Not exactly; network neutrality can mean almost anything. The way it is discussed at the FCC regards ISPs, but you can expand it to anything.

> by this argument ad-blockers violate net-neutrality.

No. An ad-blocker isn't any different from a child filter. It is a tool specifically designed to block certain content, explicitly enabled by the user.

And to expand: by your logic, it would be ok then for Chrome to block all links to Firefox?

Or for Microsoft to block all links to other browsers?


> And to expand: by your logic, it would be ok then for Chrome to block all links to Firefox?

No, that wouldn't be OK. But not everything that is not OK is a network neutrality violation.


> No, that wouldn't be OK. But not everything that is not OK is a network neutrality violation.

If you consider browsers an integral part of the internet infrastructure, it is. If not, it isn't. Happy to discuss naming, but it will mostly be useless.

But glad that you agree that Firefox preemptively delaying those sites is not OK.


> If you consider browsers an integral part of the internet infrastructure, it is. If not, it isn't.

No, it simply doesn't have anything to do with whether it is internet infrastructure. Prioritizing stuff according to the user's wishes/under the user's control without price discrimination is not a neutrality violation, even when network operators do it.

> But glad that you agree that Firefox preemptively delaying those sites is not OK.

Idiot.


>But glad that you agree that Firefox preemptively delaying those sites is not OK.

that kind of snarky comment is not helpful


i mean if you want to redefine network neutrality you are welcome to do so. I'm not sure what the definition you are using is or where it comes from.

i think your definition muddies the waters. if firefox shipped with a built-in default-on child filter would that be a net neutrality violation in your eyes? if it just slowed down adult content would it violate net neutrality? where even is the line here and is the line meaningful?


> if firefox shipped with a built-in default-on child filter would that be a net neutrality violation in your eyes? if it just slowed down adult content would it violate net neutrality? where even is the line here and is the line meaningful?

I don't know (that's why I'm asking). On one hand, you could argue that they are providing a service to users; on the other hand, they become a content gatekeeper if they do it by default.

Chrome, for example, blocks suspicious sites, but that's a very clear line you're drawing. With ads and tracking, it becomes muddled.

It would be similar to adult content in the UK, where AFAIK you have to proactively enable it through your ISP; it comes blocked by default.


ok then I suggest you go back and write a clearly usable definition before trying to apply that definition to "net neutrality." or else make up a new term

it's hard enough to get the public to understand this concept without also waffling over the meaning in the tech community.


Only if the ad-blocker is set up by a third party such as your ISP. Anything run by the end user for her own use is not a net neutrality issue.


yes that is my point


Does Google count as a 3rd party? It is launching a default-on ad-blocking service on Chrome.


it's not a third party when it's the same person that made the browser. it's the same second party whose software you opted to use. there's no net neutrality issue here.


Much of this discussion is missing the point that the stated goal of this change is actually to _help_ sites that use lots of tracking scripts, not to penalize them.

It has become common to use so many tracking scripts that the perceived page load time (time to display/interactivity) is actually significantly slowed down. I actually first installed an ad-blocker myself when I realized some sites were taking like 5+ seconds to load, and loaded quicker with the ad-blocker. (But this change isn't a _blocker_ of scripts, it's trying to change order and timing of execution to speed up page load while _keeping_ the scripts).

The intent of this change is to delay load of those scripts (which are already being loaded with code that loads them async, that is, without spec guarantees of load order or timing) until after the page UI is loaded and operative, to _improve_ perceived load time.

I'm not sure if people are missing this point, or don't believe the stated goal and think it's secretly a plan to hurt these sites instead. I believe the stated goal (whether people like or hate that idea!). As the OP says though, there are certain pages that _may_ be unintentionally harmed by the change, if they were relying on quick load of scripts that they should not have been relying on, because those scripts were already being loaded async (that is, with no guarantees of load order or timing).

If this ends up being a non-trivial number of pages, and those pages/tracking frameworks don't fix themselves to accommodate, then I predict the change will be considered unsuccessful and unfortunately rolled back.

It is meant to _help_ pages that use a lot of tracking scripts, not hurt them. Although I guess the assumption is that actual time to interactivity is prioritized over making sure your tracking scripts load immediately. If site owners actually prefer to slow down their pages non-trivially in order to guarantee their tracking scripts load immediately, then I guess they wouldn't see it as help. shrug.

I think the OP author is probably regretting his post title. It maybe should have been "Firefox 57 speeds up load time to interactivity of pages with lots of tracking scripts", heh.


Does anyone know where I can find the list of sites that are affected?

EDIT: Seems to be here: https://github.com/mozilla-services/shavar-prod-lists


Here it is newline-separated in case you want to import it into the Adblock DNS filter for iOS: https://gist.githubusercontent.com/anfedorov/1fa7dc8871b20da...


Thanks! I wasn't but I'm sure others will find it useful.


Just realized the list contains facebook.com as well as mail.google.com, so perhaps a little overly broad there.


Is there any way you can make this work for Chrome on iOS?


I don't think Chrome supports extensions on mobile, but Firefox does, so you could try that if you really want an adblocker.



:(


I love to see software that is actually acting on behalf of the users running the code rather than companies serving it.

This does point to some problematic directions though.

If web standards were simpler and it were easier to build competing browsers (and/or if we could trust plugins more), then it wouldn't be much of a concern if some browsers choose to experiment with these kinds of protections.


This raises the question: Why delay instead of block?

Assumption is that user wants page to load faster but does not object to tracking.

What if user wants page to load faster and objects to tracking?

Source of Firefox tracking protection is list at disconnect.me?

credit: eco https://news.ycombinator.com/item?id=15964393

Basic gethostbyname()->HOSTS file blocking:

   #!/bin/sh
   # Prefix each hostname on the block list with an unroutable address
   # and append to the HOSTS file so lookups resolve to nowhere.
   curl https://disconnect.me/trackerprotection/blocked \
   |sed -n '/<\/br>/!{/\./s/^/255.255.255.255 /;};/\./p' >> /etc/hosts

Users may want to add the entries from https://disconnect.me/trackerprotection/unblocked as well.

Beyond the HOSTS file, authoritative DNS gives more flexibility, e.g. logging all requests, using wildcards, etc. For example, tinydns:

   #!/bin/sh
   # Write tinydns data lines ('.name' delegation plus '=name' A record)
   # pointing each blocked hostname at 255.255.255.255.
   curl https://disconnect.me/trackerprotection/blocked \
   |sed -n '/<\/br>/!{/\./s/.*/.&\
   =&:255.255.255.255:1/;};/\./p' >> _root.zone/root/data
   # tinydns-data reads ./data, so cd into the directory, not the data file:
   if cd _root.zone/root; then exec tinydns-data; fi
Or dnscache:

   #!/bin/sh
   # Generate a script that writes a blackhole server file for each blocked
   # hostname under dnscache's root/servers directory, then run and remove it.
   curl https://disconnect.me/trackerprotection/blocked \
   |sed -n '/<\/br>/!{
   /\./s/.*/echo 255.255.255.255 > _dnscache\/root\/servers\/& /;};
   /\./p' > block.sh
   if sh block.sh; then rm block.sh; fi
Note that I am not recommending this particular block list; compared to the one I use, it seems incomplete. I prefer to build my own list from the DNS logs of my own network traffic. Using wildcards can shorten a long list like this substantially.


Blocking is already supported: https://support.mozilla.org/kb/tracking-protection#w_how-to-...

Mozilla doesn't want to enable it by default because it is the funding model of the web, and no viable alternative funding model for most of the web's content has arrived yet. (At least, that's the public statement. I don't know if e.g. potential lawsuits, etc. also play a role.)


Tracking is the funding model of online advertising and investor storytime, but it's not the funding model of the web. I don't think there is any such thing as a funding model of the web.

The web was exclusively non-commercial at first; later it was opened to commercial activity, and later still to advertising, which added tracking because the business model was broken and advertisers needed something to show their customers and investors to gain and keep their trust that online advertising was not the near-total scam that physical-world advertising mostly is.

The unexpected side effect is the creation of a global totalitarian surveillance state just to show ads: https://www.hooktube.com/watch?v=iFTWM7HV2UI


It's also indirectly their funding model. The bulk of Mozilla's revenue comes from search partnerships, and those partners earn the bulk of their revenue from advertising.


They've always opposed including an ad blocker and tracking blocker, claiming it's not their choice to make, but it fooled no one; we all know it's because their business model and funding rely on tracking and ads.


Correction to example 2:

   sed -n '/<\/br>/!{
   /\./s/.*/.&\
   =&:255.255.255.255:1\
   =*.&:255.255.255.255:1/;};/\./p'


I've wondered if it would be possible to tumble the requests via a P2P network and completely destroy the analytics.


Browsers leak all kinds of information that can identify you via browser finger printing (which is already widely used): https://en.wikipedia.org/wiki/Device_fingerprint

You can check out how unique you are here: https://panopticlick.eff.org/
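
For a concrete sense of it, here is a minimal sketch of the kind of signals a fingerprinting script combines (real fingerprinters layer canvas, WebGL, font, and audio probes on top of these):

  function naiveFingerprint(): string {
    // Each signal narrows you down a little; combined they are often unique.
    return [
      navigator.userAgent,
      navigator.language,
      `${screen.width}x${screen.height}x${screen.colorDepth}`,
      String(new Date().getTimezoneOffset()),
      String(navigator.plugins.length), // even the plugin count leaks entropy
    ].join("|"); // real trackers hash something like this, e.g. with SHA-256
  }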


I don't have much to add, but I think you have an interesting idea and it is worth exploring further.


Use Tor?


That doesn't accomplish my goal. I don't want to hide; I want to ruin the collected data by making it inaccurate and useless.

EDIT: Judging from the low quality of online marketing, it isn't hard to do.


Something like AdNauseam?

https://adnauseam.io/


I'd prefer to be paid to be tracked. E.g., the browser should send my wallet address as a signed header, and upon payment verification the delay (or any other penalty) would be lifted for 60 minutes.
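
Purely as a hypothetical sketch of the shape such a scheme could take (the header name, the signing scheme, and any browser support here are all invented):

  // Hypothetical: the browser signs wallet + timestamp with a user-held key;
  // a site could verify payment out of band and lift its penalty for 60 min.
  async function trackingConsentHeader(wallet: string, key: CryptoKey) {
    const payload = `${wallet}:${Date.now()}`;
    const sig = await crypto.subtle.sign(
      "HMAC", key, new TextEncoder().encode(payload));
    const b64 = btoa(String.fromCharCode(...new Uint8Array(sig)));
    return { "X-Tracking-Consent": `${payload};sig=${b64}` }; // made-up header
  }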


I just block tracking domains. And ad domains. My browser, my rules. The sites that can't handle it aren't worth the bother.


Tracking scripts should be blocked by default, not just delayed. If a user has some strange desire to be watched, give them the option to opt-in to their browser unblocking tracking scripts/domains.


> Tracking scripts should be blocked by default, not just delayed.

There is in fact a preference to do that: Settings -> Privacy -> Tracking Protection, set to always.

It breaks some reasonably popular sites, which is why it's not on by default across the board right now.
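
(If you prefer about:config or a user.js file to the Settings UI, the pref behind that switch should be the one below; the name is as it appears in about:config, so verify it against your own install.)

  // user.js equivalent of Settings -> Privacy -> Tracking Protection: always
  user_pref("privacy.trackingprotection.enabled", true);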


Yes, there's an option to enable blocking. I think it should be the other way around, which was kind of my whole point. That's the only way to encourage site owners to stop designing sites that break when they can't spy on users.


If the tracking protection thing were being created before the sites, I agree.

As things stand, it's a hard sell to have a browser do something that breaks sites that used to work. Most users tend to not be happy about it. This is definitely an area worth pushing on; the question is how best to do it.


Without tracking scripts, many webpage owners will be making much less money off of Firefox. If they don't make a whole lot of money off of Firefox, they won't test and optimize for Firefox much. So, we'd be seeing a lot more breakage of webpages in Firefox.

Mozilla could maybe afford to pull something like that if they had majority market share, since webpage owners would then have to deal with their decisions in order to make any revenue at all. But they don't.


Then Mozilla risks losing 90% of its revenue, because it comes from online tracking and ads.


> To conclude on how useful the tailing feature is – unfortunately, at the moment I don’t have enough data to provide (it’s on its way, though.)

I wonder if this post was rushed to publication, to manage Mozilla's public image and reestablish it as a friend to user privacy, after the 'Looking Glass' fiasco a few days ago.

(I'm not exactly opposed to such PR efforts, as long as they're accompanied by actual internal change in the company.)


This post is on the personal blog of an engineer. I somewhat doubt it was posted in response to the Looking Glass thing.


Much of Mozilla's communication is PR and marketing.

Maybe this is damage control, but somehow I don't think they are well organized enough to pull that off.

Besides, how is this helping with privacy? Trackers are still loaded and still tracking.


"By Deeds, Not Words", Mozilla. They've had a few too many strikes lately. Their actions seem particularly greedy and tone-deaf given their supporters, and no longer aligned with what I want out of a company. Despite their constant words and apologies, they keep pulling this stuff; as a 10+ year user of Firefox I'm pulling the plug on it, just can't be trusted any longer.

Silently installing a plugin and doing an end-run around any policies in place etc. is just clown-school level.


Any idea if a detailed notice from Mozilla will be issued about this at https://developer.mozilla.org/en-US/Firefox/Releases/57 or similar?


This is nothing new; I already chain a local proxy and a caching proxy (Privoxy and Squid) in front of Firefox on my PC to block ads and trackers. The problem is that Firefox has a memory leak issue: it consumes ~2GB of RAM while browsing Facebook or when using add-ons like Ghostery or Adblock.


Why would you use Ghostery, which was a tool to study people who want to block ads so they can be better served ads?

Why would you use Adblock, which is outdated, when uBlock Origin has been available for a while?

It's strange that you take extra steps to block ads and trackers but peruse Facebook, whose sole purpose is tracking and profiling you, and through you your family, friends, and acquaintances.


I don't use Ghostery or AdBlock myself. I've seen friends use them and watched how much of their PCs' memory Firefox consumes.

I use different browsers for different categories of websites: Brave for social media and Firefox for news reading and research. Both browsers use the same local proxy connection on my PC. I installed Privoxy and Squid on the same machine, so both browsers appear as the same user agent (e.g. Chrome).

When I access http://www.janbambas.cz, my browser of course loads it faster, because Privoxy allows/blocks these domains:

  2017-12-21 00:55:32.780 00000f04 Request: www.janbambas.cz:443/
  2017-12-21 00:55:34.632 00001194 Crunch: Blocked: fonts.googleapis.com:443
  2017-12-21 00:55:34.670 00000e3c Crunch: Blocked: secure.gravatar.com:443
  2017-12-21 00:55:34.697 00000e8c Crunch: Blocked: www.google.com:443
  2017-12-21 00:55:34.711 000013e8 Crunch: Blocked: secure.gravatar.com:443
  2017-12-21 00:55:34.741 000012cc Crunch: Blocked: secure.gravatar.com:443
  2017-12-21 00:55:35.680 00000ce4 Crunch: Blocked: secure.gravatar.com:443
  2017-12-21 00:55:35.721 00000978 Crunch: Blocked: secure.gravatar.com:443
  2017-12-21 00:55:35.736 000010e0 Crunch: Blocked: www.google.com:443
  2017-12-21 00:55:35.759 00000104 Crunch: Blocked: secure.gravatar.com:443
As you can see, Privoxy blocked 9 connections across 3 domains.

What makes me sick is that this person, Honza Bambas, doesn't give a real solution to our browsing problems. He only makes us die slowly.


Can you explain why you think gravatar is a problem?

Or google fonts?


Blocking both domains reduces page load time: the browser doesn't have to wait _forever_ for those connections to complete, because they are blocked outright. Moreover, I only need the article, not a picture of his avatar or fancy fonts. Remember, Google Fonts are files on Google's web servers, and every request to a web server is written to its log file (browser user agent, HTTP referer, IP address). Like suspicious OCSP requests (in disguise), it's a kind of tracker, right?


Is there a config setting for the delay period in double precision units? :)


If you're alluding to setting the delay to infinitely large, you can do that by going to Settings -> Privacy -> Tracking Protection and setting it to "always".


Search for "network.http.tailing" on about:config.

No, I don't know what the settings do. I was just searching for "delay" and found the tailing settings.
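
If the goal is just to experiment with switching the behavior off wholesale, the master switch that search turns up looks like this (pref name taken from that about:config search; the sibling delay prefs are undocumented here, so treat this as a guess):

  // user.js: disable request tailing entirely (pref located via the
  // about:config search described above; sub-prefs left untouched)
  user_pref("network.http.tailing.enabled", false);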


Google search pages sometimes take a long time to load in Firefox, and after a while they often fail altogether.

Does that happen to anyone else?


Unfortunately this can give ammunition to the anti-neutrality lobbyists. "See, everyone is doing it anyway."


Well, love them or hate them, but tailing them is not Net Neutrality ;)


Is this net neutrality compliant?


Net neutrality is in regards to the internet carrier, not the browser (or client). The browser is irrelevant to net neutrality.


I 100% agree with what you are saying, but in support of the GP, I'm sure they meant the spirit of net neutrality (don't shape my request), NOT the legal definition of it.


The browser's requests are your requests.


But are they? You have no real control over what the browser does when you say GO. Sure, you can change browsers, and that's cool, but most consumers don't know who/what/when/where/how to change to another browser.

It's a bit naive to assume everyone can switch willy-nilly from one browser to the next just because we technical people can.


> but most consumers don't know who/what/when/where/how to change to another browser.

I don't think that's true.


>The browser is irrelevant to net neutrality. //

It's not the Net Neutrality currently being spoken of, but it's relevant to the neutral carriage of data over the internet. If browser companies pick and choose whose data to delay (or otherwise alter), then they have the power to bias the web: Firefox could delay scripts from every company other than Google, for example, in order to preference their business associate. They presumably aren't informing users, or requiring users to enable the function.

In short, it seems highly pertinent to the net neutrality debate, to me, despite perhaps not having reached problematic levels and despite not being the specific form of neutrality that has lately been grabbing the headlines.


It's an issue with similar consequences to net neutrality, but it's very much not the same issue. The network is one layer; the client is a different layer. Let's not start forcing clients to conform to network rules, or vice versa.

Additionally, the reason net neutrality is so important is that there is no consumer choice in the ISP market for many users. Even when one browser is dominant, there's a lot more choice among browser vendors, so pushing regulation out to them is less important.

But let's not muddy the waters of net neutrality by injecting separate issues into the debate. Yes, browsers should treat all traffic equally, but that isn't net neutrality.


I think the anti-net-neutrality people are running out of fresh, not yet discredited arguments.

In the OSI model, IIRC:

Application (software) is layer 7.

Physical connection is layer 1.

I am sure somebody will correct me.


Not sure who you're claiming is anti Net Neutrality, but if that's levelled at me it's quite wrong.

See my other comment in the thread, but in short: I don't think a user really cares that the action happens at a different level of the conceptual OSI model. They care whether they can consume given media. If NN legislation shifts things so that the browser blocks the media from a particular server instead of that server's upstream ISP, I don't think users are going to be applauding much.


They should be applauding though. We can always fork Firefox if it's not acting in our best interest. We cannot fork Comcast.



Well "cant" v. "wont" is sometimes hard to distinguish. There's little practical differences here though.


Yes, but then you need to ask yourself: where do we begin, and where do we stop?

Does this pertain only to browsers, or to all applications/clients that connect to the internet? What about my XMPP clients, or my IRC clients? Do they have to follow net neutrality as well? On top of that, does a FOSS project like Firefox, where the user defines what goes into the packaged binary, have to follow it?

Net neutrality, or the law in general, overreaching like that could jeopardize support, especially when it is not easy to answer any of the questions I listed above.

Software freedom is important.

My opinion.


There's something to be said for being wary of browser makers influencing the web unduly, which we already see to a much greater and more damaging degree from IE/Microsoft and Chrome/Google; but mixing user-agent policy with transport neutrality only serves to muddy the water around the very important and really quite straightforward issue of IP endpoint neutrality for ISPs.

They are two separate, important issues, but confusing them serves no one.


They could, in theory, have logical equivalence. If my ISP retains neutrality wrt transport between YouTube and my client [eg to comply with legislation] they can _in_theory_ pay Mozilla instead to throttle the connection at the UA for any user with an IP on the ISPs network, delay transfer, limit total data, etc. - the effect would surely be close to identical to ISP interference wrt the traffic carried between YT and my client browser.

Because of the possibility of such an equivalence it strikes me that the issues are not _separate_, though obviously not identical either. If Firefox blocked/throttled sites but allowed others [in the same category] for money, then Mozilla through my client would be enacting the same effects as net non-neutrality. They could charge for an add-on to remove all/some blocks, or charge the server endpoint company to remove the block just for them.

The point is that for a user the important endpoint is their sensory input, and a browser manufacturer can filter/throttle/block that consumption the way an ISP can.

Legislation could be written that would encompass preventing a browser manufacturer from covertly interfering in data transfer, whilst equally demanding ISPs carry data regardless of origin. Equally, poorly drafted legislation targeting ISPs could impact browser makers' ability to implement black-/grey-/white-listing of malware, etc.

Whilst we're focussing on such bias that can interfere in our web/internet consumption I think it's the perfect time to make sure we don't solve the problem in one place only to leave a gaping hole that allows the same corporations to enact the same controls.


I agree that it's definitely relevant. In theory, users can run any browser, so it's not a big issue. In practice, the web has become so complex that it's very hard to challenge established web browsers. I think we should push for simpler standards on the web, possibly "levels" of complexity. So you can have HTML-only compliant sites for which very simple HTML-only browsers suffice; sites and browsers that implement a minimum of javascript; and "full-featured" (i.e. bloated and full of adware+tracking).

This or some similar proposal could open up competitiveness and options in the browser space, which I think would go a long way toward solving some of these issues.


This doesn't seem like something that should be built in at the browser level. The browser's job should be to process requests and render webpages as quickly as possible. If a user wants to intervene in that process, that is their prerogative, but it should remain at the plugin level.


I agree the browser's job should be to process requests as quickly as possible.

If the web page asks for 150 scripts to be loaded (e.g. that's what https://www.nytimes.com does right now), what is the fastest way to load them? "All in parallel" is not the right answer, so you end up prioritizing. At that point, maybe you want to prioritize the 40 non-tracking scripts (the number that get loaded if I enable tracking protection in Firefox on that site) over the 110 tracking ones.


Problem here is Firefox is making big and sometimes wrong assumptions about what these so-called "tracking domains" do. That's not in any W3C spec -- that's Mozilla attaching specific assumptions to specific parts of the Internet. Case in point:

> One example is Google’s Page-Hiding Snippet, which may cause a web page to be blank for whole 4 seconds... Both the analytics.js and the test script are loaded from www.google-analytics.com, a tracking domain, for which we engage the tailing delay.

Clearly if a domain is serving up an A/B test script then it is doing more than tracking.

> Simply said some sites may need to be fixed to be able to adopt this change in scheduling.

I have a problem with the term "fixed" here. Seems like Firefox's assumptions are what's broken here.


I built PrivacyWall to block all Firefox telemetry URLs. It may make your browsing even faster, since it blocks all unwanted background data collection at the OS level. That means it is more effective than an extension running on top of Firefox, and Firefox cannot surreptitiously send data without your knowledge anymore. If you are working on sensitive projects and this is something you are worried about, I am making it available free for non-commercial users at http://www.privacywall.org

I received a lot of emails from loyal Firefox users telling me they are worried about their privacy when using Firefox 57 after the Looking Glass debacle, so I decided to make it available for free. If there are tracking domains or sites you think should be blocked due to suspicious behavior, tell me the URLs and I will evaluate them for inclusion. Feel free to submit them as a comment in this thread or via the form on the PrivacyWall homepage.


So if I'm worried about tracking, I should install some closed-source binary from a site which has no identifiable information so it can control all connections on my computer?


Fun fact: you can turn off telemetry yourself in Firefox. And it's open source, so you (or someone else) can check that it's actually off.


For the sake of Firefox users, I hope you are right. There have been complaints that it flips back on after Firefox updates, so your privacy is at the whims of Firefox.

Fun fact: Firefox just pushed out the Looking Glass add-on to users without notice or consent this past weekend.


In the case of it flipping back on, you're at the whims of bugs - just like with all software. If there's any organisation I'd trust, it's Mozilla - if only because if they make mistakes, there is a lot of pressure for them to correct this (case in point: Looking Glass).

Note that Mozilla pushes code without explicit consent for all parts of it all the time - they're called software updates. The problem in this case was that it was for a potential feature that very few people cared for, and that it showed up as a scary extension in the extension list. That definitely should not have happened, but it's not a privacy violation.


This is the ground on which the war against Google, Facebook, et al is fought, but open-source solutions like Firefox are going to need to find a better way to monetize if they want to win. All this does is firm up the resolve of user identity resellers like FB and Google to see that Firefox gets destroyed, and it demonstrates why tech eventually all boils down to platform control.

With Facebook code intruding into every nook and cranny of the web via React and Google's position as digital Sauron, monitoring and parsing out your search history, watch history, all phone activity, emails, SMSes, and more, user identity resellers are a formidable foe, and Mozilla is in its familiar position as the David of noble, non-moneyed interests challenging the user-hostile Goliath.

Firefox may find unlikely allies in Microsoft and Apple, since these two companies still make at least some money selling an actual product instead of just slurping up information about their users and repackaging it. The best thing Mozilla could do is convince Apple and Microsoft to give up their independent lackluster browser implementations and ship Firefox as the default instead.


I agree it would be great if other browsers implemented this as well, but you lose me at:

> The best thing Mozilla could do is convince Apple and Microsoft to give up their independent lackluster browser implementations and ship Firefox as the default instead.

Multiple independent implementations are of vital importance to the web. We've seen it with IE6 before, and having two browsers is cutting it too close.


The "independent implementations" aren't worth much if nobody uses them. Chrome already has a majority of the market share at ~60%, with Safari, its nearest competitor, trailing far behind at ~15%. Firefox is sitting at 9.3%. [0]

Combining IE/Edge, Safari, and Firefox still leaves that coalition at half of Google's market share, and that's still a distinct disadvantage, but it's a much stronger fighting position than single-digit market share. Google and Facebook are not naive upstarts and it won't be easy to quash them, especially not when little old Mozilla is running on comparative fumes next to a couple of the best-capitalized companies on the planet.

Google and Facebook's interests are aligned as both base their business model on profiling and reselling data derived from user behavior, so it's unlikely that Google will feel inclined to implement desired user protections against that business model.

It's very important that we have strong competition representing a diversity of interests, especially when the dominant player's business model is just "slick spyware". Without a competitor in the same league, consumers don't really have an option.

[0] https://www.w3counter.com/globalstats.php?year=2017&month=11


> Mozilla is in its familiar position as the David of noble, non-moneyed interests challenging the user-hostile Goliath.

Mozilla is in its familiar position of only existing because Goliath gives back a small fraction of its tracking/advertising earnings in exchange for sending users its way. The exact opposite of non-moneyed interests.

Do you remember how Apple considered using Mozilla code to build their browser but rejected the idea because the code quality was far below their standards, so they went with KHTML to build WebKit? Yeah, vendor-lock-in, walled-garden Apple ditching Safari for Firefox: not happening.

What's to gain for Microsoft by switching to Firefox? Chances are this is not happening either.


> Mozilla is in its familiar position of only existing because Goliath gives back a small fraction of its tracking/advertising earnings in exchange for sending users its way. The exact opposite of non-moneyed interests.

I respect this position and admit that it's relevant. However, under this hypothetical, they may not need such sponsorship anymore, and they're clearly currently taking actions that will help protect users from it.

IMO this is only a serious issue if you believe that Mozilla's survival is predicated upon the continued flow of search sponsorship dollars. While their budgets may change and they may come out the end looking much different, I don't think that Mozilla or a spiritual successor serving substantially the same purpose (like OpenSolaris -> illumos) will cease to exist, no matter what Google et al do to them.

>What's to gain for Microsoft by switching to Firefox?

Apple and Microsoft's interest would be defensive, blocking Google's dominance over the browser platform. They'd be interested in this because Google is currently a serious threat to both companies' major lines of business.

At present, with a fragmented competitive landscape, it's unlikely anyone will be able to stand up to Chrome. And however good or bad we believe Google or Chrome are, it's never good to lack a peer that can keep the company accountable (that is, monopolies are bad).

> Chances are this is not happening either.

I agree completely, I don't think this is a likely scenario. I just think it's an interesting and positive one.


Take all the tracking away, sure, but let's leave React out of this.


Well, that's just the thing. Facebook's control over React means it won't be left out of this. Why do we think that they invest in this? Every company wants to integrate themselves as an irreducible dependency in your processes and systems. Facebook doesn't get any money for letting you use React, and they no longer get the value of using it as a kludge to allow them to more easily steal the intellectual property of small inventors (not that they needed anything extra on that front anyway).

The continuing value added to Facebook's bottom line by React is in granting influence far outside of their natural scope. Could they counter Mozilla's move by introducing new React "features" that Mozilla simply "can't support" yet, even though they "really super duper hope the new featureset can come to Gecko-based browsers next year"? Even more insidiously, could there possibly be some little snafus in their QA process that let bugs that affect Gecko-based browsers go unnoticed?

That's their leverage, that's their retaliation against Apple and Mozilla's moves to stymie their flow of user behavioral data. But, I'm sure a company whose revenue stream is wholly predicated upon reselling user identity would never do something "shady" like this to prevent competitors from blocking their access to the data that forms the lifeblood of their product, right?

... Right?


But I like it.


Well, <expletive>. Looks like I can't rely on either Chrome or Firefox not to <expletive> with websites.

I would really love to just stop using the web entirely and go back to Gopher.


From the article:

> Scripts are delayed only when added dynamically or as async. Tracking images are always delayed. This is legal according all HTML specifications and it’s assumed that well built sites will not be affected regarding functionality.

(emphasis mine)


I don't know about you, but I don't browse the World Wide Implements-The-Specification-Perfectly Web.


If you aren't complaining because it violates the spec, what are you complaining about? Is any change to any detail about how a browser works necessarily bad?


I understand the complaint. Let me put it this way: how would you feel if your ISP was delaying your connections to a subset of websites for a few seconds? It wouldn't violate any specs, as far as I know. But a lot of people have expressed the sentiment that they don't want middlemen messing with websites. It's not clear to me that Firefox qualifies as an exception to the rule, especially if this becomes something other browsers adopt.


It's not even a NN-type middlemen issue for me, though that is exactly what's going on here. The bigger problem for me is causing regressions for users. On top of that they're causing regressions just because they don't like X traffic, and they're not even exclusively affecting X traffic - they're affecting unrelated traffic too. It's just an incredibly arrogant, annoying, bad thing to do to users who never requested this to begin with.


It should only affect pre-broken code. It's like complaining that a compiler did something different than you wanted with undefined behavior: I get that it's annoying, but maybe fix your code so it's not a problem?


This isn't even a broken code issue. This is a totally unnecessary functionality regression issue. Instead of just loading a page, they're waiting four seconds to load the page, because the page uses an asset on a domain they flag as a tracking domain.

This is like if the compiler generated loops with 4000ms sleeps because the app links a library the compiler thinks is annoying.

Technically the compiler never said it wouldn't add random sleeps into loops. It's totally in spec! What's the big deal?

Meanwhile, my app is slow now. Or, in the case of some apps, actually broken for active use cases where it used to work fine. Which, again, is totally a regression by any QA standard.


> they're waiting four seconds to load the page

You make it sound like Firefox is just adding a wait for no reason.

The reality is that the page is asking Firefox to download dozens or hundreds of scripts [1]. Firefox needs to prioritize those loads somehow, because it generally doesn't want to open that many connections to the server in parallel. So it prioritizes the non-tracking bits over the tracking ones. If all the non-tracking bits are done loading, the trackers start loading at that point.

> This is like if the compiler generated loops with 4000ms sleeps

No, it's more like if your OS scheduler decided to prioritize some applications over others based on how much it thinks you care about them (e.g. based on whether they're showing any UI, or based on whether they're being detected as viruses by the virus scanner).

[1] For example, http://www.cnn.com/ shows 93 requests for scripts in the network panel in Firefox. If I enable tracking protection, that drops to 37 requests.

Or for another example, http://www.bbc.co.uk/news has 67 script requests and only 20 with tracking protection enabled.

Or for another example, https://www.nytimes.com/ has 150 script requests and only 40 with tracking protection enabled.


Much of this discussion is missing that the point is to _speed up_ page load/display. It is NOT like a compiler generating sleeps.

The bazillion tracking scripts loaded by pages is slowing down time to view/interaction on the page. Firefox is taking scripts that are _already_ being marked as loadable asynchronously/delayed, and delaying them until the page is otherwise loaded. That's it. It's not an arbitrary 'sleep', it's an attempt to prioritize UI responsiveness over tracking scripts.

To the extent it breaks or _slows down_ pages, that's an undesired side effect, not the goal. If it does that to a lot of pages, the feature won't be successful and will be rolled back, I bet.


Your app is only slow now if you are blocking its content and/or most basic usability on the loading of external trackers - a lame yet increasingly common practice that needs to stop.

According to the article, they're only delaying these resources when loaded dynamically or async - so developers should be able to "fix" this by loading tracking scripts synchronously, which is what they are effectively doing already if this new FF behavior causes any noticeable impact.

It's hard to feel much sympathy for devs who have _explicitly_ prioritized the sending of their users' info to external parties, over their sites being baseline usable.
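
If I'm reading the article right, the distinction works out roughly as below (a sketch only; the URL is made up, and whether a given load gets tailed ultimately depends on Firefox's heuristics):

  // Tailed: a dynamically inserted (and therefore async) script, with no
  // spec-level guarantee about load order or timing.
  const t = document.createElement("script");
  t.src = "https://tracker.example.com/t.js"; // hypothetical tracker URL
  document.head.appendChild(t);

  // Not tailed, per the article: a parser-inserted synchronous tag in the
  // page markup itself, i.e. <script src="..."> without the async attribute,
  // which blocks HTML parsing: exactly the trade-off criticized above.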


I would go to my package manager and install a new ISP, of course /s


You should expect it, though.


I have both written software for standards-based protocols, and used software written to standards-based protocols, so no, I would not expect it.

Not breaking backwards compatibility for existing users is the golden rule of software support. When an unfortunate pull-requester attempts to break backwards compatibility with Linus Torvalds' software, he has some very choice language to complain about the practice. If they were attempting to break backwards compatibility just because they disliked some particular app or service or use case, he might even use foul language.

Fortunately, I am a very well behaved and good little HN user, so I will not repeat that language here. But imagine what Linus would have said, really loudly, with all capital letters. There. That's better.


Linux breaks backwards compatibility all the time, just not for userland programs. But if you are expecting your kernel module to be low maintenance, you are in for a surprise...


The kernel has a clear definition of what will be backwards compatible, and what never will be. In-kernel interfaces are never stable, and kernel-to-userspace interfaces are very stable, with an ABI docs directory breaking out what is and isn't.

https://github.com/torvalds/linux/blob/e7aa8c2eb11ba69b1b690...

If the kernel's user interface just started blocking on open for 4000 milliseconds for no apparent reason, people would not be happy. Firefox expects users to demand that app writers edit, recompile, test, and ship them a new app to prevent the block. This is <insert lots of not very nice adjectives>.


This is an interesting example, because the kernel's open() interface does block for 4,000 ms all the time: under heavy swapping, or when the ext3 journal is full and some other app just called fsync().

Apps have to handle it. If they don't, because for example, they access the disk while also trying to be a display compositor, then they are simply broken. It does not matter if the kernel is usually fast enough. Because sometimes it isn't.


Do you know the reason for this distinction? On Windows, kernel-mode APIs seem to stay quite stable as well... there are exceptions, mostly on the device-driver side, because hardware tends to evolve (e.g. display/graphics drivers), but generic drivers (= kernel modules) generally seem to be able to rely on backwards compatibility too.


My understanding is that it's to intentionally discourage trying to keep things out of tree, where they will inevitably break in worse ways. It also makes the GPL enthusiasts happy, but I doubt that was Linus's big goal.


I see, thanks. I'd be curious as to why he feels it would "inevitably break in worse ways", seeing as how that's not really the case on other platforms.


Basically, if you're going to have drivers out of tree, the driver ABI has to be perfectly stable, which restricts internal refactoring. Otherwise things break - and this does happen elsewhere; lots of drivers for Windows XP don't work on 10.


:[ I'm afraid this might be a "too little, too late" attempt to stave off the downfall of net neutrality.

Net neutrality means that the financing of the internet is driven by site owners receiving funds from advertisers (or from subscriptions, but that seems comparatively negligible).

Getting rid of net neutrality will mean ISPs (despite their common-carrier designation) will control this whole model, meaning they will control the internet.

The largest argument in support of repealing net neutrality involves taking power away from advertisers... and so does this Mozilla policy.


I don't think that this feature has anything to do with net neutrality...


I'm not sure what article you read, but this change in Firefox is simply a change in the timing of requests to certain domains that are known to be for tracking/analytics in an attempt to improve the load time of websites.

If anything this move is anti-Net Neutrality since it's prioritizing non-tracking domains over tracking domains as opposed to treating all domains equally.


It cannot possibly be anti-net-neutrality, because it is an action being taken by the user agent, not a transport.


@Rudism actually convinced me you're correct (in spirit), unless there is some other piece of information we're missing. However, a client application is not the same thing as a user agent. Just because their software is on my laptop does not mean I'm in control. What makes you think client-side requests preclude this being about net neutrality?

Latency is additive, regardless of what layer we're talking about. Increases in transport latency can be compensated for at the physical or application layer; time is time, and the OSI model isn't involved.


A good point, although I did read the article this comment chain is geared towards. Be more forthright with your condescension, please ^_^.

The article was put out by Mozilla, so of course their stated point is to make everyone's experience better. I'm in marketing, so I tend to ignore a good portion of the text in marketing/PR blogs such as this one.

Aside from that, I hadn't thought of this. You're right that, if anything, this will increase the load times of companies who rely on advertising revenue. Perhaps it's preparation?


> You're right that, if anything, this will increase the load times of companies who rely on advertising revenue. Perhaps it's preparation?

The _intention_, if we believe the statement (I understand you may not), is actually the opposite of this: to _improve_ perceived load times (time to display/interactability) of sites that use lots and lots of tracking scripts. If a site does not use many (or any) tracking scripts, it won't affect it at all.

There are sites that use lots and lots of tracking scripts, and it actually slows down UI. The intent of this change is to _speed it up_ again.

If it doesn't do that, and instead makes lots of sites load more slowly to the user's perception, it will not have been successful at its claimed goal.


Well! After re-reading, you've convinced me that I misunderstood the mechanics of their change. Thank you! Hopefully HN can forgive me for learning ^_^.



