This is a great solution to a problem that seems to be becoming more prevalent. Reminder to devs that bolted-on third-party scripts should not be in the critical path. Meaning, if you are doing something like capturing a click event in a Google Analytics handler and blocking the redirect until you’ve tracked the click - you’re going to have a bad time. Many tracking scripts are designed for this and will gracefully/silently fail via something like an array push mechanism, but I’ve encountered the opposite as well.
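For instance (a rough sketch assuming a gtag.js-style snippet; the selector, event name, and URL here are made up), the difference between giving the tracker a veto over navigation and giving it a deadline looks roughly like this:

```javascript
// Anti-pattern: navigation waits on the tracker's callback, which never
// fires (or the call throws) if the analytics script is blocked.
document.querySelector('#buy').addEventListener('click', (event) => {
  event.preventDefault();
  gtag('event', 'buy_click', {
    event_callback: () => { location.href = '/checkout'; },
  });
});

// Resilient alternative: navigate regardless, and give the tracker a short
// deadline instead of a veto.
document.querySelector('#buy').addEventListener('click', (event) => {
  event.preventDefault();
  const go = () => { location.href = '/checkout'; };
  const timer = setTimeout(go, 300); // proceed even if the tracker never answers
  if (typeof gtag === 'function') {
    gtag('event', 'buy_click', {
      event_callback: () => { clearTimeout(timer); go(); },
    });
  } else {
    clearTimeout(timer);
    go();
  }
});
```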
Being an indie dev with a pihole setup has been tough - I’ve gotta turn it off a lot for various client projects - but it’s also helped me build more resilient applications that work just as well for people who don’t have trackers enabled.
>I’ve gotta turn it off a lot for various client projects
As a web dev I'm running into a lot of security products dorking up web projects and processes these days. It seems to be increasing.
I've got customers with security software or other privacy related tools that are constantly 'trying' to do the right thing ... but just become support ticket overhead for me.
It's ULTRA frustrating at this point.
I've run into several customers now whose email scanners not just block emails arbitrary, but also follow links (fine by me) ... and even SUBMIT A FORM (NOT ok). Presumably to avoid some malware delivery, but now they've submitted something to us on a one time use form...
So just sending them an email means their software submits accept or decline options on a form (with or without the email reaching them) and we get a ton of "but I didn't get the email and I didn't decline anything".
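One partial mitigation (just a sketch, assuming an Express-style backend; the routes and markup are invented) is to never act on the GET that a scanner follows, and only consume the one-time token on an explicit POST from a confirmation page. As noted above, some scanners will submit forms too, so this only raises the bar:

```javascript
const express = require('express');
const app = express();
app.use(express.urlencoded({ extended: false }));

// The link in the email only renders a confirmation page. A scanner that
// merely follows URLs sees this page and stops here; nothing is consumed.
app.get('/invite/:token/decline', (req, res) => {
  res.send(`
    <form method="POST" action="/invite/${encodeURIComponent(req.params.token)}/decline">
      <p>Are you sure you want to decline this invitation?</p>
      <button type="submit">Yes, decline</button>
    </form>`);
});

// Only an explicit POST actually consumes the one-time token.
app.post('/invite/:token/decline', (req, res) => {
  // declineInvite(req.params.token); // hypothetical application logic
  res.send('Invitation declined.');
});

app.listen(3000);
```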
Meanwhile the end customer is too technically behind the ball to entirely understand what is going on, and some ultra aggressive IT admin just keeps doing it. If you have a lot of customers it just seems to never end.
I kinda want to abandon email because of it but there's not a lot of good options.
Other issues include some unknown software installed by someone's kid (their IT guy) that blocks rando boring API calls ... the list never seems to end.
I support these privacy / security initiatives 100%, we don't do any insidious tracking or anything like that, but it is starting to hit entirely innocuous stuff.
Read this and replace "web" with "desktop" and large parts are still spot on, wrt virus scanners and the like. We work on a product whose installation is a bit complicated because it needs to install a bunch of other things. 95% of the time that fails, the reason is some overly active security tool which messes up (the other 5% is mainly machines which haven't been updated for years). To the point that we've started wondering if we shouldn't just start requiring dedicated machines, or at least machines without any of that software, or even ship PCs with the application pre-installed, as it would likely turn out cheaper. Unfortunately that is not really an option as a web dev, so that situation is even worse...
I develop desktop apps that are run exclusively virtualized, and don't really have to deal with either set of problems. From a developer perspective I'd say it's a pretty sweet spot, as you get the best of both worlds to some extent.
Granted, communicating with any type of hardware on the users machine is a major challenge. We've had to spend significant effort just troubleshooting printer issues. If you need low latency, virtualization is likely a non-starter.
The upshot is that the latency between our apps and the application server/database (hosted in the same domain) is much lower than for a conventional web app.
> Being an indie dev with a pihole setup has been tough - I’ve gotta turn it off a lot for various client projects
When you get enough established paying clients, consider firewalling work gear from home gear.
My client work computers are not my personal computers. My client computers have their own router that is separate from my personal router. At one time I had my personal internet on cable, and my clients on DSL, but unfortunately that's not possible where I am now.
I get a lot of peace of mind from knowing the two are isolated from one another. The only thing they share is a desk. But when work time is done, client laptops go into the closet. Helps with the work-home balance, which is harder working from home.
Any particular reason you don't just use VLANs? It sounds like you're describing a textbook case for them. Nearly any routing software should have a firewall too, and the whole point is handling layer 3. Even for purely your own stuff, segmenting off various devices into their own subnets can still be handy. If you want, even with a single WAN connection you could do something like get a cheap $5/mo VPS|droplet|etc, run WireGuard on it, then route all traffic from a given VLAN through it. That'd give you similar WAN isolation.
Because the router I own is better than the router I don't own.
Further, everything you describe is a bunch of complexity I don't need in my life. My time is too valuable to spend fiddling with configurations every time some piece of kit does a software update.
The way I do it, I plug a wireless router into one ISP, and a different wireless router into another ISP, and I'm done with it. Simple and clean. I'd rather spend my time with my family than "handling layer 3."
Appreciate the reply and everyone has their own circumstances. But it doesn't sound like you've actually considered it or know anything about it, which I guess is the basic answer to my question.
>Because the router I own is better than the router I don't own.
That's obviously not necessarily true. By this logic all upgrades for all time are pointless. Why get a better CPU/GPU/SSD? "The CPU/GPU/SSD I own is better than the CPU/GPU/SSD I don't own" after all. Except it's not, hence the interest in upgrading.
>Further, everything you describe is a bunch of complexity I don't need in my life.
Your setup sounds much, much more complicated actually.
>My time is too valuable to spend fiddling with configurations every time some piece of kit does a software update.
This isn't actually a thing. If anything, a core reason for using VLANs is precisely being able to take any system at all, plug it in or join the network, and have it all be isolated and routed to the right place with zero configuration.
>The way I do it, I plug a wireless router into one ISP, and a different wireless router into another ISP, and I'm done with it. Simple and clean.
Sounds complex and a PITA, not least because it requires multiple ISPs and the associated infrastructure, billing, tech support if needed, dealing with any security issues in their bottom-of-the-barrel AIOs, etc. It's not free either; dual ISPs, at least around here, could easily add $400-1200/year. That's real money; even at the low end it's more than a basic quality switch + router would cost.
I’ve noticed a trend that when visiting the homes of network engineers and sysadmins who have some custom network setup, the wifi is more likely to be broken than the average person who has something off-the-shelf like a Google Wifi puck.
> Being an indie dev with a pihole setup has been tough - I’ve gotta turn it off a lot for various client projects
I have a family member who works in marketing and am regularly asked to either turn off the pihole or add a new URL to the ignored list for exactly this reason.
The easiest way around this is to install a secondary browser (or use a profile in Firefox, but that is cumbersome) for work. They could use a different DNS provider in that browser. I use Brave for this and Firefox for my private stuff.
Or they could ask their employer to pay for a VPN service that comes with DNS. Your family member will then have an easy-to-understand and easy-to-spot (VPN is ON) way to go into 'work mode' and out of it for private use.
Or you could have a nice router, like a Ubiquiti EdgeRouter X, which is cheap and can create multiple networks. You pin the "marketing enabled" device to a different network that doesn't use the pihole as its DNS.
It would be interesting if Firefox could release two nearly identical browsers: Firefox Home, and Firefox Work. The only difference between the two is the name and the color of the icon.
With both programs in their computers, or both apps on their phones, people could more easily isolate the two phases of life, without going through all the rigamarole of profiles.
You've really just described profiles, Firefox just needs to make them easier to use like Chrome. On Chrome they're easier to find, allow you to configure the profile icon, and give you the option to create a desktop shortcut to the profile. Mobile is probably more tricky.
You can use containers, but those only apply to cookies and the like; everything else (settings, add-ons, etc.) will be the same. Different profiles allow for different sets of add-ons, bookmarks, settings, etc.
As a person who has been using NoScript or the equivalent for years now, I appreciate this. It also helps out your clients. Instead of a blank page or a broken one, I would now presumably see a functioning page. While they may not get any ad revenue through me, I can't buy their product if they don't let me on the site. Or review it, or tell people about it, etc. There would be a chance now that I would come away from their website with a positive impression instead of hitting the back button and trying the next search result or just writing them off completely.
If you care about privacy at all, the web is a very broken place.
Blocking events until they are handled by the tracker's event queue is quite a common problem when using PiHole. It would be nice if those event handlers were registered using Google Tag Manager, as that would mean the handlers are never registered if trackers are blocked.
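A sketch of the queue mechanism this relies on (the element and event name are made up): the page only ever pushes onto a plain array, and the GTM container script (if it loads at all) drains it, so nothing blocks when the container is sinkholed:

```javascript
// Standard Tag Manager bootstrap: dataLayer is just an array.
window.dataLayer = window.dataLayer || [];

// Pushing is harmless from the page's point of view; if PiHole or SmartBlock
// blocks gtm.js, the entries simply sit in the array and nothing waits on them.
document.querySelector('#signup').addEventListener('click', () => {
  window.dataLayer.push({ event: 'signup_click' });
});
```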
By the way, I use a VPN to bypass PiHole when I encounter these problems. It's a lot less hassle than switching the sinkhole off/on.
I’m unable to use my bank's app because the tracker being blocked causes an error that fails very loudly and blocks login. I refuse to whitelist it in pihole.
> The SmartBlock stand-ins are bundled with Firefox: no actual third-party content from the trackers are loaded at all, so there is no chance for them to track you this way.
Those third party scripts may not be able to track, but I wonder if the act of loading the stand-in scripts quickly (?) from within Firefox would lead to other issues.
> We also want to acknowledge the NoScript and uBlock Origin teams for helping to pioneer this approach.
Not to belittle the effort by others and other projects, but these two extensions, along with some others (like Privacy Badger), have helped users immensely in protecting themselves.
Hi, lead SmartBlock dev here. My regular job at Mozilla involves diagnosing web sites for web compatibility issues, so I definitely share your concerns -- I routinely see sites relying on scripts loading in a specific order, but not coding themselves in a way that ensures that they actually do.
I haven't received any reports so far during the six-month-or-so nightly cycle where SmartBlock was only on nightly builds, so I'm optimistic. In the worst case we might be able to just add in an artificial delay to fix that, but of course I'd rather not waste users' time like that unless it's 100% necessary.
And ultimately, problems are at least as likely (in my experience) to manifest with scripts loading too slowly, or not loading at all due to random networking hiccups... many sites just aren't very tolerant at all of script loading failures.
If you're not aware, the HTTP protocol specification[1] doesn't give you a technical way of knowing ahead of time whether a request is for a 1x1 tracking pixel.
So the remaining realistic options are:
(1) block ALL <img> tag downloads, which then blocks any 1x1 tracking pixels
(2) allow <img> tags but block some (and maybe most but not all) 1x1 pixels via a blacklist of URL domains (e.g. doubleclick.net) ... and/or heuristics based on the "style" attribute (a rough sketch of this appears below)
Option (1) already happens in many email clients that render HTML.
Option (2) is what's happening with the ongoing cat-and-mouse game with AdBlock, EasyList, etc.
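A toy sketch of option (2) as a page-level heuristic (the domain list is purely illustrative; real blockers like uBlock Origin work at the network layer with far richer filter rules):

```javascript
// Hypothetical heuristic: flag images from known tracker domains or images
// rendered at 1x1. Note the caveat raised downthread: by the time the DOM
// element exists, the request has usually already been made, so a real
// blocker has to intercept at the network layer instead.
const TRACKER_DOMAINS = ['doubleclick.net', 'google-analytics.com']; // illustrative blacklist

function looksLikeTrackingPixel(img) {
  let fromTracker = false;
  try {
    const host = new URL(img.src, location.href).hostname;
    fromTracker = TRACKER_DOMAINS.some((d) => host === d || host.endsWith('.' + d));
  } catch (e) {
    // unparsable src; ignore
  }
  const oneByOne = img.width <= 1 && img.height <= 1; // covers attributes and CSS sizing
  return fromTracker || oneByOne;
}

for (const img of document.querySelectorAll('img')) {
  if (looksLikeTrackingPixel(img)) {
    img.remove();
  }
}
```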
Or Firefox could crowdsource the building of a bloom filter of the URLs for images that are 1x1. Or, if you just want some more general protection, you could learn 1x1 tracking locally, since the set of pages you visit will probably be similar.
The bigger problem is that anything like this and the providers go up in size 1px at a time until it’s harder to distinguish from real content (at first transparent, then positioned off-screen, then overlays hiding it, then visible in a part of the page that doesn’t get as many views, dual-purposed with images/ads already on-screen, etc).
A better way is if Firefox just bundled an ad blocker and pushed ad blocking technology forward (eg more hooks to do expensive processing natively to save on power like Safari does). The challenge though is that something like 100% of their funding comes from an ad company.
Bloom filter wouldn't work. The bloom filter will tell you that a particular URL is "possibly" in the list. What do you do then? Reject it because it might be a tracking image?
Google pays Mozilla in order to avoid a monopoly lawsuit. Just because the money is good for Mozilla doesn't mean it creates any sort of incentive for them to follow along.
What we probably need is a client-side neural net within the browser to notice that certain high information-content IMG URLs (i.e. those with random-looking garbage in the URL) from certain domains tend to result in very low information-content imagery. And that's the signature of a likely tracker.
You can get most of that with uBlock Origin right now! "Block large media elements" and set the limit ridiculously low. You now have to click to view images. Control this via settings and filter lists.
uBlock does not know the size until the file is requested. At that point it is too late, you have accessed the tracking file. That feature in uBlock is meant for people trying to save bandwidth/CPU. uBlock is great though, I would never browse the web without it.
Edit: I forgot the grandparent mentioned them. uBlock Origin implements the parent's proposal, so I mentioned it.
Focusing on file size is a mistake because you can just make the tracking image display something decorative or whatever. I was referring to a solution for not loading third-party resources without manual approval, which would need to be more general than just blocking single pixels to be robust (although heuristics might be good enough for some cases).
I always wondered why mail providers don't load all images automatically the moment the mail is received and present a cached image to the users. Wouldn't that make tracking useless?
This helps in that you don't connect directly to the image so it doesn't leak your IP and other info that would be available from the connection.
Marketing emails still send unique URLs for each recipient so they can associate your email address with opening any images and links. Google's proxy doesn't remedy this.
As far as I know, Gmail doesn't load the images until the first time the user opens the message, so it unfortunately doesn't make the tracking as useless as you could hope.
Technically it loads all images through a Google proxy[0] which in theory prevents third parties from using images for tracking. The third parties only get a 'hit' when Google pulls the image into their cache, which is not when you open the mail.
Google can still track you, but if you really care about that then you're probably not using Gmail anyway.
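To make the proxy idea concrete (a sketch only; the /img-proxy route and the regex rewrite are invented, and this is not how Gmail actually implements it), a mail frontend can rewrite remote image URLs to pass through its own fetch-and-cache endpoint, so the sender only ever sees the proxy's requests rather than the reader's IP and user agent:

```javascript
// Rewrite <img src="https://tracker.example/pixel?id=123"> so the image is
// fetched through /img-proxy on our own servers. A production implementation
// would use a real HTML parser rather than a regex, and would cache the
// fetched bytes so repeat opens never touch the sender's server again.
function rewriteRemoteImages(emailHtml) {
  return emailHtml.replace(
    /<img\b([^>]*?)\bsrc="(https?:\/\/[^"]+)"/gi,
    (match, attrs, url) => `<img${attrs} src="/img-proxy?url=${encodeURIComponent(url)}"`
  );
}
```

As noted above, this hides the reader's connection details, but it doesn't hide the open event itself if the URL is unique per recipient and only fetched when the message is opened.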
Concrete-ish but still simple version: If you visit example.com, close the tab, and then load a page with an ad on it, both parties would love to show you an ad for whatever example.com is selling. If example.com has a tracking pixel for the ad domain, then this is trivial to make happen.
OK, I understand privacy concerns with actually private information like name/address/email/etc. But a tracking pixel for ads? Why is this a concern? Since when?
Would it be so bad if people actually worked their brains a bit and got smarter with online advertising (and lots of other stuff)?
This looks suspiciously like the situation with obesity. Instead of eating less to lose weight, and eating fresh to stay healthy, people just blame "the corporations" for all their troubles.
Seems to me like it's counterproductive. Just put up safety nets for people because they can't control themselves. What do you end up with? A bunch of impulsive idiots and a few organizations with way too much control over them.
This is reality as I see it. You can be offended if you want, but I'd suggest you learn not to be offended by some words on a screen. That of course requires responsibility, which is free but I can sell it to you for $99/month if you want.
No one is suggesting advertisers tracking you is bad because people "can't control themselves." People don't like being tracked because, over time, private information can be inferred from your viewing history. Things like your age, gender, relationship status, income, ethnicity, education, hobbies, health concerns, diet, voting preferences, family names, etc. can all be known about you with high certainty simply by having trackers on the sites you visit. Ironically, this is exactly the kind of data you understand being concerned about before making several bad faith assumptions about why people don't want to be tracked.
If this works, it will be a very welcome development. Firefox is still my preferred primary browser given who the competition is, but the number of normal, everyday sites I visit that don't work properly in Firefox has become irritating. It appears that quite a few of those problems are caused by the security/privacy blocking rather than a lack of other functionality in Firefox, and it's usually the blocking by Firefox itself rather than any relevant add-ons because disabling the latter doesn't solve the problem.
Sorry, I haven't thought to bookmark any of them. I'm just talking about sites I'd come across while browsing, perhaps following some interesting links from sites like HN. But on a noticeable number of occasions now, dev tools confirm there are script errors that seem to come from identifiers being undefined and the like, and if I literally disable every add-on I'm using, they are still undefined (but only in Firefox, not other browsers). The changes to promote privacy in recent versions of Firefox seem like the most likely explanation at that point, though to be fair I have no hard evidence of that either.
I have encountered a few issues with uBlock and the annoyances filter breaking the Chocolatey website (a portable software provider), where it leaves the site stuck behind an overlay. I've seen this in a few other places too.
I opened the Chocolatey website with scripts outright disabled and it works OK. The markup is not very bad, search even works, and they have noscript auto-hiding spinners (I didn't know such a thing could exist at all); I suppose it was designed with noscript compatibility in mind.
"a number of common scripts" - I wish they would have linked to where to find the technical details of which scripts they are emulating. Anyone know where this can be found?
I think the question was which of those trackers have a stand-in script (could be a subset of all blocked trackers) and what the stand-in script looks like.
It’s different, as Decentraleyes hosts a local copy of libraries like jQuery, while the new Firefox feature is about emulating tracking scripts with a minimal set of features to trick the site into thinking it is present.
Based on the comparison shown between using the third party scripts with tracking, and using the smart block stand-in, I wonder if this could be an edge against Chrome? While the Google team can push the envelope, pay for whatever is necessary, and add their own standards (such as AMP, though that may be going away IIRC), they're likely stuck waiting on trackers just like Firefox was (on non-AMP pages).
Yes: "Previously (left), the website tiny.cloud had poor loading performance in Private Browsing windows in Firefox because of an incompatibility with strong Tracking Protection. With SmartBlock (right), the website loads properly again, while you are still fully protected from trackers found on the page."
Why not add something to protect web security?
XSS protection ?
CSRF protection?
We could do those things in the browser and not in every website in existence…
One word: Compatibility. There are already protections against XSS and CSRF built in, and adding stricter rules would cause sites to break. Do you want to maintain a list of all sites that need cross-origin GET requests to function?
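For context, the opt-in mechanisms sites already have look roughly like this (a minimal sketch assuming an Express backend; the header and cookie values are just examples). Browsers can't simply force these everywhere without breaking legitimate cross-origin requests:

```javascript
const express = require('express');
const app = express();

app.use((req, res, next) => {
  // Opt-in XSS mitigation: only allow scripts and other resources from our own origin.
  res.set('Content-Security-Policy', "default-src 'self'");
  next();
});

app.get('/login', (req, res) => {
  // Opt-in CSRF mitigation: the session cookie is not sent on cross-site
  // requests, so a third-party form post can't ride on the user's session.
  res.cookie('session', 'opaque-session-id', {
    httpOnly: true,
    secure: true,
    sameSite: 'lax',
  });
  res.send('logged in');
});

app.listen(3000);
```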
SmartBlock can only kick in when tracking content is actively being blocked by tracking protection. If you'd like that to be on all the time, you can turn strict (or custom) tracking protection on in all windows, but of course the trade off is that you'll likely experience more site breakage, like you might in private browsing windows.
The new 'trim referrer by default' in Firefox 87 [1] was already enabled in private mode only, some months/weeks ago. So maybe they will make it default everywhere after some weeks? Maybe after working out any kinks?
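For sites that want that behavior regardless of the browser default (a one-liner sketch, again assuming an Express app; the policy value is the one Firefox 87 adopted as its default), it is just a response header:

```javascript
const express = require('express');
const app = express();

// Mirror the new Firefox default: cross-origin requests only see the origin,
// not the full path and query string of the referring page.
app.use((req, res, next) => {
  res.set('Referrer-Policy', 'strict-origin-when-cross-origin');
  next();
});

app.get('/', (req, res) => res.send('ok'));
app.listen(3000);
```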
The source is probably not worth reading, but then at least look at the comments that describe what it does.
Basically, it is a dummy implementation of often-used tracking libs, so that if a website wants to use one, Firefox serves its own stand-in instead, making websites both faster and more privacy-friendly.
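As a toy illustration of the shape such a stand-in can take (this is a guess at the idea, not Mozilla's actual shim code): define the global the page expects, swallow the arguments, and fire any callbacks immediately so page logic that waits on the tracker keeps working:

```javascript
// Hypothetical stand-in for analytics.js: the real library exposes a global
// ga() function; the shim accepts any call, sends nothing anywhere, and
// invokes hitCallback right away so code that gates a redirect on the "hit"
// being sent is never left hanging.
window.ga = function (...args) {
  for (const arg of args) {
    if (arg && typeof arg.hitCallback === 'function') {
      arg.hitCallback();
    }
  }
};
window.ga.q = [];        // the queue the official bootstrap snippet creates
window.ga.loaded = true; // signals to the page that the library is "ready"
```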