I am concerned about the approach, however; a simple blacklist of fingerprinting scripts may be insufficient, since non-blocked scripts can still access the data used to accomplish fingerprinting.
Personally, I would like to see more security around the data that is used for fingerprinting, such as user agent, screen size, window size, loaded plugins, and so on. If this type of information were either protected with permissions, or bogus values were provided to sites the user hasn't whitelisted, it would be far harder to fingerprint users, as there would be less identifiable information to go off of.
A less aggressive approach might be to have some kind of notification to the user if a website is accessing many API calls that are commonly associated with fingerprinting. Maybe a site that just wants to know window size is fine, since it might want to render something or select a certain layout, but if a site wants to know a wide variety of different information all at once, that would be a red flag that could be signaled to the user in some way.
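The "red flag" heuristic in this comment could be prototyped by counting distinct reads of commonly fingerprinted properties. A minimal sketch, where `watchFingerprintSurface`, the property list, and the threshold are all made up for illustration:

```javascript
// Sketch: flag a page once it touches many fingerprinting-associated
// properties. The property list and threshold are illustrative
// assumptions, not a real browser heuristic.
function watchFingerprintSurface(target, props, threshold, onSuspicious) {
  const touched = new Set();
  return new Proxy(target, {
    get(obj, key) {
      if (props.includes(key)) {
        touched.add(key);
        if (touched.size >= threshold) onSuspicious([...touched]);
      }
      return obj[key];
    },
  });
}

// Usage with a mock navigator-like object:
const mockNavigator = { userAgent: "X", language: "en", plugins: [], hardwareConcurrency: 4 };
let flagged = null;
const guarded = watchFingerprintSurface(
  mockNavigator,
  ["userAgent", "language", "plugins", "hardwareConcurrency"],
  3,
  (keys) => { flagged = keys; }
);
guarded.userAgent; // 1 distinct property read: fine
guarded.language;  // 2: still fine
guarded.plugins;   // 3: threshold reached, onSuspicious fires
```

A site that only reads window size would never hit the threshold, matching the comment's point that single benign readouts should pass silently.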
I think this is already available, just not enabled by default. In about:config one needs to set privacy.resistFingerprinting to true. (Be aware, however, that this setting causes problems with Google's captcha: the number of challenges you will need to solve increases drastically.)
No kidding. I'm talking about ~30-40 clicks (1 click per task in the captcha grid)
often, after a few difficult ones, I realize I'm stuck in the same 20 challenges, over and over, no matter whether I get them right or not. We do run all browsers in the office with fingerprint protection on and run non-exit Tor nodes in all offices. But those are hardly excuses.
The hellbans happen more often on Firefox for Android, but I guess that's what you can expect when you go against Goliath.
It's literally google censoring me from talking (and sometimes reading) random sites on the web
I'm willing to go through extensive captcha cycles if that's the cost of retaining some anonymity.
I installed uMatrix a while back to recover some anonymity, and it worked at first: my captcha load spiked significantly, which was a great indication that I'd succeeded, but it has dropped over time. I guess I'm gradually being fingerprinted again.
Google's captcha tests are my litmus test for whether what I'm doing is effective.
Like for $1 give me a certificate that I can use to say "I'm not a spammer" and I can anonymously buy as many certificates as I want.
And then if a certificate is used by a spammer it becomes invalid. Seems like it's expensive enough to be a barrier for existing spammers, but lets normal people pay $1 every year or two to not have to deal with captchas.
Certificates would be an even more accurate unique ID over what fingerprinting could provide
(Though I don't know a lot about it and would be interested to hear criticisms.)
To be clear, I'm not saying that the analytics industry hasn't been complicit in its own punishment, it's just come to a head in a way that I feel warrants more cooperative action than blacklists based on... what criteria?
From my perspective as an end user, why would it be desirable for me to facilitate the analytics companies' business model?
Sure, far-reaching blacklists are probably bad for analytics companies across the board, regardless of their intent. But as an end user, why should I care?
Right now there’s no way for any analytics company to convince you they don’t do anything surprising.
I want my customers to run successful businesses and to use the data we collect on their behalf to help them do that.
Why should you care? Mostly hypotheticals, but as said above, it shouldn’t matter to you.
If analytics works, we can drive down costs and improve how companies spend money on marketing and product development. This should mean more affordable and/or higher quality products and services.
As you say perhaps sites can get a kind of "entropy budget". If they ask for my screen size that's X bits of entropy. If they want to render things to a canvas and read back the result that's Y bits of entropy (Y >> X). Once sites reach a certain budget that users can set themselves, they get fake or invalid data. Worst case if I set the entropy budget too low is I get a captcha or an incorrect layout somewhere.
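The entropy budget idea can be sketched in a few lines; the bit costs and the budget below are invented numbers, and real entropy accounting would be far subtler:

```javascript
// Sketch of a per-site "entropy budget". Every readout is charged an
// assumed bit cost; once the budget is exhausted, the caller gets a
// fixed decoy value instead of the real one. The costs and the budget
// here are made-up numbers, not measured entropy.
function makeEntropyBudget(budgetBits) {
  let spent = 0;
  return function read(name, bits, realValue, decoyValue) {
    spent += bits;
    return spent <= budgetBits ? realValue : decoyValue;
  };
}

const read = makeEntropyBudget(10);
const screen = read("screenSize", 4, "1920x1080", "1366x768"); // within budget: real value
const tz = read("timezone", 4, "UTC+2", "UTC");                // still within budget
const canvas = read("canvasHash", 20, "real-hash", "decoy");   // over budget: decoy
```

The nice property, as the comment says, is that the failure mode of setting the budget too low is a captcha or a slightly wrong layout, not a broken browser.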
Available in Nightly under a hidden boolean preference `privacy.resistFingerprinting.letterboxing`.
Why the scare quotes? Because the purpose of recaptcha isn't to tell humans from bots, it's to punish users who do not wish to be tracked by giving them an endless stream of challenges to solve no matter if they keep getting them right or wrong. It is especially obvious when they intentionally delay the loading of subsequent images if you have too many privacy features enabled, because it does nothing to prevent bots from solving them. It's grouped into several tiers, depending on the amount of frustration they want to generate:
1. Invisible captcha - you have Chrome, you're logged into a Google account, your advertising ID has a profile full of useful data. You go in with no hassle.
2. 1 click - maybe you're on a new IP or a new device, but you're logged into a Google account and use Chrome. Click the checkbox and that's it.
3. Regular captcha - You're not logged in but you don't use any privacy enhancements, so through a combination of fingerprinting, cookies, and other tracking techniques you're uniquely identified anyway. You get 9 images, select 2 or 3 of them and you're good to go.
4. Annoying captcha - you're blocking third party cookies, you're not on Chrome, looks like you're not being a good cog in the machine. You get a captcha with 9 squares that load more images, or you have to "select squares containing X", and you get 2-5 of these in a row.
5. Infuriating captcha - you're blocking third party trackers, cookies, all other storage methods, you block or mitigate canvas fingerprinting, you're behind a VPN, your fingerprint is not recognized, there's no data in your profile. Google won't squeeze a cent out of you, so you don't get to use the internet. You're getting an endless stream of slowly loading squares, or 5-7 objects to recognize. Even if you do all of them correctly, it won't let you in. Maybe after 4-8 cycles, but that will still waste ~10 minutes per try. You're barred from any website that links to reCaptcha.
These days websites using it are for all purposes dead to me. I can't visit them and I won't waste my time clicking their images or selecting squares or whatever.
Fact is, Brave is, as I understand it, a system that intends to pay a user to see ads (via replacement) - although they may have pivoted (again?) away from that. It’s not an anti-tracking or anti-advertising effort.
I could go on forever with Mozilla's connections to organizations promoting and facilitating politically motivated censorship.
Because I think when people ask about Mozilla controversies they're thinking about situations in which Mozilla has "broken character" by e.g. risking users' privacy, not activism that is absolutely in line with Mozilla's stated goals (whether you personally agree with them and their interpretation thereof or not).
Heck, even Netscape Navigator started out as shareware. It was "personal use only" but most commercial users never bought a license. It was eventually defeated by Microsoft Internet Explorer, which was free for commercial use even before it shipped with the OS.
If there is any chance someone will attempt a paid browser again, it will most definitely be based on Chromium (or maybe Firefox) rather than written from scratch and no website will make any effort to test on it (just like barely anyone ever tested on Opera).
The developer tools are already invaluable, but there’s no reason they cannot be better.
While lots of people here already have uMatrix or other blockers running, blocking fingerprinting and cryptomining domains by default would be a big step!
(Disclosure: I work on ads at Google.)
But an advert that steals from you or harms you is neither of those things. Google Ads doesn't need those to be profitable; it would suit them if those went away.
Not if that targeting is done using data gathered about me without my consent -- as it almost universally is.
Targeting based on context (what sort of website the ad is on, for instance), is fine.
Google doesn’t actually have a problem serving contextual ads on their own properties, since they have plenty of context. The problem is with AdSense, since there advertisers need some sort of user profile; plus, in the EU, bidding exchanges are in jeopardy due to the GDPR.
Why? I didn't consent for that.
i (and many others) believe that surveillance is like this. the effort that it takes to do what you're describing does not scale, and cannot be used to implement dragnet surveillance and data collection (unless it's a police state and you have a lot of notetakers). lots of people (myself and many others) think dragnet surveillance (whether by private entities or governments) is a thing to be avoided (because it creates really bad power asymmetries, which i think are inherently a bad thing).
also, i don't think that large companies should be granted the same rights as individuals. just because a person can do a thing on their own doesn't mean that a large entity should be able to do something similar in spirit at thousands or millions of times the scale.
And in your case of the note taker, a better example would be somebody who frequently follows you and takes notes about what you wear. In many places, that could be grounds for a harassment claim. In other words, it’s not the act that matters to most people, but the frequency and scale at which the act takes place.
Stop thinking about data ownership. Ownership is irrelevant.
It's illegal to process any data about an identified or identifiable person unless you have a lawful basis to do so, and there are only a half dozen of those. "Because I own the data" is not one of them.
I think the fundamental problem here is that people in the EU will choose privacy over the ability of companies to make money. It's a different outlook on life. When it's my interests versus the interests of business, I choose me.
"An easy thought experiment demonstrates this. Imagine that you hired a private detective to eavesdrop on a subject. That detective would plant a bug in that subject's home, office, and car. He would eavesdrop on his computer. He would listen in on that subject's conversations, both face to face and remotely, and you would get a report on what was said in those conversations. Now imagine that you asked that same private detective to put a subject under constant surveillance. You would get a different report, one that included things like where he went, what he did, who he spoke to -- and for how long -- who he wrote to, what he read, and what he purchased. This is all metadata, data we know the NSA is collecting. So when the president says that it's only metadata, what you should really hear is that we're all under constant and ubiquitous surveillance."
My point is that surveillance is not illegal and does not require any consent to accrue information through public observation.
I disagree completely.
We might not like it but it is legal and they own their observations.
> Whilst there is no strict legal definition of 'stalking', section 2A (3) of the PHA 1997 sets out examples of acts or omissions which, in particular circumstances, are ones associated with stalking. For example, following a person, watching or spying on them or forcing contact with the victim through any means, including social media.
Definition of stalking
Stalking is not legally defined but section 2A (3) of the PHA 1997 lists a number of examples of behaviours associated with stalking. The list is not an exhaustive one but gives an indication of the types of behaviour that may be displayed in a stalking offence. The listed behaviours are:
(a) following a person,
(b) contacting, or attempting to contact, a person by any means,
(c) publishing any statement or other material relating or purporting to relate to a person, or purporting to originate from a person,
(d) monitoring the use by a person of the internet, email or any other form of electronic communication,
(e) loitering in any place (whether public or private),
(f) interfering with any property in the possession of a person,
(g) watching or spying on a person.
I mean, that's pretty clear.
First, most of this sort of spying involves using my own equipment as a weapon against me -- actively subverting my defenses in order to do it. This is, in my view, not much different from them breaking into my home and installing surveillance equipment.
Second, the data gathered about me (even if it doesn't involve subverting my own equipment) is not kept in isolation. It is combined with a lot of other data about me and then mined for further insights. Every little data gathering act may be insignificant in isolation, but the end result is a degree of surveillance that is deeply immoral if done without my consent.
For example, suppose depressed people are more likely to buy expensive impulse item X. Person A is depressed. Let's show them ads for item X!
That would be a nicely profitable strategy that could emerge organically out of a sophisticated ML ad targeting model, something like... AdWords.
Targeted advertising is not designed to serve the viewer; it's designed to serve the advertiser. So the advertisements you get are even sleazier than non-targeted advertising, because they have, by definition, more information about the viewer. Instead of generically exploiting people's self-esteem, targeted advertising exploits people's self-esteem armed with much more information about each user.
Targeting and tracking is a plague and should be discouraged and blocked to oblivion.
On average yes. Just as there is a gradient in non-targeted advertising of 'basically benign' to 'scum of the earth', I think there is a gradient in targeted advertising too.
On the basically benign end of the gradient you have the "You recently bought book X from author Y, perhaps you'd be interested in book Z from author Y." (I don't like that stuff, and I still block it, but it doesn't quite get me incensed, if you know what I mean.) But the potential for harm from targeted ads can be truly extreme.
Advertising doesn't help the advertiser unless it helps you. Showing you an ad for something you don't want, can't use, and would never buy, benefits nobody.
If the ad results in the viewer getting an eating disorder, for example, that's fine for the advertiser if it also results in a sale.
Often it doesn't even require a sale. A lot of Facebook & Google advertising is for bullshit sites which push increasingly sketchy content backed by even sketchier advertising. Sometimes the goal isn't even profit, the Russians paid for advertising to influence politics.
If I am a potential buyer of, say, a book... and my interests include AI, multi-agent systems, and operating systems, then an ad for Barnes & Noble offering the new title OS Development for AI and Multi-Agent Systems is probably going to be mutually beneficial, because it will help me find a book I would want, and it helps B&N sell said book. OTOH, an ad for the new title Necrophilia And Cemetery Porn Of The Deep South is not beneficial to either party (if it's displayed to me) because it's not something I'd ever be remotely interested in. Frankly, I'd much prefer the (accurately) targeted ad.
If it resulted in a sale, then that means by definition it helped me find a product or service I wanted. If I made a bad decision in making that purchase, that's an orthogonal issue.
I, personally, have no issue with these platforms collecting my data and making money off it in exchange for the services of theirs that I use. But when they do the same even when I'm not using their services, or while I'm explicitly expressing my disagreement, I'm not cool with that.
That's incredibly naive. You've failed to consider the wide class of products which are tempting but harmful.
This is nonsense. History is littered with cases of advertising abuse and misuse. Everything from literal snake oil salesmen to modern day shysters profiting from selling conspiracy theories and anti-tax bullshit is enabled and propagated by advertising.
Also, FYI, I'm not the one who down-voted you. In fact, have an upvote to counter-balance that.
The implicit message here is buy buy buy. "There's cool hats shaped like golf balls. Everyone's getting one! I need one too!"
Do I really need a hat shaped like a golf ball? Do I need any of the crap our materialistic society says I need?
Ads like this are trying to shape my expectations about myself, the meaning and purpose of my life, and what I need to feel fulfilled; and all for someone else's benefit, not my own. I have made it a personal goal to reject consumerism, and instead live a simple, sustainable, and efficient life. Rejecting ads (electronically blocking where possible, mentally blocking everywhere) is part of how I'm trying to achieve that goal.
Any company trying to sell me something that I didn't seek out myself is my enemy.
Edit: I believe that analyzing the biases and incentives underlying the choices and abstractions presented to us is an important part of what it means to be an intelligent human being. Ads are not reliable sources of information, as the company is incentivized not to inform you, but to sell to you. There are many better sources of good ideas than ads.
Pessimistic answer: technical countermeasures don't prevent these invasions of privacy but they make them significantly harder, putting companies with less technical skills at a disadvantage. Anti-fingerprinting protections hurt the bottom feeders while larger companies like Google can likely work around them.
Pragmatic answer: this wasn't really about the fingerprinting but the crypto miners. Ad networks don't like crypto miners either but blocking them is difficult so browser vendors are really just solving the ad networks' problem for them.
* Cookies: ads ask the browser to store something, later they can ask what the stored value is.
* Fingerprinting: ads collect enough information about the browser that they can distinguish it from other users' browsers.
While cookies aren't ideal, I think an ad industry that uses them is a lot better than one that uses fingerprinting. The key differences are user control and visibility. If you clear your browsing history, or close an incognito tab, your cookies are gone but your fingerprint is unchanged. Similarly, you can see who's setting cookies but you have no idea who is trying to fingerprint you.
Like, hey we love and embrace any technology that fights our team's fingerprinting efforts...
I'm commenting as myself, and not for the company. But my perception is likely colored by working for an ads company and it seems fair to let people know that.
> Like, hey we love and embrace any technology that fights our team's fingerprinting efforts...
While I don't know for certain, I don't believe Google Ads uses fingerprinting. Firefox/Disconnect doesn't seem to think so either, since Google ad domains are on the "Google" list but not the "Fingerprinting" list: https://github.com/mozilla-services/shavar-prod-lists/blob/7...
Anecdotally my phone is about 4 years old maybe and the battery still does fine browsing with JS on in Firefox mobile. That is with ublock though.
Is that relevant?
Don’t worry, they’ll tell you.
[Allow this session] [Allow 5 seconds] [Deny]
I'd be particularly interested in the second option, since it would allow us to use sites that depend on JS for content while they roll back the craziness that is depending on scripting to show static content.
While I'm at it:
I want badges!
Obviously I'm exaggerating the implementations here but I'm serious about the idea.
(And yes, I earn good money on frontend work, I just think it often makes solutions worse.)
I have NoScript on by default, and these days I need to be convinced there's a REALLY good reason to temporarily whitelist a site.
Sure, a lot of the web is now either a blank page (or "you need to enable js to run this app") but on the positive side, I'm a lot more productive as I just close those sites and move on!
You can manually pause and resume JS by opening the debugger and hitting pause. The code exists; it just needs to be exposed in the UI.
Would it be that after 5s the user couldn't interact with elements on the page like that (e.g. if they open an image, and wait a few seconds, they then couldn't close it because the 'x' onclick handler wouldn't run), or would each handler run by a user action (like an onclick) have 5 seconds to run?
The simplest way to do it would just be to pause js 5 (or maybe 15) seconds after page load, and have a button beside the url to resume/repause js.
The devtools keep working with js paused though, so it should be possible to do something like have a onclick handler that resumes and starts a timer to repause the js.
Ideally I think I'd like clicks to resume JS for a few seconds, unless they land on a link (with an href leading to another page). I'm not certain that would be technically easy, but it seems likely it would be.
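The click-to-resume idea can be prototyped at the scheduling layer. This toy sketch only gates callbacks routed through its own scheduler; an actual implementation would have to hook the browser's event loop, timers, and requestAnimationFrame:

```javascript
// Toy prototype of pause/resume: callbacks submitted while paused are
// queued and flushed on resume. This is a sketch of the idea, not a
// real browser-level JS pause.
function makePausableScheduler() {
  let paused = false;
  const queue = [];
  return {
    run(fn) { paused ? queue.push(fn) : fn(); }, // queue while paused
    pause() { paused = true; },
    resume() { paused = false; while (queue.length) queue.shift()(); },
  };
}

const sched = makePausableScheduler();
let ticks = 0;
sched.run(() => ticks++); // runs immediately
sched.pause();
sched.run(() => ticks++); // held in the queue
sched.resume();           // flushes the queue, so both callbacks have run
```

A click handler could then call resume(), start a short timer, and call pause() again, matching the "resume for a few seconds" behavior described above.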
To be real, though, somewhere close to 0% (rounded to the third decimal place) of users would agree to grossly inefficient cryptomining in the browser. As a web funding model it is terrible and is almost always akin to malware. It certainly costs the user much more in electricity costs than it will ever benefit web publishers.
Mozilla? Optional protection? Don't trigger my memories.
They also made it optional to block unsigned extensions, which you could turn off if you wanted to tweak one to fix a bug because it wasn't being maintained fast enough.
Like, if you believed in the whole Open Source/tinkering philosophy, or something, which Mozilla may or may not care about.
Then, they started disallowing it in 2016.
And they turned off key remapping too.
For one thing it's probably not a good idea on battery-powered devices, so it's only useful for monetizing desktop browsing. It also means that the money you make out of it depends on the average power your "customer" has available to mine.
Beyond that, since mining is a zero-sum game, the more people opt for this model, the less money they individually make. Maybe today you make on average 0.001 cents per minute per user, and a year from now you make a tenth of that. You have absolutely zero control over it, since it's merely a function of the total hashrate and the cryptocurrency's value.
I have a hard time imagining how this could become mainstream. Tipping using cryptocurrency microtransaction seems more promising but even that is far from a solved problem. I'd rather directly send $.002 to the website rather than waste $.01 of electricity for the website to make $.001 out of it.
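To put the economics in perspective, here is a back-of-the-envelope calculation built on the assumed 0.001-cents-per-user-minute figure from the comment above (the traffic numbers are equally hypothetical):

```javascript
// Back-of-the-envelope mining revenue. All numbers are illustrative
// assumptions: the yield figure comes from the comment above, and the
// traffic figures are invented to show the scale.
const centsPerUserMinute = 0.001; // assumed mining yield today
const dailyVisitors = 10000;      // hypothetical site traffic
const minutesPerVisit = 5;        // hypothetical time on site

const dailyCents = centsPerUserMinute * dailyVisitors * minutesPerVisit;
const dailyDollars = dailyCents / 100;          // roughly half a dollar per day
const afterHashrateGrowth = dailyDollars / 10;  // "a tenth of that" a year later
```

Even a mid-sized site under these assumptions earns well under a dollar a day, which is why direct micropayments look more promising than burning ten times that in the user's electricity.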
But how feasible would it be to limit the amount of info retrievable from the JS layer, rather than relying on a blacklist of domains serving fingerprinters?
For example, this discussion of a new API for gamepads immediately turned to a discussion of its fingerprinting risks and how they can be mitigated: https://groups.google.com/d/msg/mozilla.dev.platform/75GrJSP...
And I welcome them, it would be awesome to program an arduino from a web based dev env.
Unfortunately such privacy measures come with a bunch of inconveniences. For both, reCAPTCHA will become more obstructionist. Your window won't start maximised any more. Zoom levels will be forgotten when you open a link in a new tab. And if you use Tor Browser, by design there's also no saving passwords, no saving cookies, no saving tabs between sessions, and no browser/address-bar history.
Maybe it would be better to find the worst offending JS APIs and demand a user consent step similar to webcam or notifications in order for the scripts to run at all.
Buying a new domain to bypass the list is pretty easy but adding one line in the blocklist is even easier (and easily crowdsourced).
Because Firefox is open-source, everything can be gamed. If they went for some more "intelligent" method, the kind that instinctively appeals to people like you and I and anyone posting on HN, the fingerprinters could see exactly what they were being tested on, and it would be faster to iterate their counter-measures (keep this particular activity just below threshold X on metric Y) than it would be to make new Firefox releases. And the counter-measures to these more smart kind of measures happen _in secret_, whereas counter-measures to the "bash the problem to death with simple rules" approach are public, are (if their fingerprinting is to have any point) widely distributed, and thus much more immediately picked up and remedied by the many wonderful people who work on the lists used by privacy tools like ad blockers.
While we all like a cool and innovative solution, sometimes bashing the problem to death with dumb rules really is the best approach :)
(edited for a clarification)
On the other hand, your fingerprinting/mining JS has to be served by a website that people willingly browse. That's a much higher barrier to entry, and it means you can't just change your server's domain every hour unless you manage to convince your partner websites to update their code as frequently (which in turn might get them blacklisted instead).
> In collaboration with Disconnect, we have compiled lists of domains that serve fingerprinting and cryptomining scripts. Now in the latest Firefox Nightly and Beta versions, we give users the option to block both kinds of scripts
I would prefer to see Firefox giving more power to extensions. For example, it is still impossible to make an extension that makes a typed-in URL use https by default, because an extension cannot know whether a network request stems from the user typing a URL, from a bookmark, or from one of the many other ways a browser can be triggered to make a request. So typing URLs in Firefox remains dangerous, because the browser will load them over http by default.
Firefox does a better job at vetting extensions, but the reality is extensions have incredibly deep access to sensitive data, and they bypass every other security measure on your PC. HTTPS? Pointless if you've got a list of extensions installed on your browser.
The EFF's Privacy Badger has been my sole extension for a while, but as Firefox Tracking Protection has expanded, I've found Privacy Badger catching less and less, since Tracking Protection blocks them first. I will probably retire my use of Privacy Badger pretty soon, because it's just becoming superfluous.
If the review process is still insecure (that is how I understand your reply), I would prefer them to put their energy into this: analyzing popular extensions in depth (and giving them some 'analyzed in depth' badge) so you do not have to trust a third party.
>Add-ons built on the WebExtensions API will now be automatically reviewed. This means we will publish add-ons shortly after uploading. Human reviewers will look at these pre-approved add-ons, prioritized on various risk factors that are calculated from the add-on’s codebase and other metadata.
Power users like you and I can disable this if we like, using about:config.
That's definitely not easy but it beats blacklists which are trivial to work around.
One day I found that navigator.getGamepads() ratted out my gamepad in Chrome while using private mode. I tweeted at Google; they didn't answer. Who knows what else is exposed.
I didn't know Firefox had the privacy.resistFingerprinting.reduceTimerPrecision.jitter option; that's cool, but what about requestAnimationFrame()? Games wouldn't work without it. Not to mention spawning workers and passing values between them; delays while using things like shaders and gpu.js; decoding various formats like audio and measuring time, etc. Anyone tried to block videos on news sites? They are unstoppable; they still play even with everything set to red in uBlock Origin.
I think Mozilla could make a contest for breaking their fingerprint resistance, before they are ready to merge their privacy features from Nightly to master branch.
All browsers have to do is share a single advertiser ID and have it reset by the user whenever they want. No more cookies, pixel syncs, or fingerprinting and all the related countermeasures.
This is the exact mechanism used by mobile apps right now so it's already well-tested and proven to work.
Unlike what people seem to think, adtech has been designed from the start for anonymity. The persistent ID is needed primarily for ad frequency and conversion tracking. Eventually identity is revealed when someone fills out a form or buys something but that's not necessary at the top of the funnel.
This is good for advertisers.
If a user-controlled resettable ID prevents advertisers from doing this, what incentive do they have to not use their existing methods (in addition to the advertiser-ID)? Further, why would they not use tracking to say "Well, they reset their ID, but I know they're the same because of this other data, so I'm gonna link it back up behind the scenes."
I'm much happier for a site to mine on their tab while I'm watching a video than to show me 2 minutes of advertisements every 10 minutes. On mobile in particular, where video ads end up eating a large chunk of my data costs.
Of course people could (and do) build malicious mining scripts that try to use way too many resources, just like people could (and do) make malicious ads that spam you with INCREASE YOUR DICK SIZE BY 20 INCHES IN 5 MINUTES popups, but that's not an inherent problem with the model itself.
However, a large portion of devices on the internet run on batteries, and don't have huge quantities of reserve power.
Back to things such as desktop computers, though: am I supposed to close my browser before doing anything CPU intensive?
Someday, will I need to close StackOverflow to avoid negatively impacting the time it takes to compile my code? That's a trade-off I'm not willing to make.
I think the battery concern is valid enough and it's something that I didn't really consider since it doesn't affect me personally much. I still think the tradeoff of not having ads is worth it though. Perhaps let users choose whether they want ads or mining?
My portable devices are a ThinkPad with a slice battery that lasts me 9 hours of continuous normal use, and a phone with a 20,000 mAh battery bank, so I often forget battery life is a problem for some people.
And it's unclear such resources are being 'stolen' if it's stated in the site ToS.
Cryptocurrency mining is a lot less deleterious than ads. Mining doesn't need to track your behavior, it doesn't generate misleading native content, and it doesn't distract you from what you're trying to do.
Sign me up!
disclaimer: I was one of the founders of Tidbit, https://www.eff.org/cases/rubin-v-new-jersey-tidbit, the first(?) crypto mining ad replacer.
Unless I, as a user, explicitly consented to crypto mining, no such thing should be allowed to take place. Same thing goes for auto playing videos.
Everything went fine until I noticed WhatsApp Web becomes unusable, because it does not generate the initial QR code for establishing the session (to be fair, it flickers, which seems worse, as it smells of an active countermeasure on WhatsApp/Facebook's part).
While I have not yet had the time to dig deep into the specific technical reason WhatsApp may have for exhibiting such maddening behavior, I am inclined to think that this is more a policy choice.
If so, it's troublesome. We as users have collectively arrived at the point of willingly giving up the keys to our online communication to a few megacompanies. It's their infrastructure and their product, so they are in a position to steer it in whatever direction they want.
I see this as something that will increasingly become a political problem. As a tech-versed person, I feel responsible for not having done enough about it.
It's unfortunate that browsers are privacy-insane by default. Luckily, with a bit of effort, most browsers allow you to mitigate this with plugins (e.g. User-Agent switcher, Cookie/Referrer controller, and JS/Adblocker). Pi-Hole can help too.
Mozilla should be commended for trying to improve the situation.
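To illustrate why even spoofing a single attribute (e.g. with a User-Agent switcher) blunts fingerprinting, here's a minimal Python sketch of the general idea behind these scripts. The attribute values and the `fingerprint` helper are made up for illustration; real scripts read these values (and many more) from the navigator, screen, and canvas APIs:

```python
import hashlib

# Hypothetical attribute set a fingerprinting script might collect.
attributes = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64; rv:65.0) Gecko/20100101 Firefox/65.0",
    "screen": "1920x1080x24",
    "timezone": "Europe/Berlin",
    "fonts": "Arial,DejaVu Sans,Liberation Mono",
}

def fingerprint(attrs: dict) -> str:
    # Order-independent concatenation, then hash: each attribute alone is
    # shared by millions of users, but the combination is often near-unique.
    blob = "|".join(f"{k}={v}" for k, v in sorted(attrs.items()))
    return hashlib.sha256(blob.encode()).hexdigest()[:16]

# Spoofing any single value changes the whole fingerprint.
spoofed = {**attributes, "user_agent": "Mozilla/5.0 (generic)"}
assert fingerprint(spoofed) != fingerprint(attributes)
```

This is also why returning bogus values to non-whitelisted sites works: the site still gets an answer, just not a stable, identifying one.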
2. Chrome's days are numbered: https://news.ycombinator.com/item?id=18973477
I've been using add-ons that protect from canvas fingerprinting, but they're super laggy and slow Firefox down.
Default settings can move the industry in a way that opt-in things like uMatrix generally don't.
(Disclosure: I work on ads at Google)
My biggest disappointment is that this doesn't do anything new. My secondary disappointment is that this will make my life a little harder when I can't get a website to work. I hope you're right that this means I eventually won't need to install a plugin for uMatrix functionality.
I think blocking mining scripts is a step backwards, hindering the adoption of something that could finally be an unobtrusive and ethical replacement for the failing advertisement model.
If the content on a website is just a vehicle for delivering advertisements, I would consider such a business model to be fundamentally flawed.
Swapping "delivering advertisements" with "hijacking my processor cycles to mine cryptocurrencies" doesn't exactly offer anything that would convince me to change my mind.
I'm more than happy to pay for quality content, but I'd prefer companies to be forthcoming about the cost involved in providing it, rather than turning me or my data into a product that can be sold to the highest bidder.
I would also love to live in a world where I could just deposit some reasonable amount of money every month and have it fairly distributed to pay for all the things I love, but I can see that that's not viable in the real world. Having websites silently use my unused computer resources is a perfectly viable alternative to me in a way that forcing me to stare at things I don't care about is not.
How can I be sure I'm not being taken advantage of?
Of course companies will always be incentivized to squeeze as much value out of you as possible, but they'll be simultaneously incentivized not to screw people over too much. Just like how abusive ads have led to widespread use of adblockers, abusive use of mining scripts will just lead to people blocking them (be it on a case-by-case basis or universally). But while I think ads are always going to bother me no matter what they are or how many of them there are, there's a level of CPU utilization that I wouldn't mind or even notice at all.
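For what it's worth, capping mining at a tolerable CPU level is straightforward; Coinhive-style in-page miners exposed exactly such a throttle knob. Here's a toy Python sketch of the idea (the function, names, and numbers are mine for illustration, not any real miner's API):

```python
import hashlib
import time

def throttled_search(data: bytes, target_prefix: str, duty_cycle: float = 0.2,
                     slice_s: float = 0.05, budget_s: float = 0.5):
    """Toy proof-of-work loop that sleeps to cap CPU usage.

    duty_cycle=0.2 means roughly 20% of one core: hash for a fraction of
    each time slice, then sleep for the rest, yielding the CPU back to
    whatever the user is actually doing.
    """
    nonce = 0
    deadline = time.monotonic() + budget_s
    while time.monotonic() < deadline:
        work_until = time.monotonic() + slice_s * duty_cycle
        while time.monotonic() < work_until:
            digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).hexdigest()
            if digest.startswith(target_prefix):
                return nonce, digest
            nonce += 1
        time.sleep(slice_s * (1 - duty_cycle))  # idle portion of the slice
    return None  # budget exhausted without finding a matching hash

result = throttled_search(b"page-visit", "0")
```

A real miner would obviously run a different hash function against a pool, but the throttling mechanism is the same: the duty cycle, not the algorithm, determines whether the user notices.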
They'll still show us advertisements, though they'll probably optimize them to use fewer CPU cycles since those would directly affect their bottom line.
I sincerely doubt the vast majority of the general population are using ad-blocking software today; at least, not to the extent that companies would dial back their advertisements in an attempt to prevent the size of this demographic from increasing further.
"You aren't a subscriber. Do you want to see the article while running a cpu intensive script, or pay $0.50, or pay $10 for a yearly subscription?"
I'm not too optimistic this would work however. Bandwidth issues, and the short stay would make it very tricky to do something efficiently.
That's it, everyone. Shut down Mozilla and have all the users switch over to Brave. Brendan Eich's got everything covered.
Yet. Give it one or two scandals, maybe involving a heavyweight like Reddit, and users will be aware of what fingerprinting is and why it's not in their interest to have their digital fingerprints taken, analyzed and stored every time they enter the internet equivalent of a grocery store. Give them analogies they can understand and they will feel like they're in a dystopian surveillance state movie, because that's what we're in on the internet.
Imho they would care if they knew. They don't know so they don't even know why & what they should learn about it. That's why it needs scandals and good analogies to tell them about it. They won't read "weird" tech blogs, they need the evening news to tell them about it, and to explain it in simple terms. Kind of what Al Gore did back then with global warming: "the planet has a fever". Everybody can understand that. It's not technically correct, but it gets the point across. People don't know what other products are better than Apple's, so they rely on social proof: everybody is buying Apple, so it must be good, so they buy Apple.
People are starting to shift away from Facebook because there's a narrative that "Russia stole the election by using Facebook". It's wrong, but it gets the point across that Facebook's algorithms aren't transparent, FB has too much data on its users, and whoever controls FB wields a powerful weapon. That got people's attention, and that's what you need for any technical issue that the general public should be informed about.
So? They don't understand password hashing, either. Doesn't mean it shouldn't be implemented.