It is very useful to know where your traffic is coming from, but that’s usually viewed at a higher level than the querystring params being shown. In some cases, this may restrict you from knowing which article the person was reading on the given site before clicking through to yours, but if that’s so important, there are other ways to instrument source tracking.
If you want to avoid being tracked, you need a random/different IP and a browser fingerprint that blends in with the crowd.
As generations that understand this tech implicitly come to power, things will change. To someone that does everything digitally the meaning of these words will be quite different than the currently accepted interpretation:
“The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no warrants shall issue, but upon probable cause, supported by oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.”
From May on, sites have to offer a yes/no choice; they may only track you if you choose yes, and if you choose no they may neither ask again nor refuse you access.
Which is what was originally intended all this time, and which all good websites offered anyway.
Would something like that work?
(Edit: Ah, apparently this sort of thing is already allowed, according to comments I didn't read before writing this. Sensible legislators.)
Alternatively, the legal definition of tracking cookie can differ from the technical definition of cookie. It wouldn't be the first time...!
Cookie to say "has clicked no"? okay.
Login cookie? okay.
Cookie to track the user? not okay.
Local storage to track the user? not okay.
Tracking the user based on magic? not okay.
How stupid is that? Privacy at the expense of security?
JS is also used for things that are doable with CSS or even HTML only. This isn't the first time.
My tin-foil hat persona suspects it's yet another way to force me to disable my various script and ad blockers in the guise of protecting users' privacy. But my more realistic side assumes this is just laziness.
Or just use NoScript.
Anyone know if this is still the case? I looked at a couple of pages and only saw tiqcdn.com being blocked by uBlock.
Since that was blocked, you never even saw the other tags that would have attempted to run.
"Can one disable the HTTP referrer for only when one is going to a third-party domain?" https://support.mozilla.org/en-US/questions/1130505
A stock Firefox can tune referer behavior through about:config, including disabling it completely or spoofing it as the target URL.
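For reference, the prefs involved (names as they appear elsewhere in this thread; the glosses on the values are my understanding and may vary across Firefox versions):

```
network.http.sendRefererHeader        0      never send a Referer at all
network.http.referer.spoofSource      true   send the target URL itself as the Referer
network.http.referer.XOriginPolicy    2      send a Referer only when full hostnames match
network.http.referer.trimmingPolicy   2      send only scheme://host, no path or query
```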
It's increasingly becoming impossible to have any actual privacy, so in my opinion the best option is to make the data collected worthless.
We did a user research study measuring website breakage under various privacy protections:
tl;dr - strict-origin-when-cross-origin was one of the protections with the lowest amount of breakage. Entering Private Browsing is a clear, strong signal that the user wants more privacy, so we started by implementing this protection in Private Browsing.
However, note that some advertisers demand that AdTech vendors must not serve their ads on certain kinds of pages. (e.g., https://support.google.com/adsense/answer/1348688?hl=en&topi...) Many of those agreements require full referrers to be able to audit the ad inventory.
So there are some concerns and trade-offs to make in this space.
If Mozilla's users are saying “yes, this is good; please do more”, Mozilla can use that as a defence against any resistance from advertisers.
However, if “you” is a company that is in some other business, but collects analytics for the purposes of optimizing its product and figuring out what actually works for its users, then you don’t even want to run the risk of ingesting something private. Especially with exposure to different jurisdictions.
Think of it as the difference between a double-opt-in email list where you are very sure people want to receive the communications and an unsolicited spam list. If the user volunteers this data and it has some relevant business purpose, that’s great. However, if the user doesn’t know I have this info and wouldn’t want me to have it, then acting on that data could create a lot of negative emotions that as a company I wouldn’t want.
Private browsing will (1) start out with no sessions (none carried over from normal browsing), (2) provide automatic tracker protection, and (3) clear sessions (cookies and history) on exit, leaving no trace. Private browsing is great for a number of use cases, such as debugging/testing a web app with a fresh session, visiting NSFW sites, or hiding the fact that you watch cat videos on your puppy-lover friend's computer.
So it makes sense to hide the referrer automatically in private sessions. As for normal browsing, you can disable the referrer yourself.
I would like to see more privacy and security options shown in the preferences UI. about:config is "okay", but it feels like the Windows registry: not good for user experience, even by developer standards, when one just wants to toggle privacy and security settings on or off.
Information leaks. You click a link from your email to a news article. The URL for the news article has your email in it. Then you click an ad on the news article. The ad just got your email address.
Worse still - instead of an email address, it's a token that auto signs you in to your account with the news site.
Sites are pretty bad at sanitizing their outgoing referers.
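That leak can be sketched in a few lines of Python (the URL and token here are invented for illustration). Under the browser's old default behavior, the full page URL, query string and all, is exactly what a third-party ad request receives as its Referer:

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical link a news site might put in an email (values invented):
article_url = "https://news.example/article/42?email=alice%40example.com&auth=tok_12345"

# Under the old default policy, a third-party ad loaded by that page
# receives the complete page URL as its Referer header:
referer_seen_by_ad = article_url

leaked = parse_qs(urlparse(referer_seen_by_ad).query)
print(leaked["email"][0])  # the ad network now has the reader's address
print(leaked["auth"][0])   # ...and a token that signs in to their account
```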
Hopefully, even though groovecoder doesn't mention the possibility, if this works well we'll see it roll into regular browsing too, or at least get some UI exposure, in the same way that tracking protection moved from being private-browsing-only to having an option to enable it in regular browsing.
Actually, now I think of it, I wonder if the two could sit behind the same preference in the end?
We could, of course, build the same functionality into our implementation without relying on having the path/query in the referer.
Personal data in 3rd party referral is only one of the many side effects. If personal data is available in the URL as query string, chances (very high chances) are that the same data is perfectly visible in clear in the web server logs, and from there only God knows where it's spread, including all the 3rd party services used on the backend side.
Yeah, but it may surprise you to know that your opinion isn't shared by the people who own the computers in question, or the private data you're digging in.
Your personal strong belief is, in this case, utterly irrelevant.
They don't seem to have said so clearly enough. I think some people just read the first half of the sentence and are ready to be mad!
It only does it in private mode. I experimented with the referrer options mentioned in the article with mixed success. Not sending the referer header breaks some sites and often in a non-obvious way.
EDIT: referrer header -> referer header
After all the referrer is useful for the site owner, not for the browser user.
To me this is a super good illustration of why we probably should get rid of the 'Referer' (sic) header altogether.
Apparently only in Private Mode though which makes my use case probably less common.
Thing is, do you remember the uproar about addons being broken in Firefox 57? It was for good (security) reasons but many non-technical users don't understand or respect the pros and cons. All they see is their workflow being broken.
Only Microsoft seems to really understand backwards compatibility.
They changed so much in the code base, breaking compatibility with the old system, that they would pretty much have had to do a hard fork and include both Firefox versions in the download to provide this, pretty much doubling maintenance cost as well.
On the flip side, they now have an actual extension API, abstracted away from that code base, which makes it much less likely that such a breakage will be necessary in the future; it also prevents the frequent smaller breakages that were commonplace with every new Firefox release up until 57.
You mean some websites that fail to employ the best practice of “don’t trust external input”. Yes that applies to easily-spoofed headers.
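To illustrate why servers can't trust it: any HTTP client can set the Referer to whatever it likes. A minimal Python sketch (the URLs are placeholders):

```python
import urllib.request

# The Referer header is entirely under the client's control; nothing stops
# a request from "claiming" to come from a page it never visited.
req = urllib.request.Request(
    "https://example.com/protected-image.png",
    headers={"Referer": "https://example.com/gallery.html"},  # spoofed
)
print(req.get_header("Referer"))  # exactly what we set, no questions asked
```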
Yes, I think it should be in the HN title otherwise it feels like clickbait.
Maybe only allow a referrer from the same site policy?
The majority of hotlink checking is to disable cross-domain hotlinking, which is not affected by this change.
This change could only negatively affect hotlinking that is permitted.
A site that wishes to hotlink to another may need to set a Referrer-Policy so that the referer is passed through for images at all times.
Some image boards do it to prevent casual hot-linking, as my sibling poster notes.
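A hotlink check that stays compatible with referrer-stripping browsers might look like this sketch (the function name and host are invented): block only when a referer is present and clearly foreign, and let empty referers through.

```python
from urllib.parse import urlparse

def allow_hotlink(referer, own_host="example.com"):
    """Permissive hotlink check for an image server (hypothetical helper).

    Browsers increasingly strip or trim the Referer, so a missing value
    must be allowed, or legitimate visitors get blocked too.
    """
    if not referer:
        return True  # stripped or disabled referer: benefit of the doubt
    host = urlparse(referer).hostname or ""
    return host == own_host or host.endswith("." + own_host)

print(allow_hotlink(None))                        # no referer sent: allowed
print(allow_hotlink("https://example.com/page"))  # same site: allowed
print(allow_hotlink("https://evil.test/steal"))   # foreign site: blocked
```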
I have referer completely disabled. It's nice.
Happy to see FF do the right thing here, and I'm really curious whether Google will follow suit. Microsoft and Apple have an opportunity here to show they care more about end-user privacy than Google does.
Monitoring conversion rates can be used to find out whether people are actually able to use your web service. The goal of a “conversion” doesn't have to be a sale.
But I agree that if you're going to do this sort of tracking, it definitely needs to be private.
I hope there's a court case soon where the court rules that sending a whole load of business-sensitive data to Google, Microsoft and Apple actually does breach a non-disclosure agreement.
Coming to a courtroom somewhere in Europe in 2019.
// TODO: https://github.com/pyllyukko/user.js/issues/94, commented-out XOriginPolicy/XOriginTrimmingPolicy = 2 prefs
It seems that Firefox 59 will effectively force about:config's network.http.referer.XOriginTrimmingPolicy to 2 (default is 0) when in private browsing.
In the URL bar, type: about:config
Search for the following setting: network.http.referer.spoofSource
Double-click to set its value to true. This basically sends the destination or target URL as the referrer.
Description: "If a page hasn't set an explicit referrer policy, setting this flag will reduce the amount of information in the 'referer' header for cross-origin requests."
Configurable referer behavior, including whitelist.
For a moment I thought this was an example to make a point…
> EFF researchers discovered this leak of personal health data from healthcare.gov to DoubleClick
It blows my mind that a site such as healthcare.gov would include 3rd party trackers. You guys in the US really don't care about privacy at all.
The government employees managing the contract typically do not have the expertise to evaluate the project or write proper specs. The HealthCare.gov contract was a mess of incompatible buzzwords.
The engineers have no vested interest in the project, as they're only there to complete that contract, and they're so many levels removed from the government agency that no one actually knows who they are, so when everything turns out poorly it won't reflect poorly on them.
Because their career doesn't really depend on the success of the project, as the government gets blamed for contractor failures while contractors get the credit for success, they don't really need to do more than meet the specs. A better way to do this would be to expand the number of engineers within the government through groups like 18F and USDS, and give preference to them over private industry.
Private contractors rarely work, but even when they do it's only when you have expenses that the government doesn't need (such as contracting a machine shop or car manufacturer to build something with their preexisting infrastructure). In software though, your only expenses are really your engineers and the cloud (as no one needs to run their own data center). The only thing subcontractors can do that the government can't is pay their employees more than the GS scale. However because the contract is supposed to be cheaper than the government just hiring employees themselves (as industry has "profit motive") they're going to have to cheap out elsewhere, either by hiring fewer developers or neglecting parts of the development.
All of this is solvable by Congress, simply boost pay flexibility, but there's no political motive to fix it as all of the contractors are political donors. As a result, government software sucks.
That can be extraordinarily expensive once it leaks out that classified government data is in the hands of an uncertified third-party cloud provider in some other nation, and you have to rush and pay two or three times more to get the contract changed to a local certified supplier. This is what happened here in Sweden in the equivalent of the DMV, which later implicated a further 40 different government departments that used the same practice.
When the costs go up by 200-300%, suddenly the idea of running your own data center sounds much cheaper. It ended up being the highest single cost the departments had, excluding salaries and rent. You can get quite a nice data center for those billions.
>IBM took over the agency's IT operations, and "IBM used subcontractors abroad, making sensitive information and an entire database of Swedish drivers’ licences accessible by foreign technicians who did not have the usual security clearance".
IBM used subcontractors, which is the profit maximizing stuff I'm talking about. When you pass stuff off to a for-profit corporation, they're going to do what they can to maximize profit, even if it screws the government over, because people will blame the government, not them.
>When the cost go up by 200%-300%, suddenly the idea of running your own data center sounds much cheaper. It ended up being the highest single cost the departments had, excluding salaries and rent. you can get quite a nice data center for those billions.
The costs never were lower though. They just looked lower on paper because the bill was less. But they weren't actually getting what they paid for.
There are already cloud providers certified for government use (at least in the US). But you don't need to pay a company to pay some other company. Government employees can do that fine.
But I've never seen a gov. contractor purposefully add analytics code.
It's far more likely that one of those free frameworks, ui-kits, or fonts, benevolently provided by one of the privacy-invading Silicon Valley behemoths, ended up in the code base.
Anyway, there's a good reason that the government doesn't hire their own developers. Hiring a GS14 (at least), who writes code all day, is going to end up being far more expensive than a contractor after paying the lavish benefits, pensions, etc. which federal employees receive.
Furthermore, most government projects are only a few years long. The government uses contractors because they can get rid of the dev teams when they're finished with the project. Can't do that with gov workers.
Every decade or so, there is a push to use less contracts and hire more in-house Federal workers. And then the payrolls become bloated, and the next administration goes back to less feds, more contractors.
Federal benefits are not that lavish on an international scale. You get things that are considered human rights in other countries and a good pension.
And while the benefits may cost the government more in the long run, the subcontractor is still taking a profit off what they're paying developers. I'd rather tax dollars go to the actual worker than to some corporation.
>Furthermore, most government projects are only a few years long. The government uses contractors because they can get rid of the dev teams when they're finished with the project. Can't do that with gov workers.
That's part of the problem though. If you're on a contract, you know you're expendable. You have no skin in the game other than doing the bare minimum to not get fired because the actual client (the government) is several layers removed from you.
And getting rid of contractors after you're done with them doesn't really apply to software. For one, many contractors work as contractors for years, moving from project to project. You can do the exact same thing in-house: groups like 18F and USDS mean that you can move employees around as necessary.
This is worth a deeper explanation. "Good at winning contracts" involves a lot of things that have nothing to do with the ability to do the job. From legislated preferences for "minority- and women-owned" businesses (which is usually a farce), to kickbacks to the bureaucrats who award the contracts, there are a LOT of reasons why government hiring "private" companies to do work goes wrong.
Frankly, I don't think that's really the case / honest here. There's been a movement to create "readable URLs" over the last few years, how many people considered that this could leak information through referers? I really can't say I remember seeing any discussion of that issue.
So I agree it's probably an issue that people still don't understand this, but it's not a new issue, and there's really no excuse for not knowing this.
When there's a subcontractor involved though, there's an additional profit motive involved as well as requirements legislators set up to maximize the profit for the subcontractor.
It also means requirements need to be written out in advance and in a way that may not be optimal. When you're working between employees of the same organization, that's a lot less of a problem, at least as long as legislators don't put plenty of hurdles up.
And yet, for some mysterious reason, Firefox hasn't broken ranks with Google by incorporating ad blocking, even though it's an obvious major feature and Firefox is losing market share every year.
We know why Google won't prioritize the interests of Chrome users but why is the only major independent browser seemingly corrupt in the same way?
Mozilla should be helping society by pushing it past an era of internet advertising and the clearly terrible clickbait-fake-news culture it creates. And yet, it does not.
Is Google using the money it pays Mozilla to "discourage" Firefox from going forward with ad blocking? As a concerned citizen, I sent an email to firstname.lastname@example.org requesting an investigation. Anyone with insider info should send it there.
More people use ad-blockers than Firefox has users. The best way to attract more users is to make the best possible browser. That means incorporating ad-blocking as users have loudly demanded for a decade.
... of Firefox.
Most people don't care as much about their browser choice as about the ability to access those major services. I would continue using Firefox, but only because I don't care about most of the popular sites. The majority would switch browsers in a heartbeat.
Over the past decade, I've had to switch ad blockers several times to deal with performance and privacy issues, and I still have to deal with a ton of broken sites that would make most people just assume their browser was broken. The logistics of including the blocker in the browser are a nightmare, and I doubt the results would lead most people to call it the "best browser."
You don't find it curious that Mozilla has made the decision not to implement the most user-demanded browser feature for over a decade?
There's a whole load of politics like this in being a browser vendor. Just because users demand a change, does not mean that it's the best for users. Users want broken webpages even less than they want ads, they are just too short-sighted to realize that blocking ads will lead to that.
And Mozilla really is already gambling hard with such things as Tracking Protection or this referrer blocking when Chrome has no such restrictions in place and has the majority market share. For many webpage owners, blocking Firefox users would probably not even drop their revenue by 10%. Leaving it unmaintained much less so, in the short term.
So, unless Mozilla manages to create perfect compatibility with Chrome, they cannot afford to block revenue from webpages. And creating perfect compatibility with Chrome would also mean blindly following all of Chrome's webstandard choices, ingraining tracking- and ad-supporting technologies into the web, which again would be shit for users in the long run. And really, just use a Chromium-fork, if this is what you're looking for. You don't need Firefox for that.
#2 Won't this be possible to bypass simply by encoding more in the domain part of the URL than in the parameters? So you switch from a.b.tld/foo?p=123 to 123.a.b.tld/foo?
Also, I remember someone I know got an email that a page he was linking to was about to move. I guess this was only possible because of the referer header.
Does anyone know of a good document on when/which browsers actually send the referer header?
Stripping everything but the domain should be mostly OK for the client, though. So if I come from a.b.com/foo, it just sends b.com as the referrer? Both "a" and "foo" can hold any amount of data, so those would have to go. Sending b.com should be enough to provide traffic statistics?
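A rough model of the strict-origin-when-cross-origin rule makes the subdomain loophole concrete. This is a simplified sketch, not Firefox's actual implementation: the path and query are trimmed for cross-origin requests, but the full hostname survives, so anything encoded in a subdomain still leaks.

```python
from urllib.parse import urlparse

def strict_origin_when_cross_origin(referring_url, target_url):
    """Simplified sketch of the trimming rule: same-origin keeps the full
    URL, cross-origin is cut down to scheme://host, and an HTTPS-to-HTTP
    downgrade sends nothing at all."""
    src, dst = urlparse(referring_url), urlparse(target_url)
    if src.scheme == "https" and dst.scheme == "http":
        return None  # downgrade: no referrer sent
    if (src.scheme, src.netloc) == (dst.scheme, dst.netloc):
        return referring_url  # same origin: full URL survives
    return f"{src.scheme}://{src.netloc}/"  # cross-origin: origin only

# Path and query are stripped for the cross-origin ad request...
print(strict_origin_when_cross_origin(
    "https://a.b.tld/foo?p=123", "https://ads.example/pixel"))
# ...but data smuggled into the hostname comes through intact:
print(strict_origin_when_cross_origin(
    "https://123.a.b.tld/foo", "https://ads.example/pixel"))
```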
Although tracking down fraud usually relates to fraudulent traffic, which mostly affects advertising, so I imagine a lot of the commenters here would say that is not a good reason.
HTTP Auth, on the other hand, is actual password protection which cannot be spoofed by a malicious client. Very different from robots.txt.
Basically, set the following:
network.http.referer.(XOriginPolicy|XOriginTrimmingPolicy|trimmingPolicy) to 2
network.http.referer.spoofSource to true
network.http.sendRefererHeader to 0
network.sendSecureXSiteReferrer to false
I'm very interested in this thread. Other replies here are correct - there are many ways that sites try to detect private browsing, and many ways they can get it right or wrong.
How do people feel about the "stealth" design goal of private browsing? Should it be a goal? What about a hide-in-a-big-crowd tactic? (E.g., how Tor tries to make all its users look identical.)
I think this shouldn't just be a goal for private browsing, this should be a goal for browsing period.
Shouldn't the default be to send only the top-level domain (if anything) of the source site as soon as you go somewhere else? The next site can't possibly use the complete URL of the referring site for any (non-shady) purpose?
1. What does it matter if the creators of a website are aware that a user is using private browsing to view the site? In other words, what would they do with this information?
2. As it's possible to strip out referral information using other means, assuming there's a practical use for this "private browser user" information, what could the site creators do to guard against false positives?
A lot of features/apps/websites have been built around the assumption that this information is sent, but it would be nice to start dropping it by default.
I managed to disable cookies by default using a cookie whitelist, and I counted many websites that broke.
I applaud Firefox for daring to break websites for the sake of privacy, but I'm waiting for websites to react.
Firefox should be even more strict regarding privacy: ask the user if they want to set a cookie, never save history, etc.
I'm using the extension that compartmentalizes website usage on Firefox, and this should be made the default.
Firefox 59 PBM now implements strict-origin-when-cross-origin by default, which trims the path off the referrer value of ALL 3rd-party requests.
2) It pisses off webpage owners, as it hampers their analytics and probably ad revenue. If webpage owners are too pissed off by Firefox or simply don't make enough revenue from it, they'll stop testing/building their webpage against it, which leads to broken webpages in this way as well.
Seems counterproductive that my browser takes so much care to encrypt my query strings, then leaks them to any host the site I'm visiting happens to pull content from.
One could block refer(r)er altogether, and then adjust on a site/resource basis as needed.
There is one current WebExtension-compatible extension that purports to do this, but when I tried it, it didn't want to cooperate with my configuration, despite adjustments. Further, it sucked Disqus comments into its local configuration dialog/page, something that I find... sucks.
Finally, it wasn't open source and didn't have a well-known provenance. All this didn't leave me feeling too confident in it.
P.S. uMatrix is supposed to provide layers of referrer control, but I haven't made the effort yet to switch over to it including switching some of my other points of configuration to use it instead.
It's simply good data hygiene and privacy.
The big news is them enabling it by default in some fashion (that is when you're in Private Browsing), meaning that all users now have this, not just the 0.1% who understand referrers (and have not forgotten to enable this the last time they installed Firefox).
As a power user who knew about this, you might not particularly care, but for users in general it's great, while it pisses off webpage owners.
Then again, even as a power user it's impossible for you to know about all of these sort of config options, so you might care to use a browser which tries to help its users out while having to keep an eye on not pissing off webpage owners too much, rather than a browser that tries to maximize revenue for webpage owners while trying its best to hide all the ways it infringes privacy from its users.
The misspelling of referrer originated in the original proposal by computer scientist Phillip Hallam-Baker to incorporate the field into the HTTP specification. The misspelling was set in stone by the time of its incorporation into the Request for Comments standards document RFC 1945; document co-author Roy Fielding has remarked that neither "referrer" nor the misspelling "referer" were recognized by the standard Unix spell checker of the period.
Edit: Actually, that doesn't seem to be true -- although there's some usage of "referer" in English, most of the hits in Google Books prior to 1960 turn out to be for the Old French word "referer".
Funny how the misspelling of a double consonant comes from someone who has a misspelt given name with an extra double consonant!
That a misspelling has become particularly common (or, as with my own name, much more common than the historically correct spelling) doesn't make it any more correctly spelt than "referer", in my opinion.
But if you disagree with the term "misspelling", I can formulate it another way: let's say that it's funny how the creative modern spelling "referer" instead of the historical "referrer" comes from someone who has a creatively spelt name "Phillip" where "Philip" was historically more common, and that both differ from the historical spelling on a double consonant. It's a much more awkward sentence though for such a trivial, passing remark.
I know this is treading into the classic prescriptive vs. descriptive linguistics debate, but the reason we can call "referer" a misspelling (rather than a creative decision) is that the original authors seem to admit it was unintentional. Fewer folks would be calling it a misspelling if the authors had meant to do it, e.g. to avoid a name collision with some other attribute named "referrer" or to honor a colleague named "Referer".
Edit: I'm glad you made this comment. I only know people with the name "Phillip" but I constantly find myself double-checking how their name is spelt before I refer to them (in text). I chalked it up to me being a bad friend but I guess it doesn't help that I might be seeing enough of the "Philip" variation to get confused.