Yes, but how does the average person really know that those suggested foreign VPNs are not CIA- or other government-coordinated honeypots?
In terms of cyberspace cat & mouse games, I think VPNs can be useful to evade Netflix streaming restrictions to particular countries or to hide your DNS queries from your ISP. You don't need a lot of trust in VPN entities to evade commercial businesses.
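On the DNS point: you don't even strictly need a VPN to hide queries from your ISP — DNS-over-HTTPS tunnels them inside TLS instead of plaintext UDP port 53. A minimal sketch using Python's stdlib against Cloudflare's public JSON DoH endpoint (the endpoint and record type here are just common defaults, not the only options):

```python
import json
import urllib.parse
import urllib.request

DOH_ENDPOINT = "https://cloudflare-dns.com/dns-query"

def doh_url(name: str, rtype: str = "A") -> str:
    """Build the GET URL for Cloudflare's JSON DoH API."""
    return DOH_ENDPOINT + "?" + urllib.parse.urlencode(
        {"name": name, "type": rtype})

def resolve(name: str) -> list:
    """Resolve a hostname over HTTPS; the ISP sees only a TLS
    connection to the resolver, not the queried name."""
    req = urllib.request.Request(
        doh_url(name), headers={"Accept": "application/dns-json"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        data = json.load(resp)
    return [ans["data"] for ans in data.get("Answer", [])]

# e.g. resolve("example.com") returns the A records without a
# plaintext DNS packet ever leaving the machine.
```

Of course this only moves the trust from your ISP to the DoH resolver, which is the same trade-off the thread is arguing about.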
However, using VPNs to evade government surveillance is a whole different ballgame. Because of the far reaching tentacles of government agencies, there's no reliable method to determine which VPN to trust.
I would suggest there is no inexpensive and reliable method to determine which VPN to trust.
I generally think that designing against state-level adversaries is a fool's errand, but if one went down that road, placing a hardware device that you own into a non-shared rack that you pay for inside a United States datacenter would be a very good start.
Now the complaints / warrants / letters come to you and it is your prerogative to respond as you see fit. Further, the uptime and tamper-resistance of your systems are under your own control.
Why even bother with a non-shared rack when you can't even guarantee the physical security of your hardware in the first place, since you've outsourced all that responsibility to a third party, the datacenter?
So you're right - the datacenter can open and close and tamper with the rack as they see fit - but I can notify myself of that and even video record it.
Interruption of the physical monitoring would be itself a signal.
I've never done this, but I have placed many strange and arbitrary devices within, and on top of, racks I have leased and nobody notices or cares unless you're a nuisance somehow.
For example: Mass surveillance and targeted surveillance are different privacy scenarios. Mass surveillance becomes ineffective when it becomes too expensive to identify people at scale, even if those people are not secure against targeted surveillance.
If CIA mass surveillance is your concern, a good VPN, Tor, and other privacy tools put you on their radar faster than buying a second laptop to do your private stuff without any privacy tools. Targeting and deanonymizing just a few million privacy-conscious people using Tor justifies more work per user than large-scale mass surveillance of a billion people.
Anyone using Tor or a good VPN will be in the much smaller pool of users whom the CIA/NSA may not be able to identify quickly but whom they are interested in knowing better.
If you want to avoid Facebook and other commercial trackers, VPN combined with good tools goes a long way.
> there's no reliable method to determine which VPN to trust.
I'll offer a method; people can agree or disagree as they like... Why don't we just extend the transparency to the creators, developers, and hosts? As in, the people central to the operation of the VPN give up a percentage of their privacy in order to gain public trust.
For example, it's well known that developer Bob eats and drinks at this pub house on Fridays, and welcomes chat from his customers. It's also known that he takes MMA classes twice a week, and likes to attend the local ballgame. We know and trust Bob because he's part of our daily life.
I'm sure an undercover operative can navigate this scenario (and that's true regardless of the subject), but the lives of an entire team of developers would be difficult to fake.
> Inb4 remote work, difficult to arrange
I'm wondering if this is actually a beneficial use of a "social score" type of system. If a person has the social "proof" that they are indeed Bob the software builder instead of Bøb, FBI#1337, then this question of authenticity might not be needed.
Bonus thought: I would imagine the global superpowers are trying to hack China's social scoring system via undercover agents. I'm not sure about the program's details, but surely it has weaknesses.
Even Verizon now peddles VPN services to take their slice of the privacy snake-oil market: https://www.verizonwireless.com/biz/security/wireless-privat...
I still don't trust VPNs for super-sensitive stuff. For that I'll set up my own somewhere.
As you say, though, it all comes down to trust.
I’m not a bitcoin person, but I still don’t understand why people think of bitcoin as more private than other transactional systems when the whole premise of bitcoin is a publicly shared ledger.
They're not perfect, but they're an improvement on, say, credit cards, which used to be the only option.
And if you want, some providers offer more privacy-sensitive cryptocurrencies like ZCash or Monero.
Monero or some other privacy focused coin would be better. Disclaimer: Not an expert.
You don't have to provide personally identifiable data to transact.
As far as I can tell, this is not achievable digitally with any traditional currencies.
Yes it requires connectivity that might yield personally identifiable data, just like every other method of connectivity via the internet.
Paysafecard does exactly that with traditional currencies.
I can buy those with cash, even at gas stations, and use them to pay online without ever sharing my personal details with anybody.
Tor is basically a free VPN that's more anonymous.
If that VPN provider is compromised, Tor gave you no additional protection, just a shittier internet connection while you sent all your traffic to a compromised endpoint over an encrypted channel.
Using Tor over a VPN, it doesn't matter as much if the VPN provider is compromised-- your traffic is still scattered and encrypted by means of Tor itself.
Based on traffic analysis you can be identified either way, so I can't speak to how well this holds up against hostile governments, but in such cases the former is an amateur mistake likely to lead to your disappearance. The latter gives you more of a fighting chance.
Simply visiting this site makes it more likely. 
For anyone who fears state-level surveillance: Using a VPN or Tor and some privacy plugins isn't enough. Don't assume that you're safe just because of it. In fact, you make yourself identifiable if you rely on such plugins.
I won't go into details on how to be able to have privacy that can compete with state-level surveillance, because you'll have to commit crimes to get it. If you think that your government is watching you - don't trust these simple instructions. It's way harder. Some people had to die because of this.
Many (authoritarian) governments don't let you use a VPN without putting you on a watch list. If you try to keep a low profile, you need other measures. False sense of security can be dangerous in some countries. I hope that those who need this (a fraction of those who read it) keep themselves safe.
edit: I think they should clearly state that this tutorial isn't suited for individuals who are in great danger w.r.t. surveillance. It's for people who are interested in privacy, not for people in life-or-death situations.
Which is a good reason for people who specifically don't have anything to hide to start using them - if you're not signal, you can help out by being noise.
If you are serious about security, don't use digital devices at all. If you are willing to accept some security risk in exchange for using digital devices, then you are best off using some obscure back channel, not some high-tech "theoretically unbreakable" crypto.
Security by obscurity is, ironically, the only thing that actually works (until you get caught), even though in theory it is the worst. Every crypto scheme is trivially broken by state-level actors.
“Arguing that you don't care about the right to privacy because you have nothing to hide is no different than saying you don't care about free speech because you have nothing to say.”
But ultimately, the idea that you want privacy because you have something bad to hide is a deeply flawed one, pushed by governments, maybe not necessarily because they are "evil" and want to abuse that power (although that certainly seems a factor to consider lately), but also because pretty much the only times they do want to bypass privacy laws is when they deal with criminals. So that gives them a very narrow view of the issue. When all you have is a hammer, every problem looks like a nail.
Privacy is both about "keeping things to yourself" and not wanting others to know everything there is to know about you for no good reason, as well as to protect yourself against potential abuses (from governments, but also criminals, unscrupulous companies, etc) that can't be predicted ahead of time. There are thousands of potential uses for the data, like say using your data to manipulate you with ads during elections, make you buy anti-depressants, make you pay higher insurance, and so on.
Yes. To put it succinctly: I may trust the current government to use this data for (mostly) good, but I don't trust all future governments.
You don't have the knowledge to know if you trust "that" government or "another" government. The best solution for all is always be suspicious of concentrated power....especially power that involves state-sanctioned violence against you.
Unfortunately, IMHO, it will take a large-scale "privacy disaster" unlike anything we've ever seen and where people are profoundly impacted, before the public gets wise to what it means to give up privacy.
Those have already happened but no one has explained the gravity of those situations thoroughly enough for people to understand them.
For example, the OPM hack dumped all of the people who applied (note: weren't granted - which is an important distinction) for security clearances in the United States. This is very extensive information about a person's life, going back at least 10 years, and includes people they knew at addresses they've lived at, spouses, fingerprint data, etc.
When you couple that with the Marriott/Starwood hack, this is information that includes passport numbers, times stayed at hotels, etc.
When coupled together, if I'm a foreign adversary, I now have a larger picture of a group of people who have security clearances, their travel patterns, their passport numbers, and - if coupled with the OPM data - a veritable life's history.
This might seem tin-foil hat but if I have this data, I can then turn to the great advertising machines of the world (e.g.: Facebook, Google, etc.) and attempt to obtain targeting information related to those individuals, specifically.
Going out further on a limb, let's assume I've found an individual of interest, whom I want to "spoof". I have their life's history, I have all of their identification-relevant data, and I - quite possibly - could obtain their historical advertising data to see patterns. If this person was living in the United States and is no longer doing so, say they moved away 10 years ago, I now have enough information to impersonate them (including fingerprint data) - in person - and all I require, then, is (possibly) a little cosmetic surgery to insert my foreign intelligence operative to impersonate them.
Now, I understand that all of this is a far stretch and is seemingly implausible but your large-scale "privacy disaster" has already happened (see also: 2016 elections).
What, instead, I think would have to happen (sad as this consideration is) is that the privacy breaches result in some other nefarious event that ties directly back to them.
Otherwise, people still live with this "it's no problem because I have nothing to hide" mentality. To them, so what if China stole their life's history? Unless someone takes that data and then makes them indebted for life (e.g.: to banks or the IRS, and that ends up in jail time, etc.), then they're still going to keep trucking along like it isn't that big of a deal, because no one has explained the gravity of the potential use of that data. I think that onus is shared between the governments (themselves), the businesses that were hacked, and the news agencies reporting these events.
If you told everyone affected by the OPM hack that they could be impersonated (almost perfectly), as long as the actor[s] had other data to correlate specifically to them, I imagine the response[s] would have been drastically different.
Instead, what you get is a couple of years of credit monitoring and that's about the extent of it - which, in my honest opinion - is woefully inadequate for the level of data that the compromise[s] exposed. Monitoring your credit is great and all, but it does absolutely nothing for someone impersonating you for employment at other companies - say those who don't require a major background check - to build a recent history in a new city - let's say Philadelphia - to later leverage that background to infiltrate a government sector.
Again, I get that most of what I'm positing is tin-foil hat seeming kind of stuff but the statement inferring that a large-scale privacy disaster hasn't occurred is, to me, a bit flawed (no offense intended or inferred) and discounts that it, indeed, has occurred - it's just that no one has explained it in the right/correct way[s].
 - https://en.wikipedia.org/wiki/Office_of_Personnel_Management...
 - https://www.wired.com/story/marriott-hack-protect-yourself/
Oh, and also people who happened to be living with them at the time (assuming the applicant followed the instructions). :-(
> ...large-scale "privacy disaster" has already happened (see also: 2016 elections).
> What, instead, I think would have to happen (sad as this consideration is) is that the privacy breaches result in some other nefarious event that ties directly back to them.
This is extraordinarily, almost axiomatically bad advice. The USG has an NSL process for obtaining information from US-based service providers. It has no process whatsoever for obtaining it from foreign providers. It can simply do it. We have the largest, best-funded signals intelligence agency in the world, and literally the only place in the world you have any procedural, legal defenses against them is here.
I'm not being normative. You don't have to like this state of affairs. But it is the reality in which we live, and signing up with a European privacy service won't keep your data out of the hands of US surveillance if they want it.
I think jurisdiction is the wrong question. The most important question to be asking about a service provider is "what information do they collect and retain about me". Sometimes these comparisons are hard to make from the outside, but other times you can make inferences just based on the features they offer and the protocols they use.
This is El Chapo getting burned by this exact bad advice, among other things.
The legality doesn't matter at all; you get "legal" protection from the NSA only to the extent you use a service hosted in the US.
Again: I don't think this is a good reason to pick US services. I think jurisdiction is simply of no use whatsoever in picking.
I like that it has OS-specific recommendations.
It wouldn't be that hard to provide a sentence-long justification for their avoids in addition to their recommendations.
However, I am afraid that using those tools to protect your own privacy is at best a temporary band-aid as long as the current trend of accepting more and more backdoors into our personal lives persists.
To change this a significant portion of people need to see the government not as the main savior from terrorism (poverty, disease, crime, etc.) but as a big bureaucracy where a lot of clerks care more about their paycheck than the end results of their day's work (which is fine). And a large portion of public servants who do care, care more about their career, power and perception than about people who chose them to govern (which is bad).
This view change, if it ever happens, should force government to justify their actions and pay more attention to real issues (poverty, crime, disease, terrorism) and less to scare tactics. A used car salesman can provide a useful service -- knowing that a customer suspects him to be a swindler forces him into a partial honesty. That said, I am not optimistic that this view change will happen soon.
You might think it's a trivial thing, but it actually tells a lot about you. If someone can trace your activities through time, it's essentially a detailed profile of you, and they can learn how you live and work. Sometimes it can even be used to de-anonymize you by cross referencing with your "real" online identity.
In general it's impractical for users to fully understand what kinds of metadata are included in each file format or sent by each application. EXIF data is often included in image files generated by cameras or image-editing software. The full path to a source code file may be embedded in the executable you compiled, and it may leak your personal information. Your operating system may send regular health reports to its vendor. A proxy service may append your real IP address to HTTP headers. Even some encrypted services don't encrypt or sign everything. 1Password, for example, in the past didn't encrypt the URLs of your saved login sites. TLS 1.2 doesn't sign the cipher suites, and it doesn't encrypt the client certificate.
Most of this software and these protocols were not designed with privacy as a primary concern. Even when they were, there is information the designers decided was okay to leak. It should be up to the users, however, to decide whether those design decisions are reasonable for their own use case. And even though many of these metadata leaks look like a targeted-surveillance problem, collecting them is actually scalable and can be adapted to mass surveillance.
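The proxy example above is easy to demonstrate. Here's a minimal sketch of how a transparent (non-anonymizing) proxy hop still hands your real address to the origin server; the header is the de-facto `X-Forwarded-For` convention, and the IP addresses are hypothetical documentation addresses:

```python
# Sketch: a transparent proxy that forwards requests but appends
# the connecting client's real IP, as many real proxies do.

def forward_request(headers: dict, client_ip: str) -> dict:
    """Copy the request headers and append the client's address to
    the X-Forwarded-For chain before passing the request upstream."""
    out = dict(headers)
    prior = out.get("X-Forwarded-For")
    out["X-Forwarded-For"] = f"{prior}, {client_ip}" if prior else client_ip
    return out

# Despite the proxy hop, the origin server sees the real address.
leaked = forward_request({"Host": "example.com"}, "203.0.113.7")
print(leaked["X-Forwarded-For"])  # 203.0.113.7
```

Chaining through a second such proxy just grows the list, so every "anonymizing" hop that behaves this way undoes itself.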
There probably aren't even as many Brave users in total as there are Tor Browser users, making someone who uses Tor inside Brave instantly unique worldwide.
Also, apparently Brave is a Chromium-based browser. Then why is it on a list called "privacytools"? Last time I checked, WebKit/Blink-based browsers have no possibility of counteracting fingerprinting or zombie cookies in any way, shape, or form.
This would either make you more suspicious or make your adversary (justifiably) think you are an idiot.
The mask only works if everyone is wearing the exact same mask. Which is the exact opposite of what Brave is doing.
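You can put the "same mask" point in numbers: the identifying information an attribute reveals is log2(population / users sharing it) bits, the Panopticlick-style measure. A small sketch with hypothetical user counts:

```python
import math

def surprisal_bits(population: int, matching: int) -> float:
    """Bits of identifying information revealed by an attribute that
    `matching` out of `population` users share: log2(pop / matching)."""
    return math.log2(population / matching)

POP = 2_000_000  # hypothetical total browser population

# Everyone wearing the same mask: a stock Tor Browser fingerprint
# shared by a quarter of the population reveals almost nothing.
common = surprisal_bits(POP, 500_000)  # 2.0 bits -> crowd of 500k

# A rare combination ("Tor inside Brave") shared by 2 users is
# effectively a name tag.
rare = surprisal_bits(POP, 2)          # ~19.9 bits -> crowd of 2

print(f"{common:.1f} bits vs {rare:.1f} bits")
```

The anonymity set shrinks exponentially with each unusual attribute you stack, which is exactly why a non-standard browser defeats the point of Tor.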
The main point seems to be that they're either local or able to be self hosted.
In theory, online password-manager service providers could be forced (or otherwise compromised) to access a user's password database or the interface to said database.
A self-hosted setup's encryption could potentially be worse than a commercial offering's if implemented wrong, but you can also restrict access to local/self-hosted databases more tightly.
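For what it's worth, the usual defense in hosted password managers is that encryption happens client-side: the master password is stretched into two independent keys, and only the authentication half ever leaves your device. A rough sketch of the idea (this is not any specific vendor's actual protocol; the iteration count and key sizes are illustrative):

```python
import hashlib
import hmac
import os

def derive_keys(master_password: str, salt: bytes):
    """Stretch the master password with PBKDF2, then split the output:
    an encryption key (kept local) and an auth key (sent to the server).
    Iteration count and key sizes here are illustrative, not a spec."""
    stretched = hashlib.pbkdf2_hmac(
        "sha256", master_password.encode(), salt,
        iterations=600_000, dklen=64)
    enc_key, auth_key = stretched[:32], stretched[32:]
    return enc_key, auth_key

salt = os.urandom(16)  # stored alongside the (encrypted) vault
enc_key, auth_key = derive_keys("correct horse battery staple", salt)

# The server stores only auth_key (or a hash of it). Even a fully
# compromised host cannot work backwards from auth_key to enc_key,
# so the vault ciphertext stays opaque without the master password.
assert not hmac.compare_digest(enc_key, auth_key)
```

The remaining weak point is exactly the one raised below: a coerced or malicious client update could capture the master password before derivation, which client-side crypto alone can't prevent.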
>Shouldn't, I want to pay money for my password manager?
Not if it's unnecessary to do so, and paying also adds a paper trail relating your account to your passwords, etc., should the provider be compromised.
Personally that's pretty far out of my threat model, but I still keep my password DB locally, because I figure if someone compromises my computer they'll get my passwords anyway (keyloggers, etc.), but at least an online service I have no control over won't get compromised and affect me.
Only real difficulty with local keepass databases is keeping them synced/up to date on my devices.
Could they really, though? 1Password, for example, extensively details their client-side encryption protocol. Unless they were forced to distribute a compromised client, there's really no downside to using it.
Not particularly familiar with the methods 1Password uses, but that is the general theory.
It's pretty out there and you'd likely have to be in pretty deep for a government (or some other attacker) to try and pull something like that (especially just to hit you personally).
There are similar pie-in-the-sky arguments for most software on your computer that auto-updates as well, I suppose (Windows Update, Google Chrome, whatever...).
I don't think it's a realistic concern for most people, but if your life (or your freedom) depended on your password manager you'd want the least amount of points of failure possible.
The same argument can be made for any software. This horse has been beaten many times already, and I don't think we really want another open vs. closed source flamewar....
Yes, Brave is just Mr. Eich being frustrated with his life, paying the privacytools guys, and paying people to post pro-Brave propaganda and downvote everything contrary to it.
Who can argue with vacuous conjurations like that? “That doesn't necessarily follow.” is the most charitable response you can expect.