11. Validate your controls through extensive code review and testing. Don't assume that just because something is open source, this has already happened. Don't just take security "best practices" as the truth.
To principle 11, this year we were once again reminded why regurgitating security advice without looking into the actual implementation can be a problem. For years, security and privacy advocates told users to use proxies in combination with HTTPS to protect their privacy and security, yet it turns out that this advice, as applied to iOS and macOS, allowed full middling of HTTPS connections by any bad actor with access to the user's network. For all other operating systems, as covered in the CERT advisory, it led to the ability to phish for authentication credentials:

http://www.falseconnect.com/
Which security and privacy advocates have told users to use proxies to protect their privacy and security? Free proxies are widely recognized to be a threat to both.
This has been pretty common guidance over the last two decades. It goes something like this: use a proxy along with HTTPS to protect your privacy and security. Here's just one example of the guidance from the EFF:
"Use web proxies and anonymizing software like Tor (advanced)..."
On top of that, there are vendors like BlueCoat who've built a business on selling proxies to government agencies and corporations.
You complain that a web proxy can be MITMed, but the very same is true of Tor exit nodes. Should the EFF therefore not recommend the usage of Tor and/or Tor exit nodes?
You don't use a web proxy to protect your search privacy against a nation state actor; you use it to protect your search privacy against the search engine(s).
Also, if you actually read the link you posted, you'd find that 1) it is flagged as 'advanced', which suggests some technical expertise on the part of the user; 2) nowhere is it suggested to use someone _else's_ web proxy; 3) instead, Privoxy (software, not a service) is suggested because it has privacy-enhancing features; and 4) anonymizers are suggested. As for anonymizers, that suggestion comes with a fair share of warnings. Basically, in the article the EFF recommends Tor.
We've known for years that HTTP (clear-text) traffic can be middled at exit nodes. What FalseCONNECT showed was that HTTPS (encrypted) traffic via web proxies could be middled. We should all be very, very, very troubled by that.
In terms of the EFF article, it's just one of many which recommend the use of proxies, and it was just one example showing that this guidance has been around for quite a while. There are even commercial web proxy providers like TorGuard which market web (HTTP) proxies:
> In terms of the EFF article, it's just one of many which recommend the use of proxies, and it was just one example showing that this guidance has been around for quite a while. There are even commercial web proxy providers like TorGuard which market web (HTTP) proxies:
Again, the article (which dates from 2006) recommends: 1) Tor; 2) Privoxy (which is not the same as 'recommending proxies', since you can run it on 127.0.0.1); 3) Tor + Privoxy (guess where Privoxy runs); and 4) anonymizers.
The article should indeed not recommend #4 if protecting against a nation state. But that doesn't seem to be the goal of the article. The article is out of date, and its focus is on protecting the user's privacy from search engines and the like, i.e. against profiling by ASPs.
A better question is "should out of date articles have warnings about them being inaccurate and/or dated?"
Finally, people who anonymize their BitTorrent usage via a proxy or VPN generally do this to hide copyright infringement or to get around blockades. The intention is to hide their IP address from the MPAA (and the like), not the NSA (and the like). On top of that, "TorGuard" seems to have very little to do with "Tor"; that alone tells me not to trust such a service.
Security requires a combination of things. The only proxies that worked were VPNs and protocol guards, since their job can be handled almost entirely at that layer. Doing native or web app security while relying only on that layer is asking for it unless you trust everyone you're connected to. That doesn't stop companies from doing it.
The original criteria for high-assurance security, TCSEC's A1 class, said you must secure the endpoints, network connections, databases, and application-layer software. Those systems resisted penetration by NSA's hackers during evaluation. Then there are the mainstream "security" recommendations, which said to use low-assurance firewalls + insecure endpoints + bandaid software on them + optionally security- or privacy-oriented proxies. This was to get the benefits of those beautiful, fast-moving COTS OSs without investing anything in real security. It came with the detriment you mentioned, among many, many others that won't go away for some time, if ever.
The key thing FalseCONNECT demonstrated was that HTTPS in combination with a web proxy wasn't sufficient. This was a case where everyone assumed that using HTTPS via a web proxy would be safe, to the point that Microsoft even enabled auto-proxy support by default in Windows. Unfortunately, a few implementation mistakes were enough that the assumption that HTTPS was safe when using a web proxy no longer held true.
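For readers unfamiliar with how proxied HTTPS works, here is a minimal sketch (not the FalseCONNECT proof of concept; the proxy and target hosts are placeholders I made up) of the plaintext CONNECT exchange that happens before TLS ever starts, and the point at which a client must not trust anything the proxy sends back:

```python
# Minimal sketch of HTTPS tunnelled through a web proxy. The CONNECT
# request and the proxy's reply are plaintext and unauthenticated; TLS
# only begins after the tunnel is up. Hosts/ports are placeholders.
import socket
import ssl

PROXY_HOST, PROXY_PORT = "proxy.example.net", 8080   # hypothetical proxy
TARGET_HOST, TARGET_PORT = "www.example.com", 443

with socket.create_connection((PROXY_HOST, PROXY_PORT)) as raw:
    # Step 1: the CONNECT request travels to the proxy in the clear.
    connect_req = (
        f"CONNECT {TARGET_HOST}:{TARGET_PORT} HTTP/1.1\r\n"
        f"Host: {TARGET_HOST}:{TARGET_PORT}\r\n"
        "\r\n"
    )
    raw.sendall(connect_req.encode("ascii"))

    # Step 2: the proxy's reply is also plaintext and unauthenticated.
    # FalseCONNECT hinged on clients processing the *body* of non-200
    # replies (e.g. a 407 carrying attacker HTML) as if it came from the
    # HTTPS origin. A careful client keeps only the status line.
    reply = raw.recv(4096)
    status_line = reply.split(b"\r\n", 1)[0]
    if not status_line.startswith(b"HTTP/1.1 200"):
        raise ConnectionError(f"tunnel refused: {status_line!r} (body ignored)")

    # Step 3: only now does TLS start, authenticating the real origin.
    ctx = ssl.create_default_context()
    with ctx.wrap_socket(raw, server_hostname=TARGET_HOST) as tls:
        tls.sendall(b"GET / HTTP/1.1\r\nHost: " + TARGET_HOST.encode()
                    + b"\r\nConnection: close\r\n\r\n")
        print(tls.recv(200))
```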
The ten principles, according to the post, are a key principle (number 1) and the nine that follow it:
1. Do not rely on the law to protect systems or users.
2. Prepare policy commentary for quick response to crisis.
3. Only keep the user data that you currently need.
4. Give users full control over their data.
5. Allow pseudonymity and anonymity.
6. Encrypt data in transit and at rest.
7. Invest in cryptographic R&D to replace non-cryptographic systems.
8. Eliminate single points of security failure, even against coercion.
9. Favor open source and enable user freedom.
10. Practice transparency: share best practices, stand for ethics, and report abuse.
I wrote two pieces on these things. The first is general stuff for both security and limiting the legal reach of LEOs. The second is incrementally designing something like Tor with assurance against an NSA-level opponent, using methods that stopped them in the past and then some overkill on top of that. The latter is to cover any advances they might be making.
Using end-to-end encryption might be a good example: if done right, you won't give away your users' secrets even if someone's got a gun to your head - because you can't.
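As a rough illustration of why "you can't": a minimal sketch using the PyNaCl library (my choice of library, not something from the article) in which the operator only ever relays ciphertext, so there is nothing to hand over even under coercion:

```python
# Sketch of end-to-end encryption with PyNaCl: keys live only on the
# endpoints, and the service operator merely relays opaque ciphertext.
from nacl.public import PrivateKey, Box

# Each endpoint generates its own keypair; only public keys leave the device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts directly to Bob's public key (random nonce is prepended).
sender_box = Box(alice_key, bob_key.public_key)
ciphertext = sender_box.encrypt(b"meet at noon")

# This opaque blob is all the operator's server ever stores or forwards.
relayed = bytes(ciphertext)

# Only Bob's private key can open it; the operator could not, gun or no gun.
receiver_box = Box(bob_key, alice_key.public_key)
assert receiver_box.decrypt(relayed) == b"meet at noon"
```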
You can if the app supports automatic updates: just push a compromised update to the target and collect the plaintext. Therefore, end-to-end encryption only fits if the app doesn't do automatic updates and the target is unlikely to do a manual one that's potentially compromised.
Or if it, for example, does proof-carrying automatic updates that are verified before install.
Something as simple as publishing the hash of the update in a blockchain and then widely distributing the source to construct reproducible builds for that hash would go a long way. You don't even need to make it open source; you could just have a robust enough set of geographically distributed testing labs inspect the code and verify the published hash.
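A minimal sketch of the client-side check, assuming the expected digest has already been obtained out of band (from the blockchain, a transparency log, or the testing labs); the only point is that the expected hash must not arrive over the same channel as the update itself:

```python
# Sketch: verify a downloaded update against an independently published
# SHA-256 digest before installing it. How the digest is anchored
# (blockchain, log, labs) is out of scope for this snippet.
import hashlib

def verify_update(update_path: str, published_sha256: str) -> bool:
    """Return True only if the update file matches the published digest."""
    h = hashlib.sha256()
    with open(update_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() == published_sha256.lower()

# Hypothetical usage: refuse to install anything that doesn't match.
# if not verify_update("app-update.bin", expected_hash_from_ledger):
#     raise RuntimeError("update rejected: hash mismatch")
```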
> Sandboxing, modularization, vulnerability surface reduction, and least privilege are already established as best practices for improving software security.
And yet Tor Browser Bundle still uses Firefox, which is going to get sandboxing Real Soon Now (8 years after Chrome released with it). Just two weeks ago, we heard about another FBI malware discovered in the wild exploiting a Firefox 0-day to deanonymize Tor users; who knows how long it was used before being discovered, or what other exploits may be lurking out there.
To be fair, I'm not sure whether the Chromium sandbox protects against 'mere' IP address disclosure, but still...
To be honest, I would only feel safe using Tor over SSH or with curl -H "". Firefox almost feels anti-privacy because it's so difficult to disable the headers and JavaScript.
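For what it's worth, here's a minimal sketch of that kind of stripped-down client in Python rather than curl, assuming a local Tor daemon on its default SOCKS port (9050) and the requests[socks] extra (PySocks) installed; the URL is just an example:

```python
# Sketch: route requests through a local Tor SOCKS proxy and drop the
# default headers the requests library would otherwise send.
import requests

session = requests.Session()
session.proxies = {
    "http": "socks5h://127.0.0.1:9050",   # socks5h: DNS is resolved via Tor too
    "https": "socks5h://127.0.0.1:9050",
}
session.headers.clear()                    # drop User-Agent, Accept, etc.

# check.torproject.org reports whether the request actually arrived via Tor.
resp = session.get("https://check.torproject.org/")
print(resp.status_code)
```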
On "give users full control over their data." What about if a third party could use that user's account to gain access to their data? What are best practices around that?