1. Use a user-friendly password manager like Dashlane or 1Password with a long, unique master password and a second factor (one that isn't SMS-based). Password re-use is the #1 way accounts are compromised at the moment, and there are now good password managers that are easy to use with a low barrier to entry.
2. Use an extensive ad blocker like uBlock Origin, and use multiple profiles in your browser to separate your serious accounts like webmail and banking from general web browsing. The other common way of being exploited is drive-by malware and web-based exploits; a combination of blocking third-party content and separating your browsing profiles will prevent a lot of it. Don't feel guilty about blocking ads - most publishers are extremely negligent about what they allow on their sites via ad networks. Bonus: switch to Chromium (Firefox isn't sandboxed and exploits are too common), but alert yourself to Chromium updates with an IFTTT recipe on the release blog to <pick your notification method>, or alternatively remove Google, Flash, Java etc.
3. Get a VPN subscription and set it up on your laptop & mobile devices. Seriously, don't use open WiFi networks or shared networks without wrapping your connections in encryption. sslstrip is extremely effective, and many apps either don't verify/authenticate SSL connections or don't pin certificates. IVPN, PIA, the Sophos VPN product - take your pick.
4. Most home routers are super shit and full of holes. Upgrade to a router that supports open firmware and pick one of OpenWrt, DD-WRT, m0n0wall, pfSense etc. Bonus: run a UTM like Untangle (commercial) or Sophos (free up to 50 CALs iirc)
5. Encrypt your stuff - VeraCrypt is a decent TrueCrypt fork, but most operating systems now have built-in support for volume encryption - your local disk, USB sticks, or a file-based volume. Backups should go to encrypted media.
6. Be anonymous - create a disposable email address under a fake name to sign up for services. Even better, sinkhole a random domain name you register. No service outside of banking, insurance, health, etc. really needs to know your actual identity details.
Firefox seems to be the only browser in which one can maintain privacy and security (e.g. all the privacy tweaks from privacytools.io). Chrome doesn't allow for most of the tweaks, for example WebRTC can't be disabled.
For privacy the Tor browser - but even then only in a VM because of the prevalence of exploits. Regular Firefox will just get you fingerprinted in any case.
> unstable Chrome/Chromium releases
The build site I linked to lets you switch between trunk/stable
If you know what you're doing, you can change the WebRTC route settings with this extension: https://chrome.google.com/webstore/detail/webrtc-leak-preven...
The second difference, and a large advantage Google has, is the security team they've put together to find and fix bugs. Between Project Zero and engineering, Google probably has the best team in the world. Firefox doesn't really have an equivalent.
Then there is the legacy code that Firefox is built on and the problems that has led to. In Chrome, a successful exploit requires chaining 5-6 different bugs/exploits to bypass all the controls and sandboxes, while in FF many straightforward bugs become exploits.
This is most reflected in two places: first, the Pwn2Own contest, where FF does poorly, and second, the price of 0days for Firefox versus Chrome. The price of a Chrome exploit has never been less than $100k, and at the moment $1M+ offers are being turned down, while OTOH Firefox exploits started at $5-10k, currently go for $25-30k, and are common (common in a browser-exploit sense).
The ideal situation would be a Chromium fork with the Firefox UI / extensions / settings / profiles etc. built on top of it. I've wanted to build this project - a privacy/security-specific browser - for a long time, but have never had the chance to do it. I hope at some point somebody does; it's really complicated today to recommend a browser that is both secure and private.
It's in the OP:
"I highly recommend using KeepassX as a password manager, secured using a key file and not a password. Also, you should download the source code, compile it (using a Linux machine) and always look over the source code for rogue functions, you CANNOT afford a vulnerability inside the password manager."
That being said, I'm decent with C++, and yet auditing its code is a little daunting.
I don't really understand the reasoning behind that - rather than trusting a proven OS and its packages, with lots of eyes on the critical code, I should read the code of a cryptographic product myself and make a decision based on that? Common advice is not to write crypto code yourself, but reading it and deciding there might be a backdoor based on gut feeling is fine? And having identified some stuff I don't understand, should I then pick another program that is goofy but easier to read, and compromise security?
So if there are rules to improve the privacy of users, I would worry about making them look too complex or extreme. I suspect rather than picking the parts that work for them, people will go "ok this I can do, this I can do, and... wait, compile it myself and read the source?! - ok, I'll stick to post-it notes".
Because you're always just one burglary away from passwordocalypse?
My password database is stored on a USB key that I carry with me, with a regular copy made and securely stored.
Key file is stored on devices I use, in a directory restricted to my own access and on a drive which is encrypted. An encrypted copy is also stored on the USB key with the password database; this can be decrypted using a GPG key, stored on a Yubikey and also carried; if a device can be trusted enough, this is how I move the key file around.
Access to the database requires 3 things rather than two. A long passphrase could be recorded by an observer, who could then take my USB key. The key file ensures that they still do not have all that they need.
2. I do that. In addition I use this:
I have a Python script that builds a hosts file from several sources.
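A minimal sketch of such a builder (the source URL and the exact hosts-file format are assumptions - substitute the blocklists you actually trust):

```python
import urllib.request

# Hypothetical blocklist sources -- replace with the ones you use.
SOURCES = ["http://winhelp2002.mvps.org/hosts.txt"]

def parse_hosts(text):
    """Pull blocked hostnames out of hosts-file formatted text."""
    blocked = set()
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()      # drop comments
        parts = line.split()
        # entries look like "0.0.0.0 ads.example.com"
        if len(parts) >= 2 and parts[0] in ("0.0.0.0", "127.0.0.1"):
            blocked.update(h for h in parts[1:] if h != "localhost")
    return blocked

def build_hosts(sources=SOURCES):
    """Fetch every source and merge the results into one hosts file."""
    blocked = set()
    for url in sources:
        with urllib.request.urlopen(url) as resp:
            blocked |= parse_hosts(resp.read().decode("utf-8", "replace"))
    return ("127.0.0.1 localhost\n"
            + "\n".join(f"0.0.0.0 {h}" for h in sorted(blocked)) + "\n")
```

The output can then be dropped into /etc/hosts (or its platform equivalent) so the blocked names resolve to nothing.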
4. Most readers here should be able to build their own router with a Banana Pi and IPFire.
I eventually got many not-so-technical family members and friends to adopt Dashlane - which is easy to use and provides great support.
> 2. I do that. In addition I use this: http://winhelp2002.mvps.org/hosts.htm
That's a good idea - you can also configure a local bind/dnsmasq/unbound server to block based on these lists with ACLs (I'm sure if you Google each you'll find tutorials, like this one: https://github.com/jodrell/unbound-block-hosts)
Some of the better home router distros will also do this at the local network level.
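The conversion itself is simple - a sketch of turning hosts-file entries into unbound local-zone directives (the same idea as the unbound-block-hosts tool linked above; hostnames here are made up):

```python
def hosts_to_unbound(hosts_text: str, sink: str = "127.0.0.1") -> str:
    """Convert hosts-file style blocklist lines into unbound(8)
    local-zone/local-data directives that redirect blocked names
    to a sinkhole address."""
    out = ["server:"]
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0].strip()   # strip comments
        parts = line.split()
        if (len(parts) == 2 and parts[0] in ("0.0.0.0", "127.0.0.1")
                and parts[1] != "localhost"):
            host = parts[1]
            out.append(f'    local-zone: "{host}" redirect')
            out.append(f'    local-data: "{host} A {sink}"')
    return "\n".join(out) + "\n"

print(hosts_to_unbound("0.0.0.0 ads.example.com\n"))
```

Write the output to a file included from unbound.conf and every device on the LAN gets the blocking for free.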
I don't see how you could do better than something like 1Password. What is missing in your opinion?
I don't understand why they don't offer (at least) 2-factor key for the vault.
Also, they support TouchID on iOS devices, which is very useful. But in the US there has been at least one case of someone being legally compelled to unlock via TouchID, whereas being forced to give up a passcode is still debatable.
So at least offer a short PIN-plus-TouchID option, and support some 2-factor like Google Authenticator.
They must have at least considered these things. I don't see any security issues with this. Implementation is work, and perhaps a support hassle for them.
What are the technical reasons?
This is pretty much the nature of password managers. That password is only ever entered locally. If an attacker can grab local keystrokes, it's game over anyway.
>they don't offer (at least) 2-factor key for the vault
Neither TOTP nor any kind of push/SMS token can be used to secure data at rest. These are mechanisms to authenticate to a server. You could have "2 factor" for data at rest by storing part of the key separately, but there'd be nothing dynamic about it; copying the key material once would be sufficient to use it forever.
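A sketch of that "store part of the key separately" idea - deriving the vault key from both a memorized password and a random key file (parameter choices here are illustrative, not any particular product's scheme):

```python
import hashlib
import hmac

# Neither factor alone is enough to derive the vault key -- but note
# there is nothing dynamic about this: an attacker who copies the key
# file once has that factor forever.
def derive_vault_key(password: str, keyfile: bytes, salt: bytes) -> bytes:
    # Stretch the memorized password...
    pw_part = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    # ...then mix in the key file so both factors are required.
    return hmac.new(keyfile, pw_part, hashlib.sha256).digest()

salt = b"\x00" * 16          # fixed here only so the demo is repeatable
keyfile = b"\x01" * 32       # in practice: random bytes stored elsewhere
k_good = derive_vault_key("correct horse", keyfile, salt)
k_bad = derive_vault_key("correct horse", b"\x02" * 32, salt)
print(k_good != k_bad)  # wrong key file -> wrong key, decryption fails
```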
LastPass offers 2-factor to authenticate to the LastPass website, but your vault is cached encrypted on the client side, and such a cached copy can be opened using only the master password. (IIRC there is an option to disable this, which works by erasing the cached copy at the end of a session. Hardly bulletproof, and precludes having any sort of backup resilient to the failure of LastPass itself).
I can understand that TOTP cannot be used for encryption.
But the app is asking for authentication. Zero-knowledge proof game might apply. Of course the local app must have the decrypted key in memory.
I wish we could defend against the Evil Maid...
The app is not asking for authentication, it's asking for encryption. Otherwise an attacker could bypass the app's logic and read its data directly.
Take a look at FreeContributor.
It's important that you don't host any domains on the VPS you run the VPN on.
Security might not be a top priority at the VPS provider.
All your requests come from the same IP address (and the VPN provider might very easily give out your private info).
I think a VPN from a reputable provider (like f-secure) is better for most users.
I think a reputable VPN provider offers the better tradeoff, but there are legitimate reasons for self-hosting a VPN.
On top of that, it's fairly easy for an adversary to detect that you're on a self-hosted VPN: your IP is in a range assigned to a hosting/colocation provider, is not a Tor exit (there is a public list of those), and doesn't belong to any remotely popular VPN provider (easy to enumerate for a little money; lots of lists exist).
In exchange for that you have eliminated your ISP (or public wifi) as a threat, but added the hosting provider to the list of threats. And for any adversary that stands above the law, the routing infrastructure of your hosting provider is already a valuable target.
My Streisand hosted on AWS looks to the outside like anybody else's Streisand hosted on AWS, doesn't it?
Similarly, my f-secure egress looks like anybody else's f-secure egress, so what's the difference?
I don't really know, I don't use a VPN. Really asking.
They certainly let law enforcement and intelligence agencies know, often without a warrant.
Please read my comment as if the threat model includes panopticon governments, not common skids running aircrack-ng.
(I'm not saying you're wrong; again I've not really thought about having to thoroughly anonymize my own traffic.)
Why is that, if I may ask? I have the impression that the Great Firewall blocks a lot of domains by default, and they are allowed/blocked after a review once somebody tries to access them for the first time. I may be paranoid, but often I try to open an obscure site, it is blocked, and I have to use a VPN; a few days later the site can be accessed. Why not use a VPN server with a nice website that makes it look harmless?
- You might forget private whois and expose your identity.
- There might be issues with the private whois service that expose your identity.
- The contents of the website might expose you.
What does this mean? I've tried to figure it out from context, the article, and a quick Google search, but it's not clear how DNS sinkholing is going to help me stay secure.
If you're really personally targeted, then I agree with you. But for a casual person it's probably an easier thing to convince them of doing this instead of installing a password manager.
What accounts? At least for financial fraud this is certainly not true, phishing remains #1 by far.
I'd also hazard to guess that botnet logs result in far more hijackings than password reuse.
I tried this. Turns out to be a bad idea. SSH will walk through each private key and attempt to authenticate with it in order. That means a lot of failed login attempts, which in turn leads to getting locked out. SSH public keys are public for a reason.
What attack is this even preventing - that someone will be able to reverse ssh public keys and get the private? A better approach is to generate a unique key per client so that if you lose access to a device you can remove only its public key.
> Also, you should download the source code, compile it (using a Linux machine) and always look over the source code for rogue functions
So becoming an Underhanded C Contest judge is the price of admission to using the internet? Can anyone really be expected to do that? Can we blame anyone who gets owned because they didn't?
# Disable SSHv1
Protocol 2
# Only use a key explicitly provided by an IdentityFile directive
IdentitiesOnly yes
# %h expands to the hostname, and %u to the username
# (example path built from those tokens)
IdentityFile ~/.ssh/%u@%h
I think the thought is the security practice of compartmentalization. If you lose the private key you use for GitHub, Amazon, DigitalOcean, your home servers, etc... you've effectively given root away.
Now if my laptop is compromised, it doesn't matter if I have one key or ten, I've lost them all. But if there's something heartbleed-esque that allows individual private keys to be stolen when pushing commits to GitHub, I've at least isolated damage to my GitHub account.
1. Some sort of remote memory leak that leaks the current private key, I guess.
2. Some sort of relay attack where you can impersonate the legit host.
In both of these cases, it seems like at a minimum you would need to, on the client, set up an ssh config that limits each identity to each host so as to prevent the client from trying each key in sequence (and thus potentially exposing it). That's a huge hassle!
So I guess tl;dr: I can think of a few cases where this might be useful, but if you're always SSH'ing from the same laptop, this step can probably be pretty far down your list of things to do.
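For reference, that per-host ssh config is only a few lines - a sketch of ~/.ssh/config (host aliases and key file names here are made up):

```
# One key per service; IdentitiesOnly stops the client from
# offering every other key it knows about to this host.
Host github.com
    User git
    IdentityFile ~/.ssh/id_ed25519_github
    IdentitiesOnly yes

Host homeserver
    HostName homeserver.example.net
    IdentityFile ~/.ssh/id_ed25519_home
    IdentitiesOnly yes
```

With this, each host only ever sees its own public key, and revoking a compromised device means deleting one key.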
I don't think this is about security. Just about privacy.
Some people don't like that they can be identified by their public key. eg (I think) github allows public viewing of a specific users public key, and that allows other services you use with your public key to know your github account etc.
It's not a mainstream privacy concern, but there are some privacy oriented people that worry about it.
It should only use the specified key file then, AFAIK, without the cycling you mentioned.
I like KeePassX as well, but prefer to unlock using a password. I have a Yubikey programmed to output a 32-character random password that I generated, and I append to that a 16-character password that's in my head. I keep the Yubikey and the SD card on which I have the password vault separate. The SD card itself is encrypted* and the version of KeePassX I run is on the card and is one I compiled myself.
Not sure I'd be getting additional protection with a key file. But perhaps I am wrong.
*I did that so that someone couldn't just copy the KeePassX database off it when I wasn't looking and run some offline attack against it. The SD card also has a kind of social engineering defence mechanism on it to dissuade the curious from playing with it... I wrote the word INFECTED on it.
What is important is that in my daily life this works perfectly well, and I do not at all feel the annoyance of the added security compared to using the same dadada password on all websites.
I really recommend a head-stored + hardware-generated password too; this is working wonderfully.
There is no one-size-fits-all solution and it should clearly depend on the threat model. I can imagine why someone who could be expected to have the keys to CloudFlare's infrastructure might want to take extra care.
I'm not aware of ready-made solutions to locally decrypt cloud-stored data on mobile phones, though - I don't think you can mount TrueCrypt volumes on your phone. Anyone know of a way to do this?
Based on the protocols they offer, it is easy to mount encrypted volumes. LUKS is available for Android (rooted).
It may be considered a faux pas, but I have come to like the HTTP plugin for KeePass2, which allows Firefox to reach into my database when I come to sign in to an online account.
I'm not sure that this is actually possible in any reasonable sense. It's not that hard to slip an obfuscated back door into source code, especially in a complex system (ignoring the build chain and the whole trusting-trust thing).
Even if there are a small number of people who have the time and expertise to audit such systems, it just doesn't scale.
No one wants to audit every line of code they use (nor is that possible).
But if one relies on relatively popular open source software, just the fact that someone else could have audited it helps a lot. Add on to that the fact that you can use a linux distribution which keeps an eye on the vulnerabilities reported in the wild and updates the packages for you, and you are much better off over someone who only uses closed-source software and hopes and prays.
And lol at having trouble keeping up with your employees. At least they are productive :)
Privacy Settings: https://addons.mozilla.org/en-US/firefox/addon/privacy-setti...
Most people should just use an adblocker and strong passwords.
I don't know whether there is any place where people still do this, but in a community where everyone feels they belong and aren't driven to desperation, I could imagine an "open lock" policy working really well.
Everyone locking up their own stuff and blaming people who did not lock theirs down if they get robbed is in itself a form of arms race, which aren't usually optimal.
My parents live out in a rural area, and they never lock their doors, house or car. The odds of someone driving to their house and burglarizing it are just too low to worry about it - and if someone were specifically targeting their house, they could just break a window and get in that way.
In denser areas, however, that logic doesn't make sense; it's trivial to case dozens of houses in five minutes just by driving down a street.
Regarding hibernation/locking: many people leave laptops unattended in riskier situations than at home or at the office. As a trivial example, imagine somebody going around a university library infecting any unattended laptop with a virus.
If you're living as some kind of enemy of the state maybe it's just time to stop developing software. And do you really need to holiday in North Korea?
I will not let their fear tactics get in the way of my freedom of doing what I please without fear of leaks, theft or spying, be it directed toward my person or as a simple passive measure.
Same for password managers: Are there any that allow you to split your data into two categories: Protected by fingerprint and protected by passphrase? I'd love to see that feature.
I.e., you cannot securely encrypt something with a function of your fingerprint: anyone can cycle through fingerprint representations and eventually decrypt the data (or the key to the data). You can, however, authenticate yourself to someone (or something) which holds a plaintext encryption key, and once you have been given the key, decrypt the encrypted data. This only works if you can trust the person or thing to never give the key to an unauthenticated party. That only works with hardware, since any software which holds a key in plaintext can be examined to extract the key.
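To make the first point concrete, here's a toy sketch of why a key derived from a low-entropy biometric template is useless for encryption - the "template" is modeled (purely hypothetically) as a 4-digit value, and the whole keyspace can be searched offline:

```python
import hashlib

# If the key is a deterministic function of a small template space,
# an attacker just enumerates the space.
def key_from_template(template: str) -> bytes:
    return hashlib.sha256(template.encode()).digest()

vault_key = key_from_template("4831")    # key "protecting" data at rest

# Offline search over all 10,000 possible templates -- instant.
recovered = next(t for t in (f"{i:04d}" for i in range(10_000))
                 if key_from_template(t) == vault_key)
print(recovered)  # -> 4831
```

Real fingerprint templates have more entropy than this, but still far too little to serve as an encryption key, which is why the secure designs keep the real key in tamper-resistant hardware and use the fingerprint only to authorize its release.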
This would cover the case where you use your phone a lot and need to lock/unlock faster, while forcing a password entry when your phone gets stolen or used behind your back. You can still be forced to unlock it right after usage, but at this point you might have bigger problems.
If you are privacy conscious you should configure your browser to
a) block 3rd party cookies (all browsers except Safari have them enabled by default, even Firefox)
b) delete all cookies when the browser is closed.
Make it a habit to close the browser every now and then.
Case 1: If you're using a search engine not based in the US, and you're not a US person, then the NSA probably can't use any legal tools against you (depending on country). However, the NSA is allowed to use the full range of its capabilities to collect against you (PPD28 notwithstanding). They can infiltrate that service by technical or human means and carry out espionage activity without legal restriction (Title 50/EO12333). Further, they can retain the data unredacted for a long time.*
Case 2: On the other end of the spectrum, if you're a US person and you're using a US-based search engine, surveillance activities against you are far more complex. Warrants, NSLs, and/or other legal paperwork is involved, and there are strict rules on data retention, sharing, and minimization. That's not to say that there isn't surveillance, just that it comes with substantially more overhead. Meanwhile, most of the NSA's technical exploitation approaches are off-limits, and any collection/exploitation activity must be carefully managed.
Case 3: The intermediate case, where you're a non-US person using a US service, is a bit more hairy but still is better than the first case. While the NSA/FBI can utilize a range of legal tools (again, warrants, NSLs, etc) against you, because your data is likely entangled with US-persons data, it must also deal with all the overhead of minimizing and redacting that data (same as case 2). Similarly, the use of technical means against US providers is heavily restricted, so you won't be fighting against the same capabilities as you would be in case 1.
At the end of the day, which do you think is easier for the engineers at NSA: exploiting, entering, and just taking everything (case 1) or filling out a huge amount of paperwork and carefully handling the redacted scraps of data that comes back from the provider eventually (cases 2 and 3)?
I think you can make an argument for either side, but I tend to believe that technical exploitation is easier than legal, for now.
*Caveat here is that this intelligence data is hard[er] to use in US law enforcement activity against you. It's worth noting, however, that NSLs and FISA data are also non-trivial.
Those are concerning, because I'm positive that something I registered for in 2006 and never used again probably used a weak, re-used password.
The actual bit in the 30c3 talk where this was discussed is here: https://www.youtube.com/watch?v=KV4XnvE2p34#t=54m24s
I don't see how this makes sense. Assuming your private keys all live on the same machine (presumably with 0600 permissions in ~/.ssh), then if your machine is stolen and your user password compromised, access to one private key is the same as access to all of them.
But then again, if you don't trust the remote to know who you are, then why do you have an identity with them? I mean, the remote service is SUPPOSED to know who you are. That's kinda the point.
GitHub should know I'm the user who has access to push to repos a, b, and c.
AWS should know I'm the user who has access to update code or data at places d, e, and f.
But neither needs to know my full identity, or about each other, at all.
Paid services necessarily require a higher level of trust (since you are handing them money) than random internet services. So we are off-topic from ssh keys and identity.
If you don't want someone knowing your personal payment details (CC #, billing address), then pay in cash and use services that don't deliver things to your home. And if you can't, then just don't use the service.
But that's living in way too much paranoia for most of us.
Just imagine that somebody can compel you to hand over the SSH key for just one of the services you access. Then they get access to all of them.
I was actually almost involved in one such case; I haven't invented it out of thin air. If you can't imagine such a scenario happening to you, you're of course lucky, and you may like using one private key for everything. But the scenario is real.
The equivalent when the scenario is an attack, not a legal demand: some entity manages to hack the computer with which you access service A, on which you have only the private key for A - but not your other computer, with which you access service B using another key.
Separate keys: only your access to service A is compromised. One key: all accesses are compromised at once.
The real goal is privacy, given that your public key is published on GitHub and offered to any server you authenticate to.
There are some added benefits to managing separate ssh keys for each server: it forces you to use tools to manage your keys, which makes it easier to mitigate disaster when the time comes to rotate your keys after a compromise.
If you do want Google apps, at least turn off all the creepy features like Google Now, location history, etc.
I laugh when websites etc. ask for a phone number to help secure my account. My first thought: great idea - so now when you get hacked, you can give up my phone number too!
Internet has been and always will be Mos Eisley spaceport to me.
I'm worried about this. And what about .tar.gpg backups - if I lose a single byte, do I lose the entire file?
This is hard to recommend to everybody, but I use SELinux and this way I am more sure that my private keys won't get stolen.
For the rest of the time, I use an XMPP-Skype transport (gateway) to stay connected with ~100 of my Skype contacts. This XMPP-Skype gateway handles 1:1 chats and groupchats, which is OK for me. I host this system as a public service, so if you are interested, feel free to check http://decent.im . This is a work in progress on deploying powerful open source stuff in a supercharged and easily reproducible way - so no Slack killer yet, and things are dirty - but it's a handy tool for me (and a few other account owners) to aggregate all one's messaging into one very flexible mechanism.
I'd love to switch from a software to an offline, open source, and self maintainable solution that will work for everything, not just websites/when I have my browser open.
Would rather use a third party solution that's not so easily coerced.
You're trading privacy for security, and where you have less security your privacy is long gone.
Using encryption on a laptop can unfortunately be very battery-hungry.
If you're looking for a tool which has a ton of easy security guides all in one place, you might like to try Umbrella App. It has lessons and checklists on everything from how to send a secure email to how to deal with a kidnapping. Built by the human rights and tech community, it's open source and available on Android.
Ends blatant plug :)
Some people will click on exe's because they believe the virus checker will protect them.
Is it even possible to use the web nowadays without JS enabled?
I've been using NoScript for years and how much is blocked never ceases to amaze me. 99% of the scripts that most sites run have nothing to do with viewing content, or usability, and everything to do with tracking (there are usually multiple instances, sometimes dozens, on a single page; it's astounding).
Another nice feature in NoScript that I just picked up on is the shift+left-click option in the script list. This allows me to investigate what that particular script is for, and choose to permanently block/allow it. Very handy, and also eye-opening in regards to privacy.
Hiding non-suspect behavior is, for everyone watching, the same as hiding very suspect behavior. If you do this and make a single mistake (anything, really - speeding could be enough), there could be a red flag on your file that ensures your possessions will be searched (and possibly taken), and you should be prepared to spend some time in jail.
I get it, everyone should be hiding all their activity online so that hiding your activity online isn't suspect behavior. But I really don't think that will ever happen, and I'd rather be an open book about all my behavior than try to hide as much as possible while becoming a target.
I will probably piss myself and cry if I ever really "become a target" as happens in China, cartel-controlled parts of South America, dictatorships etc. But I will be damned if I don't make some kind of token resistance to us going down that path, if all it costs me is keeping my privacy and maybe some legal hassle plus the cost of replacement if my stuff gets seized.