DOJ: We don't need warrants for e-mail, Facebook chats (cnet.com)
179 points by declan on May 8, 2013 | 134 comments



I might be too late for anyone to see this, but I think it's an important point.

A lot of people assume the origin of wiretapping laws has to do with protecting citizens from the police. This isn't the case, at least not originally. Wiretapping laws were about protecting citizens from each other. Let me explain:

Once upon a time, it was too expensive to run a phone line to each house. AT&T had the brilliant idea of running one phone line and having many people on a street share the line (the rich, of course, enjoyed private communications). So if there were seven houses, and you were in house number 5, you would only answer calls that rang 5 times. The problem is that since all the houses on the block shared one phone line, anyone nosy enough to butt in could eavesdrop just by taking their phone off the hook. The original wiretapping laws were set up to protect citizens from this, not the Government.

As a side note: before the North American Numbering Plan, phone numbers were names followed by numbers. An example might be Oxford7, or OX7 on the number pad, which would ring the 7th house in the Oxford area (Oxford is a placeholder and doesn't refer to Oxford proper.)


> This isn't the case, at least not originally. Wiretapping laws were about protecting citizens from each other.

Thanks for this. I wasn't aware of this bit of history.

But I don't care. We need to be protected from the government and need new wiretapping laws for that end.


I COMPLETELY agree.

Also, people don't realize the global negative consequences of governments supplying research grants towards the analysis and monitoring of their citizenry.

For more on this subject, check this out: http://thepbxblog.com/2013/05/08/how-wiretapping-in-the-us-h... (DISCLAIMER: I wrote this blog this morning).


interesting. i assume you're referring to legislation at a state level? got any links? i haven't read much about that era, but from what i have, it sounds like things were awesomely crazy back then w/ PIs tapping for clients, corporations tapping each other, etc.

at least at the federal level, everything i'd read places the catalyst for modern wiretapping laws at the appeal of seattle bootlegger Roy Olmstead, whose conviction rested on evidence gathered by a warrantless wiretap by "rogue federal law enforcement officers." his appeal reached the supreme court in 1928, and his conviction was upheld 5-4 on the basis of 18th-century trespass doctrine. in 1934, congress passed the Communications Act of 1934, which made wiretapping a federal criminal offense and rendered any evidence obtained from it inadmissible. it wasn't until the Omnibus Crime Control Act of 1968 that the constitutionality of wiretapping for investigative purposes was articulated.

this gives a better overview and more detail than wikipedia: http://www.americanbar.org/content/dam/aba/administrative/li...


Alright so digging a bit:

At common law, “eavesdroppers, or such as listen under walls or windows, or the eaves of a house, to hearken after discourse, and thereupon to frame slanderous and mischievous tales, are a common nuisance and presentable at the court-leet; or are indictable at the sessions, and punishable by fine and finding of sureties for [their] good behavior.” [1] Very Early Eavesdropping law.

but it wasn't prosecuted very often and faded from the common parlance:

“Eavesdropping is indictable at the common law, not only in England but in our states. It is seldom brought to the attention of the courts, and our books contain too few decisions upon it to enable an author to define it with confidence.... It never occupied much space in the law, and it has nearly faded from the legal horizon.”[2]

The first wiretapping laws were enacted by Congress during World War I:

40 Stat.1017-18 (1918)(“whoever during the period of governmental operation of the telephone and telegraph systems of the United States ... shall, without authority and without the knowledge and consent of the other users thereof, except as may be necessary for operation of the service, tap any telegraph or telephone line ... or whoever being employed in any such telephone or telegraph service shall divulge the contents of any such telephone or telegraph message to any person not duly authorized or entitled to receive the same, shall be fined not exceeding $1,000 or imprisoned for not more than one year or both”); 56 Cong.Rec. 10761-765 (1918).

And you're right that the constitutionality of wiretapping for investigative purposes wasn't articulated until 1968, but this doesn't address neighbors spying on neighbors.

I think we found the same document: http://www.fas.org/sgp/crs/intel/98-326.pdf

I'll ping my buddy to see if he can shed a little more light, but this is related to wiretaps, which is slightly different from eavesdropping.

[1] 4 BLACKSTONE, COMMENTARIES ON THE LAWS OF ENGLAND 169 (1769). [2] 1 BISHOP, COMMENTARIES ON THE CRIMINAL LAW 670 (1882).


Alright so I believe you're correct, again with respect to federal guidelines, and that the state guidelines are murky. I believe I may have to rescind my post at the top of this thread according to my friend, whose name I'll redact for his privacy:

"It's funny -- one of the ideas I had batted around with my publisher for a second book was a history of wiretapping. :-) But, I never got around to doing any research and don't know the definitive answers to your questions off hand.

In the thread you linked to below, mrexroad is correct about the Communications Act of 1934 (and section 605) being the first wiretapping federal law, at least that I'm aware of. And Olmstead is indeed a seminal case in the field, but whether it was the first, I don't know. The American Bar Association article he linked to seems pretty definitive.

I had not ever heard the theory that wiretap/eavesdropping laws originated due to party lines. Not saying it isn't true, just that I've never heard that."

So that's that. I'll have to blame some of the old engineers I talked to at AT&T for my miseducation, although they may have been referring to local (non-federal) laws.

This has been fun :).


Oh Joy! I'll go see if I can dig up some links. Thanks for sharing this. I've seen this before, but I believe the legislation, at the state level, was driven by nosy neighbors. I'll ring up a friend of mine who's an expert on the subject to see if I can get some commentary.


> you would only answer calls that rang 5 times

How would the person at house 6 ever get a call if house 5 always picked up the phone before it had a chance to ring 6 times? :)

The way this actually worked was that the ringing cadence was different for each subscriber: http://en.wikipedia.org/wiki/Party_line_(telephony)


My uncle switched to a party line in the early 1980s because it was cheaper and since everyone had private lines by that point, he didn't have to share it with anyone.


Guys, when will it sink in? The technology to protect yourself from wiretapping is in your hands. End-to-end asymmetric encryption has never been more accessible. Explain to your friends and family what PGP is and start using it.

Maybe the NSA has it cracked, maybe not. But the IRS sure doesn't, nor do the FDA, DEA, FCC, FBI, Google, Apple, Facebook, or Microsoft, for that matter.

EDIT:

Here's how it works: I run PGP/GPG (GPG is the free, open-source implementation of the same OpenPGP standard that PGP uses) locally, and the first time I use it, it creates a keypair: a "public key" and a "private key". I post/share my "public key" online (and you assume it's really me sharing it with you - this is important).

   gpg --gen-key
   gpg --armor --output Desktop/mqudsi.asc --export "Mahmoud Al-Qudsi"
For example, here is my public key: http://neosmart.net/downloads/miscellania/mqudsi.asc

Anyone can now use this key to send me a message encrypted so that only I can read it. They just need this file (imported into their own gpg keyring); they tell gpg to encrypt message X for the user with public key Y. Their own private key is only needed if they also want to sign the message.

The result of that command (gpg --encrypt --recipient 'Mahmoud Al-Qudsi <mqudsi@neosmart.net>' toencrypt.txt) will be ciphertext that no one but the holder of the private key (me), which should never be shared, will be able to decode.

Add --armor and the result is plain ASCII text: you can send it via email, text message, snail mail, whatever. There are apps that automate the encryption procedure as part of the sending process.
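To make the round trip concrete, here is a minimal sketch of both sides, assuming the key exported above and a placeholder message.txt:

   # Sender: import the recipient's public key, then encrypt (ASCII-armored) to it
   gpg --import mqudsi.asc
   gpg --armor --encrypt --recipient 'Mahmoud Al-Qudsi <mqudsi@neosmart.net>' --output message.txt.asc message.txt
   # Recipient: only the matching private key can turn this back into plaintext
   gpg --decrypt message.txt.asc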


The problem I've had is not so much explaining what PGP is, or why to use it. It's the software.

Sending an encrypted or signed message to someone who doesn't have your keys throws all kinds of scary messages in most email clients. Maybe not a problem for us™, but a huge problem for most people who find they can't send messages to their friends without remembering to do step X for people A, B, and D, but not C or Q. And even if you explained this to them, will they remember a month down the line when their friend calls them because they think their computer got hacked? (true story. nobody reads error messages, and encryption-related ones are among the most cryptic and scary looking.)

Then try to convince them to keep using encryption / signatures for you, when it breaks for other people and makes sending an email more complicated, and they have to keep track of who C and Q are. Then try to convince them that, even though they have nothing to hide, encryption is still useful.

Then do this all over again when they get a new computer and forget to install PGP and have already lost their entire keychain.

--

The crypto is here, and yes, it has never been more accessible. The software using it is still garbage.


>Sending an encrypted or signed message to someone who doesn't have your keys throws all kinds of scary messages in most email clients.

Seriously. The way PGP should work is: Whenever you send an email, it puts a header in the outgoing message with your public key. The header is not normally shown to the user and is ignored as unknown by non-compliant email clients. Then, whenever you send a message to anyone you've ever received an email from, you already have their public key, so your email client automatically encrypts the message before sending it, and their client decrypts any message it receives encrypted.
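A minimal sketch of that auto-encrypt step, assuming a hypothetical send_mail command for the actual submission (and that the header mechanism has already populated your keyring):

   to="friend@example.com"
   if gpg --list-keys "$to" >/dev/null 2>&1; then
       # we already have this recipient's key (e.g. from an earlier header): encrypt
       gpg --armor --encrypt --recipient "$to" --output msg.txt.asc msg.txt
       send_mail "$to" msg.txt.asc
   else
       # no key known yet: fall back to plaintext rather than breaking the email
       send_mail "$to" msg.txt
   fi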

This is obviously ignoring a whole bunch of problems. What happens if your public key changes? What happens if the attacker sends an email from your address to the user? (Presumably in both cases the recipient will get a message complaining that the key has changed, and DKIM and the fact that your email server authenticates you will help with the second.) But here's the thing: Those problems don't happen normally. The average user doesn't encounter them in the first six months of using the software. They just install a client that supports the protocol and automatically get encryption for messages exchanged with anyone else using a supported client, without having to do anything special.

I think this is one of those "the perfect is the enemy of the good" scenarios. The people who want encrypted email want it to be secure against the NSA coordinating with AT&T and your email provider. Which would be great, if it didn't make the UX so terrible that no one uses it, causing everyone to default to no encryption. Do the above and you still have good security if you verify public key fingerprints manually, but it makes the process of encrypting your email as simple as installing the software; and if you don't verify keys, you're still safer against a large variety of attacks than the primary alternative of not using any encryption at all.


You forgot the most important part: validating that the public key you've received actually belongs to that person. That may be as simple as automatically checking public servers for a match and providing some confidence percentage based on how many people signed it, or better yet, weighting those signatures by whether you trust the signers.

Without that, it's not just useless, it's detrimental to the system, as there could be lots of bogus keys accepted by people (imagine a virus that automatically generates and adds a PGP key to mail clients before sending to everyone in the address book, just to make it more likely to pass spam filters). Bogus keys in the web of trust would be a big problem.

In fact, if PGP/GPG were more popular, I imagine there would be the accompanying glut of horrible passwords used (or duplication from easily gleaned passwords), and pretty soon some virus would start automatically signing things it shouldn't on infected systems, and then the web of trust that the system relies on for third party verification wouldn't be so trustworthy.


>You forgot the most important part, validating that the public key you've received is actually for that person.

It's not that I forgot that part, it's that that's the hard part. That's the reason PGP is hard to use: They try to make sure you do it securely. And you can't have some third party do that part for you without trusting them, and the whole idea is not to have to trust any third parties. What public servers are you going to use here? Does each email user have to run their own server? Unless you have a single central server, how do you know which server corresponds to which user?

Automating web of trust could be interesting though. Imagine you get an email from a new user that you've never received any email from before. There is some new P2P network where, if you have someone's public key, you ask that user whether they know the new user's public key, and they send back a signed response (either "this is the key I have" or "I don't have a key", signed either way with the known user's private key so you can verify it against the public key you already hold). Then if all your friends who have the new user's key agree on what it is, it's probably right. If nobody has it, you get encouraged to verify it manually (i.e. in person). And if they don't all agree you get the nasty warnings about something fishy going on.

>imagine a virus that automatically generates and adds a PGP key to mail clients before sending to everyone in the address book, just to make it more likely to pass spam filters

That seems like a low-effectiveness method of sending spam, given that the public key is uniquely identifying and tied to a sender address, so once the spam filter realizes everyone is marking all those messages as spam it can just blacklist everything sent using that key. Also, how is it different from existing PGP other than that more people would be using it? If you've infected a machine with a virus you can do whatever you want to it. You could just write the spam directly to the user's inbox, or send it out from their own address and sign it with their actual key. Compromised machine = you're screwed.

>the accompanying glut of horrible passwords used (or duplication from easily gleaned passwords)

This isn't even necessary for a virus. The problem with viruses is that they can stay resident until you type your password and then it doesn't matter how hard the password was.


> It's not that I forgot that part, it's that that's the hard part. That's the reason PGP is hard to use: They try to make sure you do it securely. And you can't have some third party do that part for you without trusting them, and the whole idea is not to have to trust any third parties. What public servers are you going to use here? Does each email user have to run their own server? Unless you have a single central server, how do you know which server corresponds to which user?

I'm certainly not going to argue with this, it's the basic gist of my original reply. :)

> Automating web of trust could be interesting though. Imagine you get an email from a new user that you've never received any email from before. There is some new P2P network where if you have someone's public key, you ask that user whether they know the new user's public key, and they send back a signed response (either "this is the key I have" or "I don't have a key", signed either way with the known user's public key). Then if all your friends who have the new user's key agree on what it is, it's probably right. If nobody has it, you get encouraged to verify it manually (i.e. in person). And if they don't all agree you get the nasty warnings about something fishy going on.

Exactly. This is similar to what I was envisioning when I was talking about confidence levels. Having different levels such as "I have personally verified (signed)" and "I know of and reasonably trust this key based on people I trust", and making that public in some manner, would allow a slew of interesting techniques for verifying public keys to different assurance levels.

Come to think of it, it sounds like what we need is for a social network to adopt this. Google+ with its real-name requirements might make a good fit, but maybe real names aren't what we care about; maybe we just care about email. Alternatively, some alterations to Diaspora might work out well (I know little about it other than that it's a roll-your-own social network that I think can work as a node of a larger network).

> That seems like a low-effectiveness method of sending spam, given that the public key is uniquely identifying and tied to a sender address, so once the spam filter realizes everyone is marking all those messages as spam it can just blacklist everything sent using that key. Also, how is it different from existing PGP other than that more people would be using it? If you've infected a machine with a virus you can do whatever you want to it. You could just write the spam directly to the user's inbox, or send it out from their own address and sign it with their actual key. Compromised machine = you're screwed.

I'm imagining a virus that generates one on the infected system for the address the mail client is configured for. That could be a LOT of new keys.

The problem is the thousands or millions of bogus keys that start being sent from addresses that previously didn't have ANY key associated with them (or did, but not through that machine), clogging the web of trust if they make it on there. If they are automatically added to mail client/PGP systems on the recipients' end, that's a lot of bogus keys in users' mail clients (even if it's just the 10% that arrive before spam filters react). If clients end up syncing their known keys to some central repo at some point, that's a LOT of bad data. I can imagine a case where someone generates a legitimate key and gets it personally signed by a few people, only to find that it's "verified" by hundreds of people on some public servers.

As for low-effectiveness, if it evades more filters by just a few percent, at the scales spam is sent that's a BIG deal.

> This isn't even necessary for a virus. The problem with viruses is that they can stay resident until you type your password and then it doesn't matter how hard the password was.

True. I imagine the really fast-spreading and pervasive viruses need to be quicker than that though, but I have nothing other than a hunch to base that on.


I somewhat agree with the "perfect is the enemy of the good" issue here, and would love to see encryption more broadly employed for person-to-person communications, and think your plan is pretty reasonable.

That said, I'm still somewhat worried about MITM in the scenarios you describe above - should probably expose "you haven't verified this sender, it could be spoofed" somewhere, but somewhere icon/color-y rather than scary-error-message-y. Details, though...


It's sad because the technology to implement PGP invisibly, when possible, fully exists. We were planning on drafting a proposal for behind-the-scenes PGP for email but were shocked to discover it's been fully architected, designed, and fleshed out (RFC 2538, RFC 4398), just never implemented.

EDIT:

Can't reply, but anyway, DNSSEC isn't really needed. That's just icing on the cake. What matters is a DNS record that specifies either the keyserver or the actual public keys themselves. A DNS TXT record or a custom PGP record will do just fine.
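For illustration only, here's what a client-side lookup against such a record might look like; the TXT record name below is made up, while CERT is the record type RFC 4398 actually defines for this purpose:

   # Fetch a hypothetical TXT record carrying a keyserver URL or key fingerprint
   dig +short TXT _pgpkey.example.com
   # Or query the CERT record type defined by RFC 4398
   dig +short CERT example.com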


Interesting...but seems to require DNSSEC, which is presumably why it hasn't been implemented (because DNSSEC hasn't been widely implemented).


for future reference, the 'reply' link is hidden for a while in deeper threads, basically to prevent people from fighting too much, too quickly :) if you click the 'link' link, you can still reply.


Fundamentally, I agree. I have my entire team set up with PGP keys.

However, there's a huge training burden that needs to be overcome. People don't understand what it is, how it works, how to manage keys, what they can do securely or not, etc.

People complain about the tools, which I agree are insufficient right now for most people's general use—even with e.g. the very nice https://gpgtools.org —but I'm not convinced that's the primary hurdle.

Awareness of the technology is very low. I think it should be taught in public schools, and not as a specialized course but as part of general "life skills". I would love it if someone with a huge, visible platform like Google or Apple would push PGP integration into their tools for general use. Like, why isn't my PGP pubkey a normal field in my G+ profile? Or Facebook?

There are obvious answers there, of course, beyond the obscurity of the technology: if people start PGP-encrypting everything, those content platforms would lose value.

It would be like visiting Google's house to have a private conversation with a friend that Google couldn't hear. You can do it, but what do they get out of it anymore?

I'm not sure what the solutions are here, but I hope someone solves them.


Really. If I want to send encrypted messages to anyone who is not a developer or sysadmin, I have zero chance. Need to negotiate with a recruitment agent over gpg - ha. Want to receive a job offer that's encrypted? Yup, get them all the time.

What's needed is gpg in JavaScript, or a platform-specific text input / gpg converter (you know, a magnifying glass that hovers over encrypted text and shows the plain text).

Dunno if anyone has made one but till then we have no hope and Bob Hope


http://www.mailvelope.com/

"OpenPGP encryption for Webmail"



It says "aims to" - I could not see a portion where it said "is working". Do you know it's status?

Looks good though; with a dependence on the browser's JavaScript VM it must be a security nightmare.


PGP is a UX travesty. Using clients that have OTR encryption built in is much better. It also gives you forward secrecy, which is missing from PGP.

http://privacy-pc.com/news/changing-threats-to-privacy-moxie... https://www.youtube.com/watch?v=-JKqJ7gt5yk


I doubt I could persuade my friends and family to change email clients even if there were a decent email client that provided encryption. They're just not that concerned about the government being able to read their mail and would probably be more worried that I was - as it would be seen - becoming paranoid.

Nothing that requires your average user to open a command prompt seems likely to be used outside of the most security conscious of circles.


PGP and GPG don't require you to open a command prompt. The issue is that there aren't many (or, some would argue, any) good GUIs.


The GUIs for OS X are quite good, really.


In practice, OTR is more useful than PGP and widely more deployed.

HTTPS is less secure in practice (due to just being transport security and lots of intermediaries to attack), but still decent, and fairly widely used.

SSH is still the only cryptographic system which was so well implemented (in all ways, not just the cryptography) that it ended up taking over the entire market, displacing the non-secure options.

People should be building the next ssh, not the next PGP.


>People should be building the next ssh, not the next PGP.

Yeah. I'd like something that simple and secure (where users can use the same private key across sites) for web browsers. X509 client authentication, as far as I can tell, doesn't cut the mustard. Among other problems, it requires trusted certificate authorities, which causes... problems.


there are good reasons to want a unique key per site (avoiding linking across sites), and most of the problems with x509 (aside from it just being lame) are due to UI/UX in browsers, especially legacy desktop browsers, and lack of good support elsewhere. You could even get away without CAs (or where site = CA) for the client cert problem, too.

It's really not an open crypto research problem; it's design and software engineering and entrepreneurship/marketing.


there are some out of band issues. SMS, email, etc are "push" which means I can't stop people from sending me sensitive things unencrypted. Centralized platforms like facebook, while technically opt-in, are so deeply rooted in the younger generation's lives that opting out has unacceptable social consequences for many people.


FWIW, Julian Assange doesn't use email, and he says encrypted email is possibly even worse. http://wikileaks.org/Transcript-Meeting-Assange-Schmidt.html...


"Maybe the NSA has it cracked?" I'm actually curious, why do you think they may have?


They almost certainly don't, I'm sure he was just saying that since that's always the accusation amongst the paranoid with consumer encryption.


The real point is that even if they have, they aren't going to tip their hand to the public (and other countries) by lending the tech to the FBI for some random violation of federal law.


You should "Show HN" how you managed to get your grandmother to use pgp.


The Direct Project is solving this problem for the medical field in America right now. Hopefully the technology will trickle outwards.


Can you provide a little more info? Is this just a program you can run?


Yep, GNU Privacy Guard (a.k.a. GnuPG a.k.a. GPG): http://gnupg.org/


This is slightly off topic, but I googled "PGP" as I do when I want to understand something specific better, and found that PGP stands for "Pretty Good Privacy." Ha.


I had a PGP key available on my personal website and on the MIT key server from 1994 through 2008. I also had Mutt configured to handle PGP essentially automatically.

I killed it because in all that time, I received no encrypted messages. The only value I received was that it verified the signatures on a few mailing list posts.

I love the idea of ubiquitous public key encryption, but I think it just has a few too many moving pieces to get traction in its current incarnation.


There is a certain logic to what they're saying, after all. You shared this information with Facebook or Twitter or Google. It's no longer in your private possession, because of that sharing, so if those companies feel like giving your messages to the DOJ, it's not really a "violation" of your 4th amendment rights, because you gave those up when you gave your "private" communications to those companies.

I'm not saying anything about how good or bad that is for American society, but I can absolutely see an interpretation of the 4th amendment where information you share with a third party doesn't get protected as if it were private. There may be other laws that protect your emails and tweets and whatnot, but the 4th amendment specifically may not be the best place to look for that protection.


The postal service, UPS, DHL, FedEx, the phone company, etc. These are all 3rd party services that people use to send private things. People have an expectation of privacy when they send a message to someone via (e.g.) Facebook Chat. Some might argue that it's an unreasonable expectation of privacy if you understand the technology behind it, but I would argue that the majority of the population doesn't understand the way the technology works (which would imply that the majority of the population expects that the communications will remain private).


The postal service is the only one of those carriers you listed that actually has legal backing for that privacy expectation, though.

Regarding phones, those also have specific laws attempting to protect wireless (cellular) communications from eavesdropping (with the exception of legal wiretaps, which cellular providers are required to be able to support).

Likewise for pagers, those have specific laws making it illegal to intercept the communications sent to pagers.

For normal phones, a specific law had to be passed to require warrants for interception of communication (the "Wiretap Statute" from 1968, later extended by the Electronic Communications Privacy Act of 1986).

So the point to all of this is that those privacy "rights" that you talk about are not 4th Amendment rights at all, they are protections granted by specific legal action on the part of Congress. Had that legal action not been taken then you'd be right back in this same "interesting 4th Amendment question" that we're talking about for this.

The bottom line is that if you're willing to give info to a 3rd party unencrypted you need to assume the government can be given access to the same information unless there are specific laws forbidding it.


The anti-wiretapping laws are needed to prevent non-government entities from violating our security. The Fourth Amendment already clarifies that government must not violate the security of ourselves and our communications (papers), without a warrant naming the person being searched, and what is being sought.


Yes, but before those anti-wiretapping laws the phone company could theoretically do ad-hoc wiretapping of their own and voluntarily divulge that to the government. The only limit would have been that the phone company couldn't have wiretapped at the direct behest of the government since the Supreme Court had already ruled that the 4th Amendment applied [to the government] where the person had a reasonable expectation of privacy.

The statute fixed that privacy issue and then put requirements on when wiretaps could be used by the government. Before this law and the Supreme Court decision the government was very... lax regarding warrants for wiretaps.

http://www.it.ojp.gov/default.aspx?area=privacy&page=128...


This is an interesting point. What about a safe deposit box at a bank? Can the government search it without a warrant?


You're mixing things up a bit. The government can't search things without a warrant where there is a reasonable expectation of privacy. This applies to even things like DHL packages, the government can't tell the DHL delivery guy to hand over a package to them.

On the other hand, it's not a privacy violation for the DHL guy to voluntarily turn over material to the government and the government to use that as evidence against you. Think of the stereotypical "guy at the photo lab" who notices that someone dropped off child porn to be developed, you (sadly) used to see that in the news yearly, and no one thought anything of that being turned over to the government.

In some cases third-party services have privacy protections added by law. E.g. the cellular communications interception thing, it used to be very easy indeed to intercept a cell phone call just by sticking up an antenna. In 1994 the law was changed to make it illegal to intercept those calls in most scenarios, and require search warrants even for that "public transmission" of communications (but the law also required cell phone companies to make it possible to wiretap, the "CALEA" provisions).

Given how strictly regulated banks are I would be very surprised if there are not similar laws providing some semblance of legal protection to the contents of safe deposit boxes, but I'm not sure and don't have time to Google it.


There is no protection for UPS, DHL, FedEx, etc. The only one for which there is protection is the Postal Service, and that is because it is an organ of the government.

Also: the government could always ask your friend at the other end about what you said, not just Facebook.

The intention of the founders re: the 4th amendment was not to protect communications between people, it was to protect people from personal searches. Read the text: "The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures."

Persons, houses, papers, and effects? What do all those things have in common? They are personal to you. You'd feel violated if someone else entered your house or rifled through your effects. That same reasoning doesn't apply to something like a Facebook message that any of a number of people at Facebook could read. Maybe you assume that they wouldn't do that, but they certainly can.

Maybe we need a law to keep the government from getting Facebook messages without a warrant. But that's a separate thing from the Constitutional issue. Personally, I don't even think we need such a law. If you're okay with Facebook or Google looking at your communications, you should be okay with the government doing so.


While I find this line of thought personally abhorrent, I give you my glad upvote for helping me to understand what must be the reasoning of the government.

If you could help me follow the idea further: In my mind, a communication is mine until I send it, at which point it is my property in the trust of another, until delivered to the person I'm communicating with, at which point it becomes their property.

I know legal reasoning doesn't always follow what I may consider to be common sense- can you shine any light on what I'm missing?

(Prefer not to focus on the DHL/UPS etc side of things, because I understand their right to snoop is a part of their TOS.)


The difference between the legal reasoning and your reasoning is the "property in trust" part. Legally, once you hand your communication over to Google, it's their property. If they choose to hand it over to the government, that's their business.

Think of this hypothetical. Say Google reads your g-mail and tells people you ordered green shirts. Do you have a lawsuit against Google for disclosing your private information? I doubt it. If the information is sensitive for a different reason (e.g. it's about your herpes diagnosis), you might have a suit, but then you'd also have a suit against a friend to whom you told that information in confidence. I.e. it's a suit based on the nature of the sensitive information, not the disclosure by itself.

The idea of this "information trustee" doesn't really exist in the law. Google is treated no differently than a guy on the street you tell to relay a message to your friend. And as for the 4th amendment, as I said, wiretapping aside it's more of a "personal search" protection than a "communication protection." This is a good summary of the relevant precedent: http://www.cs.duke.edu/~chase/cps49s/carnivore-history.html


  | Legally, once you hand your communication over
  | to Google, it's their property.
If I put some of my belongings in a storage unit, I don't lose ownership of them. If I accidentally leave (e.g.) my sunglasses at a restaurant, the restaurant doesn't own them now. If I give a package to DHL, they can't decide that they don't want to deliver it, and it's theirs now.


There is in the law the idea of trusteeship of personal property (the law of bailments). There is no such analogue for information.


I'm arguing that the analogue should be there. When I mail a letter to someone it's just a physical manifestation of the information. The fact that it's physical is a by-product of the transfer medium. I don't feel like rights should be lost just because we can represent the content of that letter as an email, and send it over a 'series of tubes.'

If I store files on Dropbox, I am arguably paying for them to store the files for me, not giving them to Dropbox to do with what they like. It should not be treated any differently than a rented storage space in the physical world (regardless of implementation details like de-duplication).


Here is where you get into the tricky difference between what makes sense, and legal definitions. You may very well be right- I certainly feel you are. But that line of reasoning must be established by legal means.

It's like right now you have the idea, but haven't written the code.


But if you leave your sunglasses in a restaurant, and the government wants to take fingerprints off of them, they can.


The statement that once (e.g.) Google has the message it was supposed to be passing on my behalf, it's now theirs to do with as they please, is a much bolder statement than the original discussion of whether or not they should be allowed to hand it over to the government.


Not really. That is nothing more than a civil contract issue between two parties. And to have a valid contract there must be an offer, acceptance, and consideration. Not just the promise of future consideration either, so saying that Google could make money off of you later (by e.g. directed advertising) wouldn't make an enforceable contract.

But as it stands things are even more lopsided towards Google. Go check their terms of service, I'd bet they have a half dozen different ways of indemnifying themselves from liability in the event they don't deliver that email.


I didn't raise a Constitutional issue. I'm pretty sure that "reasonable expectation of privacy" is found nowhere in the Constitution.

  | If you're okay with Facebook or Google looking at
  | your communications, you should be okay with the
  | government doing so.
- This treads awfully close to 'if you have nothing to hide, then you have nothing to fear.'

- Government employees wield much more power than Facebook employees.

- If it was common knowledge that all Google/Facebook employees had unfettered access to all communications going through their systems, then people would treat the system differently. As it stands, employees should only be looking at specific communications to deal with work functions (investigate complaints from users, look at a message that caused a back-end system to blow up, etc).


> This treads awfully close to 'if you have nothing to hide, then you have nothing to fear.'

But you're not hiding anything. You're transmitting it in clear text through Google/Facebook's services. You're not worried about the right to hide your information, you're worried about the right to hide it from the government specifically. That's the key distinction. I do have things to hide--I don't share those things with either Google or the government, and those are the things I want protected. If I choose to share something with Google, I'm okay with the government having access to it too.

> Government employees wield much more power than Facebook employees.

In my mind, the government has a lot more power to screw me over, but private industry has a lot more incentive. I'm much more worried about credit ratings agencies, insurance companies, potential employers, etc, having access to my information than I am about the NSA looking at my information.

> As it stands, employees should only be looking at specific communications to deal with work functions

That's the same for government employees. Maybe the difference between us is that you implicitly trust the employees of Facebook not to do stuff like this, and don't trust government employees, while I feel the opposite way.


  | But you're not hiding anything. You're
  | transmitting it in clear text through
  | Google/Facebook's services. You're not worried
  | about the right to hide your information, you're
  | worried about the right to hide it against the
  | government specifically.
That's not true. If I send a person-to-person message that goes through Facebook or Google, am I ok with that information broadcast across the globe to everyone? Probably not.

  | If I choose to share something with Google, I'm
  | okay with the government having access to it too.
Are you ok with the government having unfettered, unrestricted access with little or no oversight? That doesn't make any sense to me. Even if you don't care about the information that Google/Facebook have on you, do you really want government agencies to get used to accessing information without a warrant? It only incentivizes them to push for more access to information without a warrant.

  | Maybe the difference between us is that you
  | implicitly trust the employees of Facebook not
  | to do stuff like this, and don't trust government
  | employees, while I feel the opposite way.
Facebook is more accountable than the government is. To be fair, there are bad actors in all places, but the FBI/US Attorney General has more incentive to pull people into court and send them to jail than Facebook employees do[1]. FBI agents 'doing their job' are more of a risk than Facebook employees doing their job, because the FBI agents' jobs are about arresting people and charging them with crimes.

When was the last time that an FBI agent was fired for overstepping their authority? On the other hand, I feel more trust that Facebook/Google would fire an employee that overstepped their authority.

[1] And they are always testing the bounds of how they can apply the law. See the case of trying to charge the mother that drove her daughter's classmate to suicide, where they tried to charge her with 'hacking' because she violated the Terms of Service of MySpace.


> Facebook is more accountable than the government is.

How do you figure that?

> To be fair, there are bad actors in all places, but the FBI/US Attorney General has more incentive to pull people into court, and send them to jail than Facebook employees do.

The difference is that only a small fraction of people do things that cause the federal government to become interested in putting them in jail. Most people do things that arouse the interest of private industry. See, e.g., the credit ratings agency mafia. Those companies would have a field day if they had access to Facebook's information. See also all the hiring managers that would have to have access to people's social networking information in order to blacklist them for jobs. Or insurance companies looking for any reason to drop people from healthcare plans, etc.

How much you worry about something is generally proportional to the product of how likely that something is to happen and how bad that thing would be if it did happen. It's exceedingly unlikely that you'll be at the receiving end of a federal prosecution, even if that would be a really bad thing. But having trouble with your credit, having trouble getting a job, being dropped from your health insurance--all of these things are much more likely, and they can be pretty bad in and of themselves. Yeah, sure, the government can bankrupt me defending a prosecution, but then again so can my HMO dropping me from my health plan if I get sick.

As I said, I imagine this is a matter of outlook. I don't see myself as a revolutionary who might get railroaded by the government for fighting for a just cause. I do see myself as a guy with a wife and a kid looking to buy a house some day, worrying about health care costs for my aging parents, etc. Given that, the ways private industry can screw me over are a lot more real to me.


    I'm pretty sure that "reasonable expectation
    of privacy" is found no where in the
    Constitution.
No, but it was established in Katz v. United States[1] as one part of a two-part test of whether a search is constitutional under the Fourth Amendment.

[1] http://en.wikipedia.org/wiki/Katz_v._United_States


If you agree to a privacy policy that says "we will give your data to the DOJ any time they ask for it" you don't have a reasonable expectation of privacy either.


If you sign an employee agreement that says, "we own all of IP you produce past, present and future," then it's obviously enforceable, right?


That's actually much more enforceable than a drive-by EULA.


Do you actually know - and have the appropriate backing from precedent - or are you just guessing? One thing I've found out about contract law is that it doesn't look anything like you'd think it does just from reading the things you sign, so it seems to me that it's very easy to make mistakes in this area without realising them if you just sort of read it in a tourist-y fashion from time to time.


Are you sure about that?


Yeah, because you would actually read & physically sign an employment agreement, and that agreement actually has some bearing on how you relate to your job. OTOH an EULA is not read by most people, and their enforceability has been challenged in court. https://en.wikipedia.org/wiki/Software_license_agreement#Enf...


You can't sign your rights away, and "we own all of IP you produce past, present and future," is a forsaking of rights that would naturally be exercisable by the person signing the contract.

Employment agreements are also subject to contract law concepts such as consideration[1], which would come into play when someone stops and asks "what's in it for me?" A single job as described at the beginning of the relationship through the employment agreement is probably not going to be seen as valuable enough for someone to have legitimately signed away his future productivity.

1. http://en.wikipedia.org/wiki/Consideration


Your interpretation of "consideration" is, according to the article you linked, the minority position. https://en.wikipedia.org/wiki/Consideration#Monetary_value_o...

I'm not sure what you mean by "you can't sign your rights away". IP is of course transferable.


> IP is of course transferable.

Not without a signature on a document itemizing the IP being transferred.


The phone company is the best example because with a couple of alligator clips they could easily listen in on phone conversations. It's understood that listening in is a rare occurrence, but we are ok with it happening.

Same goes for FB, Google and other providers. They have the ability to read your mail and occasionally do, just like the phone company.

Bonus example: voice mail. It's just like email since it's a message sitting on the service provider's computer. Can the Feds just listen to those messages?


I guess what people don't have a problem with there is that at least someone is physically listening in on a phone line - it requires manpower to do and thus is very deliberate.

It's when we are being data mined by the government there's a problem. I think we all don't mind being data mined for advertising (within reason) but when we are being watched to be 'kept in line' that's where there's a problem. At that point it's more like 'Minority Report' where you are being scrutinized before you even do anything.


I don't like being data mined, full stop. Advertisers don't get a free pass; I would much rather pay for any service than have companies I don't even know the names of compile mile-long paper trails of my online (and possibly offline) activities.


The problem here is not the level of access they get to our information. The problem is how they are allowed to use that access. The 4th Amendment doesn't really prevent the Feds from snooping on anything they want to, it just prevents them from acting on any of that information.


Huh? If the FBI busts into someone's house without a warrant there are going to be more repercussions than just "we can't use this information in court."


Yes, in (US) legal circles this is known as the "Third Party doctrine". Justice Sotomayor has said in at least one opinion that if we want to re-vamp privacy laws, the third party doctrine is what needs to be reversed, and that she may be willing to do so.


This point has been debated for quite a while: entire conferences have been held on this topic! Perhaps 20 years ago you would have found plenty of people who agree with you. But the emerging legal consensus, in academia, in Congress, and in the courts, is contrary to your view.

Excerpt from the 6th Circuit's Warshak opinion: "Since the advent of e-mail, the telephone call and the letter have waned in importance, and an explosion of Internet-based communication has taken place. People are now able to send sensitive and intimate information, instantaneously, to friends, family, and colleagues half a world away... By obtaining access to someone's e-mail, government agents gain the ability to peer deeply into his activities."


If you want to follow that line of logic then you still have an issue with Facebook, Twitter, Google, et al being forced (via national security letters, and through other means) to turn over data and gag orders issued to prevent them from disclosing it.


You share your telephone conversations with the phone company, so why did they need permission to set up a wiretap in the past? You shared your personal and business possessions with your housekeeper, janitor, plumber and other maintenance staff, so why do they need a search warrant for your home or office? It's not a logical argument, it's a ludicrous argument that sounds logical because the Internet has only existed for most people for 15 years or so and social norms are still being worked out.


The government needs a search warrant, not your housekeeper. If your housekeeper then tells the government of your "crimes" that's whistleblowing, not government invasion of your privacy.

As for the telephone conversations, read this thread a bit more... the government didn't need permission to wiretap until a specific law was passed in 1934 that required it.


This is more akin to a safety deposit box. The bank doesn't necessarily know what's in there. It's not the same as compelling testimony from a housekeeper.

Sure, Facebook/Google have more access than a bank has to the safety deposit box, but the likelihood that anyone at Google or Facebook has actual knowledge of the information the government is looking for is slim to none.

Maybe a better analogy would be a warehouse. Just because things are stored in the warehouse, and employees have access, doesn't mean that the employees have rifled through them.


I'm really not that interested in what the law says. The law often says horrible, stupid things. I was criticizing the OP's assertion that the policy "made sense". It does not make sense unless you also believe that it is acceptable for the government to snoop into your life through other means, using situations in which you have trusted a third party against you.


Well that's just it: the earlier laws increasing privacy controls were passed in response to the realization that the law as it stood at the time permitted privacy violations that people found unacceptable.

The law should conform to what's best for society but you need to understand where the law is not best for society before you can understand where to change it.

The whole reason OP gave that the DOJ policy "makes sense" is because there's not actually 4th Amendment protection for this case. And he's right, so don't shoot the messenger, he's telling you what needs to be fixed in order to restore privacy controls closer to what we envision as ideal.

And that is important because a government that can affect things around the world does not run on "expectation", it runs on the law. You can complain until you're blue in the face about how the government is violating your expectation of privacy.... or you can fix the law, as was done in 1934, 1968, 1986, and many more times in between.


I think we are missing the fundamental issue here.

What do "we the people" want? Do we want the government to seek out and exploit every possible loophole it can find and to make up excuses to eavesdrop on us?

Or do we want to make sure that gentlemen don't read each other's letters?

I would very much prefer the latter. State the law as "if you are a public servant, don't listen to the conversation of people - don't even think about or try to listen - unless you have a very specific reason to do otherwise and then only if you must".

"They", the government, are the people having guns and the authority to lock you up. Once we get used to the idea that "they are listening" whenever they want ... we will start to self censor what we say and very shortly thereafter what we do and think. You don't want to be the one sticking out, you don't want to catch their attention.

And that is when we have killed creativity, stopped the future development of society, and made the entrepreneurs walk in line at the same pace as everyone else.

The quest for privacy is not about protecting the fact that I like to watch videos of midgets dressed up in leather underwear, or protecting my friend who is ordering weed online.

It is about the future of our society.


Self-defence (encryption) is the only real protection from government snooping on communications.

You can't rely on policy. It moves too slowly and there are always loopholes.

It's a shame there are no mainstream or accessible solutions to encrypt email while using services like Gmail, although they exist for IM.


Yep. SSL is now viewed as a good best practice for data in transit, to the point that Apple was criticized for not enabling it for the App Store (http://news.cnet.com/8301-13579_3-57573334-37/).

But the real issue, as you indicate, is stored data. Full Gmail encryption would presumably be done in conjunction with the client via the browser. Because the server wouldn't have access to the plaintext, search becomes tricky and contextual ads problematic. One solution would be for Google to make fully encrypted Gmail a paid service.


Google would never add encryption to gmail. As a business decision it's pretty atrocious.


Don't you mean: "Google would never add encryption for stored messages to unpaid Gmail accounts." Why would it necessarily be an "atrocious" business decision to charge extra for encrypted accounts?

Google's happy to charge when the market demand exists for a paid service (GAE and GAB come to mind). It's true that the market demand may be insufficient here to make it worthwhile, but on the other hand privacy-sensitive companies and agencies might be more willing to switch to paid Gmail if their email were seen as more private than it is today.


It's also unlikely that Google could do government-proof mail any better than Hushmail. They were compelled to fake the Java client in order to obtain a user's keys; why couldn't Google be pressured to do the same?


I remember writing about Hushmail for Wired over 13 years ago (http://www.wired.com/techbiz/media/news/2000/03/34610?curren...).

Don't you think technology has advanced a tiny bit since then? I'm thinking of Ajax, CSS, XHTML, XMLHttpRequest, DOM, JSON, HTML5, Chrome, Firefox, etc. The tools available today (and the platforms, and the machines) are far more capable.

Your second point is more interesting. Hushmail was ahead of its time in ~2000 when turning to client-side encryption via Java. But nobody really loves Java in the browser. So about five years later the company began to offer a webmail service with server-side encryption. Well, duh. That's vulnerable to a court order allowing the intercept of someone's passphrase the next time they log on. Which is what happened.

So you're right to say that Hushmail was vulnerable. But that was because of their server-side decision, and is unrelated to the vulnerability of properly implemented client-side encryption.


The issue is that the client can be bugged. A provider can have a solid client-side encryption solution, but silently switch the client distributed to users of interest to a seemingly identical version which also phones home with their passwords.

It's possible to verify that your code was signed with your provider's certificate, but how do you know it's the right client? How do you know it's the same one that's been peer-reviewed? We'd need a bulletproof way for the browser to check the hash of JavaScript before executing it. Unless you're going to copy/paste and MD5 the source for GMail every time you check it.
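As a rough illustration of the manual version of that check (the URL and the published reference digest are placeholders, and this is exactly the chore described above as impractical):

   # Fetch the served client code and compute its digest...
   curl -s https://mail.example.com/client.js | sha256sum
   # ...then compare it by hand against a digest published by independent reviewers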

The tools to be sure of the executable you're running really only exist for desktop software right now.


Compelled by a warrant? I think that would be acceptable to most people.


To the kind of people that think encrypted email is important, being "compelled by a warrant" to break the encryption is exactly what they wanted to protect against in the first place.


> To the kind of people that think encrypted email is important

Are you really okay with the police reading your email at any time for any reason?

I would understand your attitude if email was already protected and encryption could only serve to hide from search warrants and wiretap orders, but that's not the case. The government can and does read our email whenever it pleases. How are you okay with this?


I didn't read that tone at all. I think he was just pointing out that the people most likely to have access to mail (outside of Google itself) would be the government, so why bother encrypting at all if a warrant undoes it?


Why did you infer that I was okay with this?


You're implying that the only people who think email encryption is important are criminals (or sketchy enough that there's probable cause).

I'm not a criminal, but I don't want the government reading my email.


Not all warrants issued are for criminals, just suspects, and there's nothing you can do to prevent being a suspect.


Why not? They already have a popular paid version that doesn't have any advertising. It seems like encryption would be a pretty compelling add-on to lure over businesses that are nervous about their data in the cloud.


Search is another one of their features. How would search work if all of the data was encrypted? Pushing search to the client doesn't make sense. Only encrypting emails in transit doesn't make sense because that doesn't get around this issue of the government forcing them to turn over information without a warrant.


Because the average person's data is probably worth far more than they're willing to pay to encrypt it.


They could still do targeted advertising with client-side encryption. You just have some of the words hashed and sent to a third-party web service that compares the hashed words against ad keywords. With the right level of noise you could preserve the security of the communications and still offer targeted advertising.

Or Google could offer ad-free, encrypted Gmail that costs money.

There are plenty of business solutions to this problem, and we will see many of them tried.
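
Very roughly, the hashed-words-plus-noise idea might look like the sketch below. The ad-word list and noise level are made up, and hashing single dictionary words is trivially reversible by brute force, so treat this as an illustration of the scheme rather than a privacy guarantee:

  # Sketch of the "hashed words plus noise" ad-matching idea.
  import hashlib
  import random

  def h(word):
      return hashlib.sha256(word.lower().encode()).hexdigest()

  AD_KEYWORDS = {h(w): w for w in ["camera", "mortgage", "flights", "guitar"]}
  DECOYS = ["zebra", "quartz", "lantern", "orbit", "mosaic", "pepper"]

  def tokens_for_ad_service(message, noise=3):
      # Client side: hash the message's words and mix in decoy hashes.
      tokens = {h(w) for w in message.split()}
      tokens |= {h(w) for w in random.sample(DECOYS, noise)}
      tokens = list(tokens)
      random.shuffle(tokens)   # don't leak word order either
      return tokens

  def match_ads(tokens):
      # Third-party side: intersect hashed tokens with hashed ad keywords.
      return [AD_KEYWORDS[t] for t in tokens if t in AD_KEYWORDS]

  print(match_ads(tokens_for_ad_service("cheap flights to denver next week")))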


Shameless plug for my old roommate's startup, Penango (http://penango.com). Penango uses a browser plugin to implement S/MIME for Gmail and several other popular webmail providers. Your private keys stay on your local machine.


Awesome, thanks for the tip! I've just opened up the page and if the technology looks at all workable, I'll sign-up/purchase.

EDIT: Ah no, no Chrome plugin! I may have to go back to Firefox. I've been using it lately for development and it's come a long way in 2-3 years.

EDIT II: I found a very interesting open-source OpenPGP plugin for Chrome built by a research team: http://gpg4browsers.recurity.com/ (turns out the project was merged into the mymail-crypt project here: http://www.openpgpjs.org/). But I'll be sure to check back with Penango since they have said that a Chrome plugin is coming mid-2013.


Technical solutions might seem "faster" but only for the people at the head of the movement - usually the more technical people. You won't see the majority of the people start adopting those solutions anytime soon.


Let's say, hypothetically of course, that a colleague sent me a password to a server via email (Gmail). Is that password then retrievable without a warrant from my email account because it was "shared" with the transmission medium? Such a broad interpretation would apply to any and all communications where anyone delivers something on your behalf... so literally everything is fair game. I don't see a reason to specifically exclude USPS in such a case.

That's pretty scary.


They would still need a warrant to enter the server. Discovering the password gives them nothing. If they want your server, they can subpoena for the password.


Or, you mean things like all those websites that send you new passwords in plaintext because their security isn't what it should be?


my personal favorites are the ones that send me my current password in plaintext as a form of "recovery"


Now more than ever we need an easy to use PGP solution for the masses.


The DOJ doesn't need a warrant to compel production of email and chat evidence in a lawsuit, just as it doesn't need a warrant to compel production of other kinds of physical evidence from third parties. A subpoena will suffice and always has. People seem to be unaware of how critical subpoenas are to due process, and how long they've been in use.

See: http://en.wikipedia.org/wiki/Stored_Communications_Act


Fair enough. We can't stop it. It's done. It always was. Might as well stop the "shock and horror" thing and just accept it. In the US, both political parties love being able to spy on us, and in the UK we see the exact same thing. No mainstream politicians or law enforcement will ever, ever give it up.

So, I have to ask, again: why the hell is anyone who wants privacy from the authorities, who we know don't have our best interests at heart, still using electronic communications? It never was, isn't, and never will be secure from them. If nothing else, almost every "terrorist" trial features some electronic communications evidence. Doesn't that tell us enough?

Look, just ask yourself why organisations like MI6 still use old-school dead drops in Moscow.

Tor? It's got government fingerprints all over it. I trust it about as far as I can throw it.


I'm confused as to why the DOJ is arguing over whether it needs a warrant or not for emails and Facebook chats and that sort of thing. This is the DOJ that we're talking about, and surely, even if a warrant is required for email, they would be able to get that warrant regardless.

A warrant is simply permission from a judge to obtain or enter certain private property, right? Wouldn't the DOJ be able to find a judge who will grant a warrant? I don't know anything about judicial procedure, so I'm interested in learning how this all works.


It's about bureaucracy in a lot of cases. It may require senior agent signoff for the warrant, etc.


We're back to administrations saying "stop us if you can".


A hacker could do some real damage with a little fake paperwork. Can't wait for it to happen, and believe me, it will happen eventually. The police are the hub of all this information, and if they hand it around without any concern, why not mine them? The backlash might be a bit too much to handle, though, if he gets caught.


Is IRC safe? I have a channel as my main form of communication with friends. Some people do local logging, but I know who everyone is. Ironically, I plan to use IRCcloud or IRCanywhere. Self-hosting of online tools should be made much easier. I plan to self-host email, but it seems like a pain.


> IRC safe?

Definitely not. It has always been a public communications channel. I don't think anyone ever had any privacy expectations regarding IRC.


> Is IRC safe?

I am pretty sure it's not. One day there was a news story on Drudge Report: http://www.drudgereportarchives.com/data/2005/05/10/20050510... ("feds investigate huge computer attack; worldwide hunt for 'stakkato'")

As a joke, my roommate at university logged into an IRC channel with that nickname and said:

  [01:59.16] * Stakkato (tricky_t@128.42.86.9) has joined #C++
  [01:59.17] * ChanServ sets mode: +o Stakkato
  [01:59.21] <Stakkato> look i made drudgereport headlines!
  [01:59.26] <Stakkato> http://www.drudgereport.com
  [02:01.09] * Stakkato (tricky_t@128.42.86.9) Quit (Quit: )
It was #C++ on DALnet, a small channel of mostly regular members, in 2005. Fast forward a while -- I don't recall how long, maybe a few weeks or months -- and my friend was contacted by the FBI, specifically a member of the FBI Houston Cyber Task Force (Houston being our city of residence at the time). The investigator began asking very vague, obscure questions. Eventually my friend and I pieced together the subject of the FBI's line of inquiry: that specific IRC conversation. My roommate was completely up front with them about the IRC joke, and that was the end of it. I still have copies of the email conversations from @ic.fbi.gov, where some correspondence took place.

I suppose there is a chance that an informant reported the joke to the FBI, but due to the specifics of the situation, I think it is likely that the text conversation above was caught in a general FBI dragnet of some kind (IRC server, ISP, etc.) and logged for eventual investigation. It did not seem to be a serious line of investigation by the FBI - more of a "follow all leads" situation. Someone had run a 'grep' for 'stakkato' and my friend's IP address showed up.

That was the day it became clear to me that everything transiting the Internet in plaintext is probably available to the FBI. At the time it was shocking; even though the conversation happened over a public network, it was surprising that it was actually logged and later found. I hesitate to share this story, but I hope it illustrates in stark relief the probable capabilities of incentivized investigators. Keep in mind this was 2005 - investigative capabilities have surely grown since then.


If you're worried about group-chat security, I suggest http://silcnet.org/ , especially if it's just you and your friends. SILC provides client-to-client trust, meaning as long as you trust the protocol you can be assured your communications aren't being watched even if the server or hosting provider is compromised.

If you self-host IRC and use SSL you'll (probably) at least have the luxury of knowing when someone initiates a proceeding to acquire logs, although there's still the risk of someone going to your hosting provider or datacenter and compromising your server physically.

A hosted IRC service seems like a bad idea overall; in addition to the Justice Department's ability to compel them to watch you (just like with Facebook Chat), there's no way for you to know if they're compromised by another third party.
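
If you do go the self-hosted route, the TLS part is at least easy from the client side; something like the following works with nothing but Python's standard library (host, port, nick, and channel are placeholders, and it only protects the hop to your own server, not whatever the server logs):

  # Minimal IRC-over-TLS connection using only the standard library.
  import socket
  import ssl

  HOST, PORT = "irc.example.org", 6697   # 6697 is the conventional TLS port
  NICK, CHANNEL = "alice", "#friends"

  context = ssl.create_default_context()          # verifies the server cert
  sock = context.wrap_socket(socket.create_connection((HOST, PORT)),
                             server_hostname=HOST)

  def send(line):
      sock.sendall((line + "\r\n").encode())

  send("NICK " + NICK)
  send("USER %s 0 * :%s" % (NICK, NICK))
  send("JOIN " + CHANNEL)
  print(sock.recv(4096).decode(errors="replace"))  # server's greeting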


>A hosted IRC service seems like a bad idea overall

I understand, but it's a lot more convenient for my less tech-savvy friends than a bouncer, I would think.


If you encrypt your email, you'll just raise "bad guy" flags at the traffic analysis level, even before content itself is checked.

Off the top of my head, I can't think of a lever to get enough people encrypting at once so as to overcome that disadvantage.


Use TorChat. I was surprised how easy it is to use, and it actually has the best IM user interface I've seen so far: a contacts list, a chat window on double click, and send/receive file. That's it - nothing else.

On the cons side, it takes a while to connect to the Tor network, but once started it is quite fast.

I am not saying the government should be allowed to violate our rights, but it is much easier to assume that anything on GTalk/Facebook/Gmail/etc. is within reach of the government and do yourself a favor: use P2P encryption.


Sites like http://flammo.com/ can send private messages that the DOJ cannot see, using end-to-end asymmetric encryption.
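
I have no idea how that particular site implements it, but generic end-to-end asymmetric encryption looks something like this sketch with the PyNaCl library, where only the recipient's private key (which the service never sees) can recover the message:

  # Generic end-to-end asymmetric encryption with PyNaCl (pip install pynacl).
  # Not a description of any particular site's internals.
  from nacl.public import PrivateKey, SealedBox

  # Recipient generates a keypair; only the public half is ever shared.
  recipient_key = PrivateKey.generate()

  # Sender encrypts to the recipient's public key.
  ciphertext = SealedBox(recipient_key.public_key).encrypt(
      b"this never reaches the service in plaintext")

  # The service only relays/stores ciphertext; decryption needs the private key.
  print(SealedBox(recipient_key).decrypt(ciphertext))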


We need to come up with a way to communicate using these 3rd party services securely, without actually allowing them to store the information in plaintext...


Don't trust policy. If you aren't willing to handle encryption yourself, find a medium that will responsibly handle it for you (e.g. lavabit.com).


All the more reason to support projects like BitMessage: http://reddit.com/r/bitmessage


I'd use and support BitMessage but I can't figure out how to get it to run on OS X.


Reduce your attack surface by not leaving years and years of email on your IMAP server. I know it's convenient, but there is a privacy tradeoff.
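
You can even script the pruning; here's a rough sketch with Python's standard imaplib (server, credentials, and the cutoff date are placeholders -- test it on a throwaway folder first):

  # Rough sketch: delete messages older than a cutoff date from an IMAP folder.
  import imaplib

  with imaplib.IMAP4_SSL("imap.example.com") as imap:
      imap.login("user@example.com", "app-password")
      imap.select("INBOX")

      # Everything received before the cutoff (RFC 3501 date format).
      _, data = imap.search(None, "BEFORE", "01-Jan-2012")
      for num in data[0].split():
          imap.store(num, "+FLAGS", "\\Deleted")

      imap.expunge()   # actually remove the flagged messages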


This is one of the reasons I don't have Facebook. Weird how foreign rules can affect your privacy outside Europe.


"Warrants? We don' got no warrants. We don' need no steenkeen warrants!"



